This course covers the fundamentals and applications of generative models, a branch of machine learning focused on learning unknown probability distributions from observed examples. Generative models are used to:

- automatically generate complex data such as images, text, and sound from limited user input
- simulate alternative outcomes that are not observed in the real world
- produce multiple plausible predictions when the input cannot uniquely determine the output
- quantify the uncertainty in a model's predictions
- incorporate domain knowledge into otherwise domain-agnostic algorithms

Both classical approaches and modern techniques developed within the last ten years will be covered, and their applications to areas of artificial intelligence such as computer vision, natural language processing, and speech processing will be highlighted. The goal is to give students a comprehensive understanding of the latest techniques and to bring them up to speed on the current scientific literature. By the end of the course, students will understand when generative models should be applied and how to apply them in the context of their own research.

Topics covered include:
- Prescribed generative models, e.g., latent variable models and variational autoencoders
- Implicit generative models, e.g., generative adversarial networks and implicit maximum likelihood estimation
- Specially parameterized generative models, e.g., autoregressive models and flow-based models
- Applications to the generation of images, text, and audio
The course grade will be based on quizzes, participation, and a final project.