PhD Study Group - Deep Generative Models
January 14 - February 4
Organizer(s):
- Imke Grabe, Miguel González Duque (PhD students at Digital Design)
- René Haas (PhD student at Computer Science)
- Sami Brandt (Associate Professor at Computer Science)
Lecturers:
- Each participant is responsible for presenting one reading and leading the discussion.
Date(s) of the course: 14.01.2022, 21.01.2022, 04.02.2022
Time: 10.00-12.00
Room: 2A05
Course description:
In this study group, we will study the state of the art in deep generative modelling. More specifically, we will discuss six papers (see the reading list below). The focus will be on the technical aspects underlying the models, on comparing different generative models, and on discussing their application in the participants' respective PhD projects. The course covers probabilistic generative models such as variational autoencoders, implicit generative models such as generative adversarial networks, and autoregressive models.
We will meet in three sessions. For each session, participants prepare two readings. Each reading is presented by one participant and then discussed in the group. After the presentation, the presenter receives feedback from the other participants. Readings are allocated in the week the course starts.
Intended learning outcomes:
Having completed the course successfully, PhD students will:
- Be able to analyze, discuss, and reflect on recent publications in the field of generative modelling.
- Be able to compare different generative models and reflect on their application.
- Be able to confidently communicate recent research in the field of generative modelling to peers.
Reading list:
[1] Gulrajani, Ishaan, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. “PixelVAE: A Latent Variable Model for Natural Images,” 2017.
[2] Hoogeboom, Emiel, Didrik Nielsen, Priyank Jaini, Patrick Forré, and Max Welling. “Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions.” ArXiv:2102.05379 [Cs, Stat], October 22, 2021. http://arxiv.org/abs/2102.05379. (Accepted at NeurIPS 2021)
[3] Esser, Patrick, Robin Rombach, and Björn Ommer. 2021. “Taming Transformers for High-Resolution Image Synthesis.” ArXiv:2012.09841 [Cs], June. http://arxiv.org/abs/2012.09841.
[4] Karras, Tero, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. 2020. “Analyzing and Improving the Image Quality of StyleGAN.” ArXiv:1912.04958 [Cs, Eess, Stat], March. http://arxiv.org/abs/1912.04958.
[5] Bau, David, Steven Liu, Tongzhou Wang, Jun-Yan Zhu, and Antonio Torralba. “Rewriting a Deep Generative Model.” ArXiv:2007.15646 [Cs], July 30, 2020. http://arxiv.org/abs/2007.15646.
[6] Sauer, Axel, and Andreas Geiger. “Counterfactual Generative Networks.” ArXiv:2101.06046 [Cs], January 15, 2021. http://arxiv.org/abs/2101.06046.
Programme:
14.01.2022: Readings [1] and [2] are presented and discussed.
21.01.2022: Readings [3] and [4] are presented and discussed.
04.02.2022: Readings [5] and [6] are presented and discussed.
More specifically, each session follows this plan:
Time | Program point
10.00-10.25 | Paper presentation 1
10.25-10.55 | Discussion of paper 1, based on the prepared discussion points and questions
10.55-11.00 | Short break
11.00-11.25 | Paper presentation 2
11.25-11.55 | Discussion of paper 2, based on the prepared discussion points and questions
11.55-12.00 | Wrap-up of the day's papers
Prerequisites:
Participants are expected to know the basics of at least one deep generative model (e.g. how a Generative Adversarial Network or a Variational Autoencoder works).
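As a quick self-check of this prerequisite (illustrative only, not part of the required readings), here is a minimal NumPy sketch of two standard Variational Autoencoder building blocks: the reparameterization trick and the closed-form KL divergence between a diagonal-Gaussian posterior and a standard normal prior. The encoder outputs (mu, log_var) below are made-up example values, not from any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend an encoder produced these parameters for q(z|x) = N(mu, diag(sigma^2)).
# The values are arbitrary, for illustration only.
mu = np.array([0.5, -1.0])
log_var = np.array([0.0, 0.2])
sigma = np.exp(0.5 * log_var)

# Reparameterization trick: sample eps ~ N(0, I), then z = mu + sigma * eps,
# so the sample is a differentiable function of mu and sigma.
eps = rng.standard_normal(mu.shape)
z = mu + sigma * eps

# Closed-form KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
print(round(kl, 4))
```

Together with a reconstruction term, this KL term forms the evidence lower bound (ELBO) that a VAE maximizes during training.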
Exam:
Participants will be examined based on the presentations given during the course.
Credits:
Participants receive 1.5 ECTS upon participation in all meetings and the presentation of one paper.
Amount of hours the student is expected to use on the activity:
- Participation: 6 hours in person
- Preparation: 35 hours reading papers and preparing slides
How to sign up: Contact Imke Grabe at imgr@itu.dk by 12.01.2022.