Note the 11:00 a.m. start time for this talk.
The classical paradigm of generative modeling, reproducing the training data distribution, is becoming less relevant for many applications, including drug discovery and text-to-image generation. In practice, generative models perform best when tailored to specific needs at inference time. I will present two novel approaches for controlling the distribution of generated samples at inference time. Superposition of Diffusion Models (SuperDiff) combines pretrained diffusion models to sample from a mixture of their distributions (logical OR) or to generate samples that are likely under all models (logical AND). SuperDiff relies on a new scalable Itô density estimator for the log-likelihood of the diffusion SDE, which incurs no overhead beyond the well-known Hutchinson's estimator needed for divergence calculations.

In the second half of the talk, I will present an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models. We derive a weighted simulation scheme, which we call Feynman-Kac Correctors (FKCs), based on the celebrated Feynman-Kac formula, by carefully accounting for terms in the corresponding partial differential equations (PDEs). Finally, I will demonstrate applications of these methods, ranging from classical image generation to molecule design and sampling from Boltzmann densities.
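To give a flavor of the OR/AND combination before the talk, here is a minimal, self-contained sketch. It is not the speakers' implementation: exact 1-D Gaussian scores stand in for pretrained diffusion models, plain overdamped Langevin dynamics stands in for the reverse diffusion SDE, and all function names are illustrative. The OR score is the exact score of an equal-weight mixture (density-weighted average of the component scores), while the AND score is the score of the unnormalized product (sum of the component scores).

```python
import numpy as np

# Toy stand-ins for two "pretrained" score models: closed-form 1-D Gaussian scores.
def gaussian_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

def gaussian_score(x, mu, sigma):
    # score = d/dx log N(x; mu, sigma^2)
    return -(x - mu) / sigma**2

MUS, SIGMAS = (-2.0, 2.0), (0.7, 0.7)

def superposed_score(x, mode):
    if mode == "OR":
        # Score of the equal-weight mixture 0.5*p1 + 0.5*p2:
        # a posterior-weighted average of the component scores.
        logps = np.stack([gaussian_logpdf(x, m, s) for m, s in zip(MUS, SIGMAS)])
        w = np.exp(logps - logps.max(axis=0))
        w /= w.sum(axis=0)
        scores = np.stack([gaussian_score(x, m, s) for m, s in zip(MUS, SIGMAS)])
        return (w * scores).sum(axis=0)
    if mode == "AND":
        # Score of the unnormalized product p1 * p2: scores simply add.
        return sum(gaussian_score(x, m, s) for m, s in zip(MUS, SIGMAS))
    raise ValueError(mode)

def langevin_sample(mode, n=5000, steps=500, eps=0.05, seed=0):
    # Unadjusted Langevin dynamics targeting the superposed density.
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 3.0, size=n)
    for _ in range(steps):
        x = x + eps * superposed_score(x, mode) + np.sqrt(2 * eps) * rng.normal(size=n)
    return x

or_samples = langevin_sample("OR")    # bimodal: covers both models' modes
and_samples = langevin_sample("AND")  # concentrates where both models agree
```

With these two well-separated Gaussians, the OR samples split between the modes at -2 and +2, while the AND samples concentrate near 0, the region of highest likelihood under both components; SuperDiff achieves the analogous effect with learned diffusion scores, where the per-model densities needed for the weighting come from its Itô density estimator rather than a closed form.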
Refreshments will be served before the talk, starting at 10:45 a.m.