Holden Lee, “Score-based generative modeling: Convergence theory”

January 23, 2023

When:
January 24, 2023 @ 12:00 pm – 1:15 pm

Please join us on Tuesday, January 24, 2023 at 12:00 pm in Clark Hall, Room 110 or on Zoom for the CIS & MINDS Seminar Series:


Holden Lee, PhD

Assistant Professor

Johns Hopkins University

Topic:  “Score-based generative modeling: Convergence theory”


In-person in Clark Hall, Room 110

OR

virtually over Zoom

Join Zoom Meeting:

https://wse.zoom.us/j/97055652302?pwd=dWFUUHRHS1lna2h5K0U1cEt4RDRrQT09


Abstract:  Score-based generative modeling (SGM) or diffusion generative modeling is a highly successful approach for learning a probability distribution from data and generating further samples, based on learning the score function (gradient of log-pdf) and then using it to simulate a stochastic differential equation (SDE) that transforms white noise into the data distribution. It is a core part of image generation systems such as DALL·E 2, Stable Diffusion, and Imagen.
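As a rough sketch of this setup (the notation below is my own and may differ from the talk): a common choice of forward noising process is an Ornstein-Uhlenbeck diffusion, and generation runs its time reversal, with the true score replaced by a learned estimate.

Forward (noising):   dX_t = -X_t\,dt + \sqrt{2}\,dB_t, \qquad X_0 \sim p_{\mathrm{data}}, \quad X_T \approx \mathcal{N}(0, I_d)

Reverse (generative):   dY_t = \bigl(Y_t + 2\,\nabla \log p_{T-t}(Y_t)\bigr)\,dt + \sqrt{2}\,dB_t, \qquad Y_0 \sim \mathcal{N}(0, I_d)

Here p_t denotes the law of X_t; in practice the unknown score \nabla \log p_t is replaced by a learned estimate s_\theta(\cdot,t) and the reverse SDE is simulated with a discrete step size schedule.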


I will first introduce SGM and give an overview of different types of models. Then I’ll describe the convergence theory we develop, which relies only on an L^2-accurate score estimate, applies to any distribution with bounded 2nd moment, has polynomial dependence on all parameters, and does not rely on smoothness or functional inequalities. Our analysis builds on a Girsanov analysis for SDEs, incorporating the high-probability smoothing effect of the forward diffusion and quantifying the effect of the step size schedule.
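For concreteness, an L^2-accurate score estimate means (again in my notation, roughly) that at each time t used in the discretization,

\mathbb{E}_{x \sim p_t}\bigl[\,\| s_\theta(x,t) - \nabla \log p_t(x) \|^2 \,\bigr] \le \varepsilon^2,

i.e., the score error is controlled on average under the forward process rather than pointwise, and the resulting guarantees depend polynomially on \varepsilon, the dimension, and the second moment of the data distribution.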


Based on joint work with Hongrui Chen (Peking University), Jianfeng Lu, and Yixin Tan (Duke University).

https://arxiv.org/abs/2206.06227
https://arxiv.org/abs/2209.12381
https://arxiv.org/abs/2211.01916


Biography:  Holden Lee, an assistant professor in the Department of Applied Mathematics and Statistics, explores the interplay between machine learning, probability, and theoretical computer science. His research focuses on building theoretical foundations for probabilistic methods in modern machine learning with a view towards designing more efficient and reliable algorithms. This includes understanding the success and shortcomings of deep learning-based generative models, as well as proving convergence guarantees for sampling (Markov Chain Monte Carlo) algorithms, especially beyond the “log-concave” setting where classical theory applies. He has also worked on algorithms for prediction and control of dynamical systems from a learning-theoretic perspective.
