Learning to Sample Better (Eric Vanden-Eijnden -- NYU)

  • Starts: 4:00 pm on Thursday, November 3, 2022

Abstract:
Sampling high-dimensional probability distributions is a common task in computational chemistry, Bayesian inference, and related fields. Markov Chain Monte Carlo (MCMC) is the method of choice for these calculations, but it is often plagued by slow convergence. I will discuss how methods from deep learning (DL) can enhance the performance of MCMC via a feedback loop in which we simultaneously use DL to learn better samplers, based for example on generative models such as normalizing flows, and use MCMC to obtain the data on which these models are trained. I will draw connections between these methods and the score-based diffusion models that have proven successful for image generation. I will also illustrate these techniques with several examples, including the calculation of free energies and Bayes factors.
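The sketch below is a minimal, hypothetical illustration of the feedback loop described above, not the method presented in the talk: a simple Gaussian stands in for the learned generative model (the talk uses normalizing flows), it serves as the proposal of an independence Metropolis-Hastings sampler, and it is then refit to the resulting MCMC samples before the next round. The target density, parameter values, and function names are all assumptions made for the example.

# Toy sketch of the DL/MCMC feedback loop: sample with the current model,
# refit the model to the samples, repeat. A Gaussian is a stand-in for a
# normalizing flow; the target and all settings here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Hypothetical bimodal target in 2D: mixture of two unit Gaussians.
    a = -0.5 * np.sum((x - 2.0) ** 2)
    b = -0.5 * np.sum((x + 2.0) ** 2)
    return np.logaddexp(a, b)

def gaussian_logpdf(x, mean, cov):
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet + len(x) * np.log(2 * np.pi))

def independence_mh(n, mean, cov, x0):
    # Independence Metropolis-Hastings using the fitted model as proposal.
    x, lp, lq = x0, log_target(x0), gaussian_logpdf(x0, mean, cov)
    samples = []
    for _ in range(n):
        y = rng.multivariate_normal(mean, cov)
        lpy, lqy = log_target(y), gaussian_logpdf(y, mean, cov)
        # Accept with probability min(1, p(y) q(x) / (p(x) q(y))).
        if np.log(rng.uniform()) < (lpy - lp) + (lq - lqy):
            x, lp, lq = y, lpy, lqy
        samples.append(x)
    return np.array(samples)

# Feedback loop: MCMC generates training data, the "training" step refits
# the proposal model, and the improved proposal drives the next MCMC run.
mean, cov = np.zeros(2), 25.0 * np.eye(2)   # broad initial proposal
x = np.zeros(2)
for it in range(5):
    samples = independence_mh(2000, mean, cov, x)
    x = samples[-1]
    mean = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-3 * np.eye(2)  # regularize for stability
    print(f"iteration {it}: proposal mean = {mean.round(2)}")

Replacing the Gaussian fit with maximum-likelihood training of a normalizing flow on the accumulated samples gives the kind of learned sampler the abstract refers to; the acceptance step above is what keeps the chain exactly targeting the desired distribution even when the learned model is imperfect.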
Location:
MCS B31, 111 Cummington Mall; Refreshments in MCS B24