- Nomination Deadline: Undergraduate Academic Advising Awards (all day)
- Labor of Luxury: Embroidery from India to the World (11:00 am)
- MechE Spring Seminar Series: Chiara Bellini (11:00 am)
- Richard Raiselis: Landscapes Near Me (11:00 am)
- Speaking in Code: Youth Poetry and the Politics of Possibility in Urban Uganda (12:00 pm)
- Phenomenology Master-Class with James Kinkaid (Bilkent University) (1:00 pm)
- Good Night, and Good Luck: Film and Discussion (2:30 pm)
- Greening in the Garden: How Translation Transforms English (2:30 pm)
- CISE Seminar: Jonathan Weare, Professor of Mathematics, Courant Institute of Mathematical Sciences, New York University (3:00 pm)
- BU Hillel: Shabbat Services (5:00 pm)
- BU Hillel: Shabbat Dinner (6:00 pm)
- Senior Acting Thesis: Pod 1 (7:30 pm)
CISE Seminar: Jonathan Weare, Professor of Mathematics, Courant Institute of Mathematical Sciences, New York University
Convergence of Unadjusted Langevin and HMC in High Dimensions: Delocalization of Bias
The unadjusted Langevin algorithm is commonly used to sample probability distributions in extremely high-dimensional settings. However, existing analyses of the algorithm for strongly log-concave distributions suggest that, as the dimension d of the problem increases, the number of iterations required to ensure convergence within a desired error in the W2 metric scales in proportion to d or its square root. In this paper, we argue that, despite this poor scaling of the W2 error for the full set of variables, the behavior for a small number of variables can be significantly better: a number of iterations proportional to K, up to logarithmic terms in d, often suffices for the algorithm to converge to within a desired W2 error for all K-marginals. We refer to this effect as delocalization of bias. We show that the delocalization effect does not hold universally and prove its validity for Gaussian distributions and strongly log-concave distributions with certain sparse interactions. Our analysis relies on a novel W2,ℓ∞ metric to measure convergence. A key technical challenge we address is the lack of a one-step contraction property in this metric. Our results cover both the underdamped and overdamped Langevin schemes as well as an unadjusted version of the popular Hybrid (or Hamiltonian) Monte Carlo algorithm.
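The abstract concerns the unadjusted Langevin algorithm (ULA), which iterates a discretized Langevin diffusion without a Metropolis correction, so its stationary distribution carries a step-size-dependent bias. A minimal sketch of ULA for a standard Gaussian target (so the gradient of the potential is simply `x`; the function names and parameter values here are illustrative, not taken from the paper):

```python
import numpy as np

def ula_sample(grad_U, x0, step, n_iters, rng):
    """Unadjusted Langevin algorithm:
        x_{k+1} = x_k - step * grad_U(x_k) + sqrt(2*step) * xi_k,
    with xi_k ~ N(0, I). No accept/reject step, so the chain's
    stationary law is only an O(step) approximation of the target."""
    x = np.array(x0, dtype=float)
    d = x.size
    for _ in range(n_iters):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(d)
    return x

# Target: standard Gaussian in d dimensions, U(x) = |x|^2 / 2, grad U(x) = x.
d = 50
rng = np.random.default_rng(0)
grad_U = lambda x: x

# Run many independent chains; each single marginal should be close to N(0, 1),
# with a small variance bias of order `step` introduced by the discretization.
samples = np.array(
    [ula_sample(grad_U, np.zeros(d), step=0.05, n_iters=500, rng=rng)
     for _ in range(2000)]
)
print(samples[:, 0].mean(), samples[:, 0].var())
```

For this Gaussian target the update contracts toward the origin at rate (1 - step) per iteration, and the stationary variance works out to 1 / (1 - step/2), slightly above the true value 1 — a one-dimensional glimpse of the discretization bias whose per-marginal behavior the talk analyzes.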
Jonathan Weare is a professor of mathematics in the Courant Institute of Mathematical Sciences at New York University. Previously he was an associate professor in the statistics department and in the James Franck Institute at the University of Chicago and, before that, an assistant professor in the mathematics department there. Before moving to Chicago, Weare was a Courant Instructor of mathematics at NYU and a PhD student in mathematics at the University of California at Berkeley.
Faculty Host: Konstantinos Spiliopoulos
Student Host: Chae Woo Lim
| When | 3:00 pm - 4:00 pm on 23 January 2026 |
|---|---|
| Building | 665 Commonwealth Ave., CDS 1101 |