CISE Seminar: Nathan Kallus, Assistant Professor, Cornell University & Cornell Tech

Friday, May 13, 2022
3:00-4:00pm
Virtual Event

Smooth Contextual Bandits

Contextual bandit problems model the inherent tradeoff between exploration and exploitation in personalized decision making in marketing, healthcare, revenue management, and more. Specifically, the tradeoff is characterized by the optimal growth rate of the regret in cumulative rewards relative to the optimal policy. Naturally, the optimal rate should depend on how complex the underlying supervised learning problem is, namely, how much observing the reward in one context can tell us about mean rewards in another. Curiously, this obvious-seeming relationship is obscured in current theory, which separately studies the easy, fully extrapolatable case and the hard, super-local case. To characterize this relationship more precisely, I study a nonparametric contextual bandit problem where expected reward functions are β-smooth (roughly meaning β-times differentiable). I will show how this setting interpolates between the two extremes previously studied in isolation: non-differentiable-response bandits (β ≤ 1), where rate-optimal regret is achieved by decomposing the problem into non-contextual bandits, and parametric-response bandits (β = ∞), where rate-optimal regret is often achievable without any exploration at all. We develop a novel algorithm that works for any given smoothness setting by operating neither fully locally nor fully globally. We prove that its regret is rate-optimal, thereby characterizing the optimal regret rate and revealing a fuller picture of the crucial interplay between complexity and regret in dynamic decision making. Time permitting, I will also discuss how to construct valid confidence intervals from data collected by contextual bandits, a key challenge in the effort to replace randomized trials with adaptive experiments in applied fields from biostatistics to development economics.
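For readers unfamiliar with the decomposition mentioned above for the non-differentiable regime (β ≤ 1), the sketch below illustrates the classical idea of partitioning the context space into bins and running an independent non-contextual bandit (here, UCB1) inside each bin. This is only an illustrative sketch of that baseline, not the algorithm presented in the talk; the function names, bin count, noise level, and reward model are hypothetical choices for the example.

```python
# Illustrative sketch (not the speaker's algorithm): decompose a contextual bandit
# into non-contextual bandits by binning the context space and running UCB1 per bin.
import numpy as np

def binned_ucb(contexts, reward_fn, n_arms=2, n_bins=10, seed=0):
    """Run a per-bin UCB1 policy over a stream of 1-D contexts in [0, 1]."""
    rng = np.random.default_rng(seed)
    counts = np.zeros((n_bins, n_arms))        # pulls per (bin, arm)
    means = np.zeros((n_bins, n_arms))         # running mean reward per (bin, arm)
    total_reward = 0.0
    for x in contexts:
        b = min(int(x * n_bins), n_bins - 1)   # locate the context's bin
        if counts[b].min() == 0:               # try each arm once within the bin
            a = int(counts[b].argmin())
        else:                                  # UCB1 index: mean + exploration bonus
            bonus = np.sqrt(2 * np.log(counts[b].sum()) / counts[b])
            a = int((means[b] + bonus).argmax())
        r = reward_fn(x, a) + rng.normal(scale=0.1)   # noisy reward observation
        counts[b, a] += 1
        means[b, a] += (r - means[b, a]) / counts[b, a]
        total_reward += r
    return total_reward

# Example: two arms whose mean rewards cross, so the better arm depends on context.
contexts = np.random.default_rng(1).uniform(size=5000)
print(binned_ucb(contexts, reward_fn=lambda x, a: x if a == 0 else 1.0 - x))
```

Because each bin ignores what is learned in neighboring bins, this approach is neither fully local nor able to exploit higher smoothness, which is exactly the gap the talk's algorithm addresses.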

This talk is based on the papers "Smooth Contextual Bandits: Bridging the Parametric and Non-differentiable Regret Regimes" and "Post-Contextual-Bandit Inference."

Nathan Kallus is an Assistant Professor in the School of Operations Research and Information Engineering and Cornell Tech at Cornell University. Nathan’s research interests include optimization, especially under uncertainty; causal inference; sequential decision making; and algorithmic fairness. He holds a PhD in Operations Research from MIT as well as a BA in Mathematics and a BS in Computer Science from UC Berkeley. Before coming to Cornell, Nathan was a Visiting Scholar at USC’s Department of Data Sciences and Operations and a Postdoctoral Associate at MIT’s Operations Research and Statistics group.

Faculty Host: Yannis Paschalidis
Student Host: Zili Wang