Soumyadip Ghosh - IBM Research

Starts: 4:00 pm on Thursday, October 3, 2013
Ends: 5:00 pm on Thursday, October 3, 2013
Location: MCS 148

Title: Optimal Sampling in Stochastic Recursions

Abstract: We refer to classical iterative algorithms such as quasi-Newton recursions, trust-region methods, and fixed-point recursions as "stochastic" recursions when they involve quantities (functions, their gradients, Hessians, etc.) that can only be estimated using a simulation oracle. The primary motivating settings are the Stochastic Root Finding problem, which seeks the zero of a simulation-estimated function, and the closely related Simulation Optimization problem, which seeks a minimum. The estimation quality of the simulation oracle depends on the effort expended in the simulation: in a typical scenario where a Central Limit Theorem applies, the estimation error drops to zero at the canonical $1/\sqrt{n}$ rate as the sample size $n$ grows. We address the central question that arises in the practical context where the primary computational burden in the stochastic recursion is the Monte Carlo sampling procedure: how should sampling proceed within the recursion's iterates so that the identified candidate solutions remain consistent, that is, converge to the true solution, and, more importantly, so that sampling is efficient, that is, converges at the fastest possible rate? The answer involves a trade-off between the two types of error inherent in the iterates: the deterministic error due to the recursion algorithm and the "stochastic" error due to sampling. We characterize the relationship between sample sizing and convergence rates, and demonstrate that consistency and efficiency are intimately coupled with the speed of the underlying recursion, with faster algorithms yielding a wider regime of "optimal" sampling rates.
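To make the setting concrete, here is a minimal toy sketch (not from the talk itself) of a stochastic fixed-point recursion whose function values are available only through a Monte Carlo oracle, with a geometrically growing sample-size schedule, one of the kinds of schedules the trade-off described above concerns. The target function, step size, and growth rate below are illustrative assumptions.

```python
import random

def oracle(x, n, rng):
    """Monte Carlo estimate of f(x) = E[x - 1 + noise] = x - 1 from n
    noisy samples; by the CLT its error decays at the 1/sqrt(n) rate."""
    return sum(x - 1.0 + rng.gauss(0.0, 1.0) for _ in range(n)) / n

def stochastic_recursion(iters=20, step=0.5, n0=4, growth=1.5, seed=0):
    """Fixed-point recursion x <- x - step * f_hat(x), where f_hat is the
    simulation oracle and the sample size grows as n_k = n0 * growth**k.
    The iterate carries both a deterministic error (from the contraction)
    and a stochastic error (from the finite-sample oracle)."""
    rng = random.Random(seed)
    x = 5.0  # arbitrary starting point; the true root is x* = 1
    for k in range(iters):
        n_k = int(n0 * growth**k)           # sampling effort at iterate k
        x = x - step * oracle(x, n_k, rng)  # noisy fixed-point update
    return x

print(stochastic_recursion())  # approaches the true root x* = 1
```

If the sample sizes grow too slowly, the stochastic error dominates and the iterates need not converge; if they grow too quickly, simulation effort is wasted relative to the recursion's own contraction rate, which is exactly the consistency/efficiency trade-off the abstract describes.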