October 13, 2017, Ananda Theertha Suresh, Google

Friday, October 13, 2017, 3pm-4pm
8 St. Mary’s Street, PHO 211
Refreshments at 2:45pm


Ananda Theertha Suresh

Communication-Efficient and Differentially-Private Distributed Learning

Motivated by the need for distributed learning and optimization algorithms with low communication cost and privacy guarantees, we study communication-efficient and differentially-private algorithms for distributed mean estimation. Unlike previous works, we make no probabilistic assumptions on the data. We propose three quantization schemes, one of which is optimal up to a constant in the minimax sense, i.e., it achieves the best mean squared error for a given communication cost. Furthermore, we show that a modified version of the quantization schemes achieves differential privacy in addition to communication efficiency. We finally demonstrate the practicality of our algorithms by applying them to distributed gradient descent for neural networks, Lloyd's algorithm for k-means, and power iteration for PCA.
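To give a flavor of the setting, the sketch below shows one standard unbiased scheme for communication-efficient mean estimation: stochastic binary quantization, where each client sends roughly one bit per coordinate and the server averages the quantized vectors. This is an illustrative sketch only (function names and parameters are our own), not necessarily the exact schemes analyzed in the talk.

```python
import numpy as np

def stochastic_binary_quantize(x, rng):
    """Quantize each coordinate of x to one of {min(x), max(x)} -- about one
    bit per coordinate -- choosing the high level with probability that makes
    the quantized vector an unbiased estimate of x."""
    lo, hi = x.min(), x.max()
    if hi == lo:  # constant vector: nothing to quantize
        return x.copy()
    p = (x - lo) / (hi - lo)  # P(coordinate -> hi); E[output] = x
    return np.where(rng.random(x.shape) < p, hi, lo)

# Each of n clients quantizes its vector; the server averages the results.
rng = np.random.default_rng(0)
n, d = 1000, 8
data = rng.normal(size=(n, d))
est = np.mean([stochastic_binary_quantize(x, rng) for x in data], axis=0)
true_mean = data.mean(axis=0)
```

Because each quantized coordinate equals the true coordinate in expectation, the server's average converges to the true mean as the number of clients grows, at a large saving in communication over sending full-precision vectors.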

Ananda Theertha Suresh received the B.Tech. degree from IIT Madras in 2006, and the M.S. and Ph.D. degrees from the University of California, San Diego in 2012 and 2016, respectively. He is currently a Research Scientist at Google, New York. His research interests lie at the intersection of statistics, machine learning, and information theory. He is a recipient of the 2015 Neural Information Processing Systems (NIPS) Best Paper Award, the 2016 UCSD ECE Department Best Thesis Award, a 2017 International Conference on Machine Learning (ICML) Best Paper Honorable Mention, and the 2017 Marconi Society Paul Baran Young Scholar Award.

Faculty Host: Venkatesh Saligrama
Student Host: Sean Sanchez