Roundtable - Ping Li
- 1:30–3:30 pm on Thursday, March 7, 2013
Probabilistic hashing methods typically transform a challenging (or infeasible) massive-data computation into a problem of probabilistic and statistical estimation. For example, fitting a logistic regression (or SVM) model on a dataset with a billion observations and a billion (or a billion squared) variables would be difficult. Searching for similar documents (or images) in a repository of a billion web pages (or images) is another challenging example. In certain important applications in the search industry, a web page is often represented as a binary (0/1) vector in 2^64 dimensions. For such data, both data reduction (i.e., reducing the number of nonzero entries) and dimensionality reduction are crucial for achieving efficient search and statistical learning. In this discussion, I will start with a short introduction to a series of recent works on probabilistic hashing for binary data, for example, the NIPS'12 paper "One Permutation Hashing" by P. Li, A. Owen, and C.-H. Zhang.
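As background for the discussion, the core idea of one permutation hashing can be sketched roughly as follows: represent a binary vector by its set of nonzero indices, apply a single random permutation to the dimensions, split the permuted range into equal bins, and keep the smallest permuted index found in each bin (marking empty bins). The fraction of matching nonempty bins between two sketches then estimates set resemblance. This is only a minimal illustration under simplifying assumptions, not the authors' implementation; all function names and parameters below are hypothetical.

```python
import random

def one_permutation_hash(nonzeros, dim, num_bins, seed=0):
    """Toy sketch of one permutation hashing (after Li, Owen & Zhang, NIPS'12).

    nonzeros: set of indices of nonzero entries in a binary vector.
    dim:      total dimensionality (the paper's motivating case is 2**64).
    num_bins: number of bins; one hashed value is kept per bin.
    Returns a list of length num_bins; None marks an empty bin.
    """
    rng = random.Random(seed)
    perm = list(range(dim))
    rng.shuffle(perm)              # the single shared random permutation
    bin_size = dim // num_bins
    bins = [None] * num_bins
    for i in nonzeros:
        p = perm[i]                # permuted position of nonzero index i
        b = min(p // bin_size, num_bins - 1)
        if bins[b] is None or p < bins[b]:
            bins[b] = p            # keep the smallest permuted index per bin
    return bins

def estimate_resemblance(h1, h2):
    """Estimate Jaccard resemblance, skipping bins empty in both sketches."""
    matches = sum(1 for a, b in zip(h1, h2) if a is not None and a == b)
    nonempty = sum(1 for a, b in zip(h1, h2) if not (a is None and b is None))
    return matches / nonempty if nonempty else 0.0
```

Unlike classical minwise hashing, which applies k independent permutations, this sketch scans the data once under a single permutation, which is the source of the method's efficiency.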