A New SVD Approach to Optimal Topic Estimation (Tracy Ke - Harvard University)

  • Starts: 4:00 pm on Thursday, February 21, 2019
  • Ends: 5:00 pm on Thursday, February 21, 2019
In probabilistic topic models, the quantity of interest, a low-rank matrix consisting of topic vectors, is hidden in the text corpus matrix, masked by noise, and Singular Value Decomposition (SVD) is a potentially useful tool for learning such a matrix. However, different rows and columns of the matrix are usually on very different scales, and the connection between this matrix and the singular vectors of the text corpus matrix is usually complicated and hard to spell out, so using SVD to learn topic models faces challenges. We overcome these challenges by introducing a proper Pre-SVD normalization of the text corpus matrix and a proper column-wise scaling of the matrix of interest, and by revealing a surprising Post-SVD low-dimensional simplex structure. The simplex structure, together with the Pre-SVD normalization and column-wise scaling, allows us to conveniently reconstruct the matrix of interest and motivates a new SVD-based approach to learning topic models. We show that under the popular probabilistic topic model (Hofmann, 1999), our method has a faster rate of convergence than existing methods in a wide variety of cases. In particular, when documents are long or when the number of documents n is much larger than the vocabulary size p, our method achieves the optimal rate. At the heart of the proofs is a tight element-wise bound on the singular vectors of a multinomially distributed data matrix, which did not exist in the literature and which we had to derive ourselves. We have applied our method to two data sets, Associated Press (AP) and Statistics Literature Abstract (SLA), with encouraging results. In particular, there is a clear simplex structure associated with the SVD of the data matrices, which largely validates our discovery.
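The sketch below is only an illustration of the kind of pipeline the abstract describes (pre-SVD normalization of the word-document matrix, truncated SVD, a post-SVD simplex step, and reconstruction of the topic matrix); it is not the speaker's exact algorithm, and all names, constants, and the use of k-means as a stand-in for vertex hunting are assumptions made for this example.

import numpy as np
from sklearn.cluster import KMeans

def estimate_topics(counts, K, seed=0):
    """counts: p-by-n word-document count matrix; K: number of topics."""
    p, n = counts.shape
    # Column-normalize counts to per-document word frequencies.
    D = counts / np.maximum(counts.sum(axis=0, keepdims=True), 1)
    # Pre-SVD normalization: rescale rows by average word frequency so that
    # frequent and rare words are on comparable scales.
    m = np.maximum(D.mean(axis=1), 1e-12)
    D_tilde = D / np.sqrt(m)[:, None]
    # Truncated SVD: keep the K leading left singular vectors.
    U, s, Vt = np.linalg.svd(D_tilde, full_matrices=False)
    Xi = U[:, :K]
    # Fix the sign so the leading singular vector is (mostly) positive.
    Xi[:, 0] *= np.sign(Xi[:, 0].sum()) or 1.0
    xi1 = np.where(np.abs(Xi[:, [0]]) < 1e-12, 1e-12, Xi[:, [0]])
    # Post-SVD step: entrywise ratios against the leading singular vector.
    # Under the model, the rows of R concentrate near a simplex with K vertices.
    R = Xi[:, 1:] / xi1
    # Vertex hunting, crudely approximated here by k-means centers.
    centers = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(R).cluster_centers_
    # Barycentric coordinates of each word with respect to the K vertices.
    A_sys = np.vstack([centers.T, np.ones((1, K))])     # K x K
    B_sys = np.vstack([R.T, np.ones((1, p))])           # K x p
    W, *_ = np.linalg.lstsq(A_sys, B_sys, rcond=None)   # K x p weights
    W = np.clip(W.T, 0.0, None)                         # p x K, nonnegative
    # Undo the row scaling and renormalize columns into word distributions.
    A_hat = W * (np.sqrt(m) * Xi[:, 0])[:, None]
    return A_hat / np.maximum(A_hat.sum(axis=0, keepdims=True), 1e-12)

Given a p-by-n count matrix with documents as columns, estimate_topics(counts, K) returns a p-by-K matrix whose columns are estimated topic (word-distribution) vectors; the k-means step is only a placeholder for a proper vertex-hunting procedure on the post-SVD simplex.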
Location:
MCS 148, 111 Cummington Mall
