Penalized Maximum Tangent Likelihood Estimation and Robust Variable Selection (Yichen Qin - University of Cincinnati)

We introduce a new class of mean regression estimators --- penalized maximum tangent likelihood estimation --- for high-dimensional regression and variable selection. We first explain the motivation for the key ingredient, maximum tangent likelihood estimation (MTE), and establish its asymptotic properties. We then propose a penalized MTE for variable selection and show that it is root-n consistent and enjoys the oracle property. The proposed class of estimators includes penalized L2 distance estimation, penalized exponential squared loss, penalized least trimmed squares, and penalized least squares as special cases, and can be regarded as a mixture of minimum Kullback-Leibler distance estimation and minimum L2 distance estimation. Furthermore, we study the proposed class of estimators in the high-dimensional setting, where the number of variables d can grow exponentially with the sample size n, and show that the entire class of estimators (including the aforementioned special cases) achieves the optimal rate of convergence of order sqrt(ln(d)/n). Finally, simulation studies and real data analysis demonstrate the advantages of the penalized MTE.
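To give a rough sense of the kind of estimator discussed in the talk, below is a minimal illustrative sketch in Python. It assumes the tangent likelihood replaces the log-density by its tangent line below a small threshold t (so that low-density, outlying observations have bounded influence) and adds a lasso-type L1 penalty for variable selection. The Gaussian working model, the threshold t, the function names, and the use of a general-purpose optimizer are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

def tangent_log(x, t):
    """Tangent-truncated logarithm: log(x) for x >= t, and its first-order
    (tangent-line) expansion at t for x < t, which bounds the contribution
    of very low-density observations."""
    x = np.asarray(x, dtype=float)
    return np.where(x >= t,
                    np.log(np.maximum(x, 1e-300)),
                    np.log(t) + (x - t) / t)

def penalized_mte_objective(beta, X, y, t, lam, sigma=1.0):
    """Negative penalized tangent log-likelihood for a Gaussian working model
    y ~ N(X beta, sigma^2), plus an L1 penalty on the coefficients."""
    resid = y - X @ beta
    dens = np.exp(-0.5 * (resid / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    return -np.mean(tangent_log(dens, t)) + lam * np.sum(np.abs(beta))

def fit_penalized_mte(X, y, t=0.05, lam=0.1):
    """Minimize the sketch objective with a derivative-free optimizer.
    (A proximal or coordinate-descent solver would handle the non-smooth
    L1 term more carefully; this is only a toy illustration.)"""
    d = X.shape[1]
    res = minimize(penalized_mte_objective, x0=np.zeros(d),
                   args=(X, y, t, lam), method="Powell")
    return res.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 200, 10
    X = rng.standard_normal((n, d))
    beta_true = np.zeros(d)
    beta_true[:3] = [2.0, -1.5, 1.0]
    y = X @ beta_true + rng.standard_normal(n)
    y[:10] += 15.0  # gross outliers in the response
    print(np.round(fit_penalized_mte(X, y), 2))
```

In this sketch, sending the threshold t toward zero recovers the ordinary (penalized) maximum likelihood fit, while a positive t downweights observations whose fitted density is small, which is the intuition behind the robustness claims in the abstract.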

When: 4:00 pm to 5:00 pm on Thursday, March 29, 2018
Location: MCS, Room 148, 111 Cummington Mall