Lo-Bin Chang – Johns Hopkins University

Title: Tracking cross-validated estimates of prediction error as studies accumulate.

Abstract: In recent years, “reproducibility” has emerged as a key factor in evaluating applications of statistics to the biomedical sciences, for example learning predictors of disease phenotypes from high-throughput “omics” data. In particular, “validation” is undermined when error rates on newly acquired data are sharply higher than those originally reported. More precisely, when data are collected from m “studies” representing possibly different sub-phenotypes, or more generally different mixtures of sub-phenotypes, the error rates in cross-study validation (CSV) are observed to be larger than those obtained in ordinary randomized cross-validation (RCV), although the “gap” seems to close as m increases. While these findings are hardly surprising for a heterogeneous underlying population, the discrepancy is nonetheless seen as a barrier to translational research. In this talk, I will provide a statistical formulation in the large-sample limit: studies themselves are modeled as components of a mixture, and all error rates are optimal (Bayes) for a two-class problem. Our results cohere with the trends observed in practice and suggest what is likely to be observed with large samples and consistent density estimators, namely that the CSV error rate exceeds the RCV error rate for any m, that the latter (appropriately averaged) increases with m, and that both converge to the optimal rate for the whole population.
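For readers who want to experiment with the comparison described in the abstract, the following is a minimal simulation sketch, not taken from the talk: it generates m synthetic “studies” as Gaussian mixture components via a study-specific mean shift, then contrasts cross-study validation (hold out one study at a time) with ordinary randomized cross-validation on the pooled data. All names and parameters (simulate_study, n_per_class, shift_scale, the use of linear discriminant analysis as the classifier) are illustrative assumptions, not part of the abstract.

```python
# Illustrative sketch only: CSV vs. RCV error on synthetic "studies"
# drawn as components of a mixture (study-specific mean shifts).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def simulate_study(n_per_class, shift, d=5):
    """One 'study': two Gaussian classes whose means are perturbed by a
    study-specific shift, mimicking sub-phenotype heterogeneity."""
    mu0, mu1 = np.zeros(d) + shift, np.ones(d) + shift
    X = np.vstack([rng.normal(mu0, 1.0, (n_per_class, d)),
                   rng.normal(mu1, 1.0, (n_per_class, d))])
    y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
    return X, y

def csv_vs_rcv(m, n_per_class=200, shift_scale=1.5, d=5):
    """Return (average CSV error, average RCV error) for m studies."""
    studies = [simulate_study(n_per_class, rng.normal(0.0, shift_scale, d), d)
               for _ in range(m)]
    # Cross-study validation: train on m-1 studies, test on the held-out study.
    csv_errs = []
    for k in range(m):
        Xtr = np.vstack([X for i, (X, _) in enumerate(studies) if i != k])
        ytr = np.concatenate([y for i, (_, y) in enumerate(studies) if i != k])
        Xte, yte = studies[k]
        clf = LinearDiscriminantAnalysis().fit(Xtr, ytr)
        csv_errs.append(np.mean(clf.predict(Xte) != yte))
    # Randomized cross-validation: pool and shuffle all studies, then use m folds.
    Xall = np.vstack([X for X, _ in studies])
    yall = np.concatenate([y for _, y in studies])
    perm = rng.permutation(len(yall))
    acc = cross_val_score(LinearDiscriminantAnalysis(), Xall[perm], yall[perm], cv=m)
    return float(np.mean(csv_errs)), float(1.0 - np.mean(acc))

for m in (2, 4, 8, 16):
    csv_err, rcv_err = csv_vs_rcv(m)
    print(f"m={m:2d}   CSV error = {csv_err:.3f}   RCV error = {rcv_err:.3f}")
```

Here the random shift vector plays the role of the study (mixture component) label, and LDA stands in for a generic plug-in classifier; varying m lets one explore how the CSV and RCV estimates compare as studies accumulate.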

When: 4:00 pm to 5:00 pm on Monday, January 26, 2015
Location: MCS 148