# Lee Jones - UMass Lowell

**Starts:** 4:00 pm on Thursday, October 14, 2010

**Ends:** 5:00 pm on Thursday, October 14, 2010

**Location:** MCS 149

TITLE: A developing paradigm for the mathematics of learning with applications and open problems for individualized cancer diagnosis.
ABSTRACT: New mathematics of pattern recognition and learning relies less on asymptotics and more on a general minimax theory involving the triad of (a) estimation algorithms, (b) models, and (c) expected loss. In game-theoretic terms, the user chooses an algorithm while Nature picks a model. Recent results using the geometry of l-p and reproducing kernel Hilbert spaces relate these two choices to expected loss.

In the field of compressed sensing there are several recent inequalities for the expected squared error of a model estimator with minimum l-1 norm subject to l-2 or l-infinity constraints, assuming Nature has chosen a sparse linear model. Other inequalities for expected loss exist in local minimax learning for kernel machines, where the loss is the squared estimation error of an unknown function at a fixed point x_0, the estimator is locally affine in the observations, and the model is a ball, truncated cone, or difference of truncated cones in a reproducing kernel Hilbert space.

We demonstrate the benefits of some theorems on expected loss in this case with the notion of confident predictability, applied to individualized cancer diagnosis using microarrays. Several open problems remain for the local minimax case; to name a few: theorems on expected squared error at x_0 for kernel models with estimation algorithms based on global fitting using hinge loss (the Support Vector Machine), or for machines consisting of locally affine estimators with models consisting of a single-hidden-layer neural net with n nodes.
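The compressed-sensing setting mentioned in the abstract — a minimum-l-1-norm estimator recovering a sparse linear model — can be illustrated with a toy sketch. This is not from the talk: it solves the Lagrangian form of the l-1 problem by iterative soft-thresholding (ISTA), and the matrix, sparsity pattern, step size, and regularization weight below are made-up values for demonstration.

```python
# Toy sketch (not the speaker's method): l1-regularized least squares via
# iterative soft-thresholding (ISTA), recovering a sparse vector from a
# small number of linear measurements. All numbers below are illustrative.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def matvec_T(A, r):
    return [sum(A[i][j] * r[i] for i in range(len(A)))
            for j in range(len(A[0]))]

def soft(v, t):
    # Soft-thresholding: the proximal operator of the l1 norm.
    return [max(abs(vi) - t, 0.0) * (1 if vi >= 0 else -1) for vi in v]

def ista(A, b, lam=0.1, step=0.1, iters=2000):
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [ai - bi for ai, bi in zip(matvec(A, x), b)]
        g = matvec_T(A, r)          # gradient of 0.5 * ||Ax - b||^2
        x = soft([xi - step * gi for xi, gi in zip(x, g)], step * lam)
    return x

# x_true is 1-sparse in R^4, observed through 3 noiseless measurements.
A = [[1.0, 0.5, 0.2, 0.1],
     [0.3, 1.0, 0.4, 0.2],
     [0.2, 0.1, 1.0, 0.5]]
x_true = [0.0, 3.0, 0.0, 0.0]
b = matvec(A, x_true)
x_hat = ista(A, b)                  # approximately recovers x_true
```

The recovered vector has its mass on the correct coordinate, with the slight shrinkage toward zero that l-1 regularization introduces; the inequalities mentioned in the abstract bound the expected squared error of such estimators.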
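The local minimax setting — an estimator that is locally affine in the observations, with the model in a reproducing kernel Hilbert space — can likewise be sketched. Again this is illustrative rather than the speaker's construction: kernel ridge regression is one standard estimator whose prediction at a fixed point x_0 is linear in the observations y; the Gaussian kernel, ridge parameter, and data are made-up toy values.

```python
# Toy sketch (not the speaker's estimator): kernel ridge regression, whose
# prediction at a fixed x_0 is linear in the observations y -- a concrete
# instance of a "locally affine" estimator over an RKHS model.
import math

def k(u, v, width=1.0):
    # Gaussian (RBF) kernel; its RKHS plays the role of the model space.
    return math.exp(-((u - v) ** 2) / (2 * width ** 2))

def solve(M, b):
    # Gauss-Jordan elimination with partial pivoting for a small system.
    n = len(M)
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * mc for a, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def predict_at(x0, xs, ys, lam=0.1):
    n = len(xs)
    K = [[k(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)            # alpha = (K + lam * I)^{-1} y
    # f_hat(x0) = sum_i alpha_i k(x0, x_i): linear in the observations y.
    return sum(a * k(x0, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]                # noiseless samples of f(x) = x^2
y_hat = predict_at(1.0, xs, ys)     # estimate of f at the fixed point x_0 = 1
```

Because the weights applied to y depend only on the kernel, the data locations, and x_0, the map y ↦ f_hat(x_0) is affine in the observations, which is the structural assumption under which the local minimax theorems in the abstract are stated.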