Normalization effects on single layer neural networks and related asymptotic expansions (Joy Yu - Boston University)

Neural networks have received a lot of attention in recent years due to their success in many diverse applications. A deep understanding of their mathematical properties is becoming increasingly important. We consider single hidden layer neural networks and characterize their performance when trained with stochastic gradient descent as the number of hidden units and gradient descent steps grow to infinity. In particular, we investigate the effect of different scaling schemes, which lead to different normalizations of the neural network, on the network's statistical output, closing the gap between the $1/\sqrt{N}$ and the mean-field $1/N$ normalizations. We develop an asymptotic expansion for the neural network's statistical output with respect to the scaling parameter as the number of hidden units grows to infinity. Based on this expansion, we demonstrate mathematically that there is no bias-variance trade-off, in that both bias and variance decrease as the number of hidden units increases and time grows. Numerical studies show that test and training accuracy improve monotonically as the neural network's normalization approaches the mean-field normalization.
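As a rough sketch of the setup (the notation below is illustrative, not necessarily the speaker's), the family of scalings interpolating between the two normalizations can be written with a scaling exponent $\gamma$:

$$g^N_\gamma(x) \;=\; \frac{1}{N^{\gamma}} \sum_{i=1}^{N} c^i \,\sigma\!\left(w^i \cdot x\right), \qquad \gamma \in \left[\tfrac{1}{2}, 1\right],$$

where $N$ is the number of hidden units, $\sigma$ is the activation function, and $(c^i, w^i)$ are the trainable parameters. The choice $\gamma = 1/2$ corresponds to the $1/\sqrt{N}$ normalization, while $\gamma = 1$ corresponds to the mean-field $1/N$ normalization.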

When: 4:00 pm to 5:00 pm on Thursday, November 12, 2020
Location: Online (Zoom) - Email Mickey Salins (msalins@bu.edu) for more information