# Quantifying similarity and variability in artificial and biological neural networks (Alex Williams -- NYU, Flatiron)

**Starts:** 4:00 pm on Thursday, April 6, 2023

**Location:** CDS, 665 Comm Ave (Room 365)

Quantifying similarity between neural representations (e.g., hidden-layer activation vectors) is a perennial problem in deep learning and neuroscience research. Many existing measures of network dissimilarity have theoretical shortcomings (e.g., failing to satisfy the triangle inequality) and neglect the structure of "noise" in neural responses. In recent and ongoing work, we are building upon ideas from statistical shape analysis to address these challenges. Specifically, we have established formal metric spaces satisfying the triangle inequality and developed rigorous statistical estimation procedures that account for "noise" in neural responses. We have used these methods to systematically analyze large collections of neural networks, enabling us to predict a network's characteristics (e.g., performance on a task) from its position in representational shape space. Overall, these tools open up new lines of research, from animal-to-animal variability in neural circuits to the consistency and identifiability of artificial neural network models.
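To make the shape-analysis idea concrete, below is a minimal sketch of an angular Procrustes distance between two activation matrices, in the spirit of the metrics the talk describes. This is an illustrative implementation assumed from standard shape-analysis practice, not the speaker's exact method: each matrix (conditions x neurons) is centered and Frobenius-normalized, optimally aligned by an orthogonal transform via SVD, and the arccosine of the resulting inner product is returned. A distance of this form satisfies the triangle inequality.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Angular Procrustes distance between representation matrices.

    X, Y : arrays of shape (n_conditions, n_neurons). Illustrative
    sketch; assumes both networks have the same number of neurons.
    """
    # Center each column and normalize to unit Frobenius norm,
    # removing differences in offset and overall scale.
    X = X - X.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y - Y.mean(axis=0)
    Y = Y / np.linalg.norm(Y)

    # The best orthogonal alignment of Y to X maximizes <X, YQ>;
    # the maximum equals the sum of singular values of X^T Y.
    singular_values = np.linalg.svd(X.T @ Y, compute_uv=False)

    # Arc-length distance on the unit sphere after optimal alignment;
    # this angular form satisfies the triangle inequality.
    return np.arccos(np.clip(singular_values.sum(), -1.0, 1.0))
```

Because the distance is a true metric, collections of networks can be embedded and compared in a common "shape space", which is what enables predicting a network's properties from its position in that space.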