The DIVA Model

DIVA (Directions Into Velocities of Articulators) is a neural network model of speech motor skill acquisition and speech production. In computer simulations, the model learns to control the movements of a simulated vocal tract in order to produce speech sounds. The model's neural mappings are tuned during a babbling phase, in which auditory feedback from self-generated speech sounds is used to learn the relationship between motor actions and their acoustic and somatosensory consequences. After learning, the model can produce arbitrary combinations of speech sounds, even in the presence of constraints on the articulators.
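
To make the babbling idea concrete, the sketch below pairs random motor commands with their simulated auditory consequences and fits a motor-to-auditory (forward) mapping from the resulting data. It is only a minimal illustration under stated assumptions: the dimensions, the linear "vocal tract", and the least-squares fit are placeholders, not the model's actual articulatory synthesizer or learning rule.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ARTIC = 8   # number of articulator dimensions (placeholder count)
N_AUD = 3     # number of auditory dimensions (e.g., formant-like values)

# Hidden "physics" of the toy vocal tract; unknown to the learner.
_TRUE_MAP = rng.normal(size=(N_ARTIC, N_AUD))

def vocal_tract(motor):
    """Stand-in articulatory synthesizer: maps articulator positions to
    auditory consequences.  The real model drives a full vocal-tract
    simulation; a fixed linear map keeps this sketch self-contained."""
    return motor @ _TRUE_MAP

# Babbling phase: issue random motor commands and record their auditory outcomes.
motor_samples = rng.uniform(-1.0, 1.0, size=(5000, N_ARTIC))
auditory_samples = vocal_tract(motor_samples)

# Learn the motor-to-auditory relationship (a forward model) from the babbled data.
# DIVA tunes its neural mappings incrementally; ordinary least squares is used
# here only to make the idea of learning from self-generated feedback concrete.
forward_model, *_ = np.linalg.lstsq(motor_samples, auditory_samples, rcond=None)

# After babbling, the learned mapping predicts the sensory outcome of a new command.
test_command = rng.uniform(-1.0, 1.0, size=N_ARTIC)
print("prediction error:",
      np.linalg.norm(test_command @ forward_model - vocal_tract(test_command)))
```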

DIVA provides unified explanations for a number of long-studied speech production phenomena, including motor equivalence, contextual variability, speaking rate effects, anticipatory coarticulation, and carryover coarticulation. The model is schematized in the figure below. Each block in the diagram corresponds to a hypothesized set of neurons in the human speech system.


Figure 1: Schematic of the DIVA Model
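
One way to read the schematic: a stored feedforward command for each speech sound is combined with feedback corrections derived from auditory and somatosensory errors. The sketch below illustrates only that combination; the gains, dimensionalities, and error-to-motor projections are placeholder assumptions, not the model's published mappings or equations.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ARTIC, N_AUD, N_SOM = 8, 3, 4   # placeholder dimensionalities

def articulator_command(feedforward, aud_error, som_error,
                        aud_to_motor, som_to_motor,
                        g_aud=0.5, g_som=0.5):
    """Combine a feedforward articulator command with corrections driven by
    auditory and somatosensory errors.  The additive combination and fixed
    gains are simplifying assumptions for illustration only."""
    feedback = g_aud * (aud_to_motor @ aud_error) + g_som * (som_to_motor @ som_error)
    return feedforward + feedback

# Placeholder projections from sensory-error space back to articulator space
# (in the model these roles are played by learned sensory-motor mappings).
aud_to_motor = 0.1 * rng.normal(size=(N_ARTIC, N_AUD))
som_to_motor = 0.1 * rng.normal(size=(N_ARTIC, N_SOM))

command = articulator_command(
    feedforward=np.zeros(N_ARTIC),           # stored command for the target sound
    aud_error=np.array([10.0, -5.0, 0.0]),   # e.g., formant mismatch (arbitrary units)
    som_error=np.zeros(N_SOM),
    aud_to_motor=aud_to_motor,
    som_to_motor=som_to_motor,
)
print(command)
```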

The most recent version of the DIVA model is described in detail in Guenther, Ghosh, and Tourville (2006), Brain and Language. A less technical description is available in Guenther (2006), Journal of Communication Disorders. The model's accounts of a wide range of speech production phenomena are presented in Guenther, Hampson, and Johnson (1998), Psychological Review, and in Guenther (1995), Psychological Review.