Neuroscientist Tyler Perrachione, who studies speech and language, is starting a new research project: decoding what the human brain is hearing when a person listens to speech. The Sargent College of Health & Rehabilitation Sciences assistant professor of speech, language, and hearing sciences recently walked out of his office, crossed Commonwealth Avenue, and entered the Rajen Kilachand Center for Integrated Life Sciences & Engineering’s Cognitive Neuroimaging Center (CNC). There he spent two hours scanning a human subject’s brain for the study using BU’s new Siemens Prisma 3 Tesla MRI machine.

It was the first time Perrachione, director of the Communication Neuroscience Research Laboratory, didn’t have to wait for the bus or call an Uber to schlep across the river to MIT’s brain imaging center to do his scanning.

“Everything went swimmingly and the computers are churning away at the data,” says Perrachione, who was part of a small team of BU researchers who worked closely on the CNC with Payette, the Boston architectural firm that designed the Kilachand Center, at 610 Commonwealth Avenue.

Perrachione’s enthusiasm is being echoed by other BU researchers who have been using the CNC’s MRI scanner, on the center’s first floor, since October 2017. “Everyone’s been thrilled with the quality of the data they’re getting,” says CNC associate director Jay Bohland. “In addition to the scanner itself, we’ve worked hard to put together the devices and tools necessary to support modern cognitive neuroscience, which depends on high-quality, precisely controlled stimulation and response recording.”

Images of the human brain showing which parts are activated when a person is listening to words, from the scan of a human subject’s brain that neuroscientist Tyler Perrachione ran using the 3 Tesla MRI machine at BU’s new Cognitive Neuroimaging Center. The subject was listening to lists of monosyllabic words such as “boot,” “toad,” “deck,” and “give.” Perrachione is collecting the data as part of his project to “decode” how the brain recognizes the same word spoken by different people. His question: “How do brains know that words are the same, even when every person’s speech has a unique sound?” Images courtesy of Perrachione

The CNC is available to neuroscientists across the University, on both the Charles River Campus and the Medical Campus. “We’re excited to welcome researchers who have been using imaging facilities at other Boston sites and new users who are interested in expanding their research to include neuroimaging methods,” says CNC director Chantal Stern, a College of Arts & Sciences professor of psychological and brain sciences. Stern is the principal investigator of a $1.6 million National Science Foundation grant that supports the scanner.

Bohland says the CNC team will work with investigators at the new brain imaging suite so that they can “become comfortable with the equipment, center capabilities, and quality of data obtained before moving funded studies to the facility.”

Sam Ling, a CAS assistant professor of psychological and brain sciences, has been using the scanner for research funded by the National Institutes of Health aimed, he says, at “understanding how our brains process what we see and the neural computations that allow us to alter that processing to cater to our moment-to-moment thoughts and desires.”

“Our lab has totally embraced the scanner,” he says. “The quality of the data we’ve been acquiring is outstanding. We’re already starting to write up manuscripts based on some of the results.”

Perrachione explains why the CNC’s scanner is an amazing tool for his research. “First, it has a great signal-to-noise ratio, meaning that we can see the living, thinking brain in unprecedented detail,” he says. “Second, it has a new technology called simultaneous multislice imaging, which is a fancy way of saying we can take an fMRI picture really, really fast—up to four times faster than we used to. This means we can do scans that are quieter and get a lot more data in the same amount of time, which is really important when you’re studying speech and hearing. Getting a lot more data in each scan means we can better understand how individual brains are working, rather than having to get an average of a lot of brains. This will help us develop a more personalized and sensitive understanding of brain function and its relationship to behavior and health.

“But what’s really great is not just the scanner, but the whole research environment it’s the anchor for,” says Perrachione, whose research is funded by the NIH National Institute on Deafness and Other Communication Disorders and the Brain and Behavior Research Foundation. “The imaging center is a collaborative community to foster new discoveries and new science, not just the device that takes the pictures, exciting as that is.”