How do decisions made in the brain influence behavior?
Jerry Chen’s lab is engineering new tools to visualize the brain’s processes
By Tess Joosse
At the crux of Jerry Chen’s research are some of the most elemental questions about the cognitive experience. “How do we perceive the world? How do we use that information to make decisions? We’re interested in understanding the basic functions of how we behave,” he says.
For each event perceived and decision made, there is a complex web of neural circuitry involved. Understanding how the central nervous system’s many-layered parts work together to create knowledge from past experiences is an overarching goal of Chen’s lab, which has published papers in journals including Science, Nature Communications, Nature Methods, and Neuron.
“We’re interested in studying this question at different scales. In the brain, we’re interested in how genes give rise to molecules that define how neurons function, how the neurons themselves come together to form circuits, and how those circuits then carry out computations,” says Chen, a College of Arts and Sciences assistant professor of biology, an affiliated College of Engineering assistant professor of biomedical engineering, and a faculty member in both the Photonics Center and the Neurophotonics Center.
Across these many scales, Chen and members of his lab use an array of imaging techniques to get a front-row seat to what’s happening within the brain. “As researchers, we like to actually be able to look at things we’re measuring and observe these processes,” he says. They use these tools to zoom in on individual molecules within a neuron, and “zoom out to look at how all these neurons are talking to each other.”
The team’s approaches aren’t just focused on spatial scales but also on temporal ones, investigating what’s happening in sections of the brain across time. Neurons communicate with one another within milliseconds, Chen explains, while processes like learning and memory occur over longer ranges of time.
And even two brain cells sitting next to each other can have vast differences. “All the neurons in our brain are not all the same. They’re very diverse, and they’re potentially carrying out a lot of distinct roles. And their roles are largely defined by the genes that are being expressed,” Chen says.
To study the relationships between gene expression, neuronal activity, and behavior, Chen’s lab created a workflow called “comprehensive readout of activity and cell type markers,” or “CRACK” for short. The experiments involve first training mice to lick a sensor when they detect a matching pair of stimuli, a task similar to the card game known as “memory,” Chen explains, in which a player flips over cards (arrayed face down on a table) one by one with the goal of matching pairs.
In the lab’s mouse version, the researchers use a rotor to brush the whiskers of a mouse either toward or away from its face. Whiskers are highly sensitive and convey important information to the animal, Chen says. “They have very fine tactile acuity with their whiskers, similar to what we have with our fingertips,” he says. The mouse is rewarded with a sip of water for licking in response to two brushes that match. “The animal has to generalize and make a rule to say, ‘These two bits of whisker stimuli were the same, or they were different,’” Chen says.
As mice perform this memory task, Chen and the members of his lab—including David Lee, a sixth-year PhD student—capture their behavior on high-speed video and use 2-photon calcium imaging to peer deep into their brains. Calcium rushes into a neuron as it fires, and 2-photon imaging allows the team to track these changes at single-cell resolution to determine where and when neurons are active.
“Then at the end of our experiments, we take the tissue out, and we find the neurons that we imaged previously in the living brain,” Chen explains. The team then uses a technique called fluorescence in situ hybridization to stain these samples for the mRNA of multiple genes. Combined, the techniques paint a picture of which genes and circuits are involved in different scenarios of learning and decision making.
These detailed visualizations first piqued Lee’s interest as a new graduate student, when Chen showed them in a first-year seminar research presentation. “When I first saw videos of neurons actually lighting up while an animal was awake and behaving and thinking, I was like, ‘That’s exactly what I want to do,’” Lee says. “We’re able to see what happens as animals learn and think.”
Since then, in Chen’s lab, Lee has used these techniques to study the perirhinal cortex, an area of the brain that receives information from the whisker system in mice and is linked to regions implicated in learning and memory. As the animals figure out the whisker-brush memory “game,” Lee images the same set of cells twice a day, then examines how those cells changed over the course of learning.
He’s found that as a mouse learns how to accomplish the task over time, a “reward” signal in the mouse’s brain starts to show up earlier and earlier during the whisker-brush trials. Initially, the reward signal can only be detected at the end of the memory task. But then, “as the animals begin to have enough information to know they’re going to be rewarded,” Lee says, “it begins to occur not just at the end of the trial but during the stimuli, when the animal is actually thinking about it and saying, ‘Oh, I have enough information. I can get water here.’”
Members of the Chen lab are also developing new microscopy tools to assist in their investigations of neural circuits. Xin Ye, a seventh-year PhD student in the lab, has created a microscope for viewing voltage changes in individual cells. While 2-photon calcium imaging is a powerful tool for visualizing neuronal activity, calcium is still just a proxy for the change in voltage that actually indicates a neuron is firing. A tool that can image voltage directly is highly desirable, Ye says. “It’s something that scientists want.”
To design the microscope, Ye and her colleagues employed a phenomenon called temporal multiplexing, in which multiple parallel laser beams are pulsed at staggered time intervals. “We have this [concept], but nobody has actually applied it to ultrafast imaging,” she says. To build it, they combined, modeled, and tested many different components in a long process of troubleshooting and trial and error. The result was a bespoke setup that places multiple laser beams in the same field of view, enabling the team to scan a small area of the brain for changes in voltage.
“The concept is set at the very beginning, but the middle is more like an engineering project to put this concept into an actual microscope,” Ye explains. “It’s extremely exciting … to start building something totally new, from scratch,” she says of the project.
Chen notes that this type of voltage imaging is now far more practical and will likely be more widely adopted across neuroscience research in the coming years. “What everybody wants to measure is voltage,” he says.
Meanwhile, Lee is developing automated methods for training the mice on the whisker task, hoping not just to teach them the memory game but also to measure, with new techniques, whether and how they learn it. While manually training the mice, the team has observed a remarkable range of idiosyncrasies in how different mice try to chip away at the task. “Some are running around in circles, some mice are just impulsively engaging with the system, some animals appear to be more deliberately trying to pay attention to the stimulus and respond accordingly,” Chen says. “Behaviorally, you see a lot of differences that we’re trying to characterize.”
Automating the training frees up the group to study this behavioral diversity in depth, he says. “It allows us to look more broadly at the variations in individuals, and then start to home in on what are the neuronal differences that could be causing these behavioral differences.”
These questions about variations in gene expression and circuitry are linked to why Chen chose to study the brain in the first place: “Introspection — just understanding who I am as a person,” he says. “We’re all individuals, right? Being able to look at [and compare] individual brains across scales allows us to answer questions about how we are different, and how we think differently.”