Brainy but so Artificial

By Courtney Humphries • Photographs by Cydney Scott

Neuromorphics Lab researchers surround their prize robot, an iRobot Create® model fitted with a robotic arm that can be controlled by an EEG (electroencephalogram) cap. From left: lab director Max Versace and PhD students Ben Chandler, a lab intellectual lead and project grant writer; Byron Galbraith, a robot developer; and Sean Lorenz, a developer who works on interfacing the robot with an EEG cap.

In BU's Neuromorphics Lab, an interdisciplinary team of neuroscientists, biologists, engineers, computer scientists, and mathematicians is developing robots modeled on the human brain that can learn on their own, make decisions, and adapt to their environments.


Massimiliano "Max" Versace sits in a conference room at Boston University's Neuromorphics Lab headquarters. He holds one of the lab's frequent visitors—his infant son, who is looking intently at his father. Versace, who is a senior research scientist and the lab's director, says of his baby, "This is a great example of a general-purpose learning machine," and he is only half joking.

Brain on a chip: designed by Assistant Professor Ajay Joshi and PhD student Schuyler Eldridge, Neuromorphics Lab affiliates from the Electrical & Computer Engineering Department at the College of Engineering, this chip is capable of translating neural models into portable, low-power hardware.

We've just been discussing the lab's primary goal: to build an artificial intelligence that is smarter than any robot yet created. As every proud parent knows, babies have astonishing brains; they take in a wealth of information from the senses and, over time, learn how to move around, communicate, and begin to make independent decisions. Compared to a baby—or even the simplest animal—computers are sorely lacking in learning ability. Even sophisticated robots and software programs can only accomplish tasks that they're specifically programmed to do, and their ability to learn is limited by their programming. Your Roomba® may manage to clean your house with random movements, but it doesn't learn which rooms collect the most dirt or the least distracting time of day to clean.

Versace (GRS'07) calls this limited capability "special-purpose intelligence," and his group is aiming for something much more sophisticated. The Neuromorphics Lab, launched in the summer of 2010 as part of the National Science Foundation-funded Center of Excellence for Learning in Education, Science & Technology (CELEST), is pushing the boundaries of artificial intelligence by creating a new kind of computer that can sense, learn, and adapt—all the behaviors that come naturally to a living brain.

"Your Roomba® may manage to clean your house with random movements, but it doesn't learn which rooms collect the most dirt, or the least distracting time of day to clean."

Smart and Sophisticated

BU is renowned for its work in computational neuroscience—creating computer algorithms that describe the complex behavior of brains. The Neuromorphics Lab draws on that tradition, but is focused on turning this fundamental knowledge into real-world applications. The primary project is an ambitious program to develop what Versace refers to as a "brain on a chip." The project, dubbed MoNETA (short for Modular Neural Exploring Traveling Agent, and also the name of the Roman goddess of memory), would become the brain behind virtual and robotic agents that can learn on their own to interact with new environments, using the information they glean to make decisions and perform tasks.

About to tear off one of the "ears" (a microphone) of a robot mounted on a Netbook, seven-month-old Gabriel Versace hangs out at the Neuromorphics Lab with his father, Max, lab director.

"We want to eliminate, as much as possible, human intervention in deciding what the robot does," Versace says. This is a tall order, which is why the lab is breaking down aspects of behavior, tackling them one at a time.

To demonstrate this idea, Anatoli Gorchetchnikov (GRS'05), a research assistant professor who is leading the MoNETA project, points to a projected screen in the conference room that shows a classic psychological experiment called the Morris water maze. A cartoon depicts the position of a rat that is dropped in a round pool of water. Rats can swim but they don't like to—the animal explores the pool until it finds a partially submerged platform that it can stand on. On subsequent trials, it remembers the location of the platform and finds it much more quickly.

In this case, however, there is no real rat: instead, it's a computer program designed to mimic a rat's behavior. But rather than being programmed with the explicit task of finding the platform, this program has a series of motivations: a lack of comfort when in water motivates it to find solid ground, for instance, while a "curiosity drive" compels it to search nearby places it hasn't been before. The idea is to create algorithms that produce lifelike behavior without explicitly telling the program what to do.
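To make the idea concrete, here is a minimal sketch, in Python and purely illustrative: the grid, the drive names, and the weights are our assumptions, not the lab's actual MoNETA code. It shows how competing motivations can produce maze-solving behavior that was never explicitly programmed.

    import random

    # Hypothetical gridworld stand-in for the Morris water maze: every cell
    # is "water" except one hidden platform cell. The agent is never told to
    # find the platform; its path emerges from two competing drives.
    SIZE = 10
    PLATFORM = (7, 2)   # hidden from the agent until it steps there

    def neighbors(pos):
        x, y = pos
        steps = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        return [(a, b) for a, b in steps if 0 <= a < SIZE and 0 <= b < SIZE]

    def trial(memory, max_steps=2000):
        pos = (0, 0)
        visits = {pos: 1}   # per-trial record used by the curiosity drive
        for step in range(max_steps):
            if pos == PLATFORM:
                memory["platform"] = pos   # remember where solid ground was
                return step                # discomfort relieved; trial over
            def score(cell):
                # Curiosity drive: prefer cells visited least often.
                curiosity = 1.0 / (1 + visits.get(cell, 0))
                # Comfort drive: once a platform is remembered, being far
                # from it is uncomfortable, so closer cells score higher.
                comfort = 0.0
                if "platform" in memory:
                    px, py = memory["platform"]
                    comfort = -0.5 * (abs(cell[0] - px) + abs(cell[1] - py))
                return curiosity + comfort + 0.1 * random.random()
            pos = max(neighbors(pos), key=score)
            visits[pos] = visits.get(pos, 0) + 1
        return max_steps   # never found the platform this trial

    memory = {}
    for t in range(3):
        print("trial", t, "steps to platform:", trial(memory))

On the first trial the agent wanders under its curiosity drive alone; once it stumbles onto the platform, the remembered location lets the comfort drive pull it there quickly on later trials, echoing the real rat's learning curve.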

Other lab members are addressing different aspects of brain function. Gennady Livitz (GRS'11), who recently earned his PhD in Cognitive & Neural Systems, is working with postdoc Jasmin Léveillé (GRS'10) on the visual systems of MoNETA—how it will interpret what it sees—and implementing those systems in simple robots. Others are working on how it will sense sounds in its environment, and how it will make decisions.

“We want to eliminate, as much as possible, human intervention in deciding what the robot does.” —Max Versace

Wired for Brain Power

Modeling the complexities of the brain is only the first task. Versace and his colleagues believe that a lifelike artificial brain would require innovations in both the software and the hardware that houses it. While some lab members are creating computer models of the brain, the group is also working in partnership with Hewlett-Packard to develop an operating system for such a brain, called Cog Ex Machina, or Cog. This software will run on the memristor, an innovative electrical component created by HP that is just a few atoms wide.
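For readers who want the textbook definition: the memristor, theorized by circuit theorist Leon Chua in 1971, is an element whose resistance depends on the history of the charge that has flowed through it, which is exactly what lets it store a memory in the same place it processes a signal. In the ideal model (a general definition, not a description of HP's specific device):

    % Ideal memristor: the memristance M depends on the accumulated charge q
    v(t) = M\big(q(t)\big)\, i(t), \qquad q(t) = \int_{-\infty}^{t} i(\tau)\, d\tau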

Ben Chandler, a PhD candidate in Cognitive & Neural Systems, explains that a new kind of hardware is necessary to overcome fundamental physical limits in what current computer chips can accomplish. A key difference between the way brains are wired and the way computers are wired is that computers store information in a separate place from where they process it: when they perform a calculation, they retrieve the necessary information from memory, perform the processing task, and then store the result in another location. Brain cells, however, manage to do all of this at the same time and location, making transfer of information from cell to cell much faster and more efficient.
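A toy contrast makes the difference visible. The Python below is illustrative only: a real memristor crossbar computes in analog hardware, and the numbers here are invented.

    # Four "synaptic" weights and four inputs; the task is one dot product.
    weights = [0.2, -0.5, 0.9, 0.1]
    inputs  = [1.0,  0.5, 0.3, 2.0]

    # Conventional wiring: every weight is fetched from memory, used in the
    # processor, and the running result is written back. Each operand makes
    # a round trip over the memory bus, the bottleneck described above.
    acc = 0.0
    for i in range(len(weights)):
        w = weights[i]        # load weight from memory
        x = inputs[i]         # load input from memory
        acc = acc + w * x     # compute in the processor
    result = acc              # store the answer

    # Brain-like wiring: each cell owns its weight and multiplies in place,
    # the way a memristor junction scales a signal by its stored
    # conductance; only local products leave the cell, summing like
    # currents on a shared wire.
    class Cell:
        def __init__(self, w):
            self.w = w             # the weight lives at the compute site
        def apply(self, x):
            return self.w * x      # multiply where the weight is stored

    cells = [Cell(w) for w in weights]
    result_in_place = sum(c.apply(x) for c, x in zip(cells, inputs))
    assert abs(result - result_in_place) < 1e-12   # same math, different wiring

Both versions compute the same answer; the point is where the data lives while it is being computed.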

The robot (above, left), an iRobot Create® model similar to a Roomba® stripped of its vacuum-cleaning capabilities (near foreground), is used to perform spatial navigation tasks—pictured here "learning," based on its experience, to navigate the red and green obstacles. Researchers, from left, are postdoc Florian Raudies, who works on neural models of vision, and Research Assistant Professor Anatoli Gorchetchnikov, MoNETA project leader and a senior researcher in the Neuromorphics Lab. In the background are high school interns Samuel Kim (at left) and Vincent Kee. Postdoc Jasmin Léveillé (above, right) designs the visual systems of virtual and robotic animats, or artificial animals, using an environment similar to that of a video game to train artificial brains before deploying them in robots.

Another key difference is power. For all its tremendous activity, the human brain runs on the equivalent of a 20-watt lightbulb. If the goal is to create a free-moving machine with intelligence on par with even a small mammal's, it can't involve large, power-guzzling supercomputers. Such a machine must have a "brain" that is dense, compact, and requires little power. Memristors, Versace says, allow hardware designers to build chips with unprecedented density that operate at very low power.

“We are a bridge between neuroscience and engineering. We are fluent in both languages. We can talk neurotransmitters and molecules with biologists, and electronics and transistors with engineers.” —Max Versace

New Connections

Because the lab's work requires applying a deep understanding of the brain to the practical problems of software development, and then integrating that software into computer chips and eventually robotic vehicles and devices, it is highly interdisciplinary. "We are a bridge between neuroscience and engineering," Versace says. "We are fluent in both languages. We can talk neurotransmitters and molecules with biologists, and electronics and transistors with engineers." Lab members come from a wide range of backgrounds, some bringing knowledge in neuroscience, psychology, and biology, and others in computer science, engineering, and math. To thrive here, however, they need to feel comfortable working at both ends of the bridge.

PhD student Sean Lorenz works on interfacing the robot with an EEG cap. The target applications are medical, particularly robotic devices that would enable people with disabilities to interact with the world by means of noninvasive brain/machine interfaces.

The laboratory's staff composition signals a focus on the future: newer faculty members and graduate students spearhead projects, without the traditional hierarchies of an academic lab. "It's a brand-new field and it's wide open," says Chandler. "For anyone who has the interest and the talent, there's an opportunity to move in." Chandler personifies this point: one of the lab's cofounders, he has taken a leading role in the partnership between the lab and HP, while still managing to make progress on his graduate thesis.

Usually academic labs make theoretical advances and publish scientific papers, but transferring this work to the real world requires a different approach. Heather Ames (GRS'09), a postdoctoral fellow in Cognitive & Neural Systems and one of the Neuromorphics Lab's founding members, is leading an outreach effort to engage industry in the lab's work. She and her colleagues believe that such partnerships with industry are crucial to keep these ideas from languishing in a lab. Versace says that the ultimate goal is to "take neuroscience out of the lab" and turn theory into reality. ■

Boston University's leadership in the field of neural and computational science has entered a new phase, with the chartering of the Center for Computational Neuroscience & Neural Technology. Barbara Shinn-Cunningham, professor of biomedical engineering at the College of Engineering, is director and Nancy Kopell, professor of mathematics and statistics at the College of Arts & Sciences, is co-director of the new research center, known as CompNet and located at 677 Beacon Street.
