Machines That Can Multitask
Imagine a world in which each person could perform only one task: one excels at reading other people's facial expressions, another can assist in a single kind of surgery, and a third can recognize and avoid objects. No one can adapt or learn the tasks that others know. Despite the hype that has surrounded artificial intelligence over the years, with its promise that robots and computers could do all the things humans and other animals do, artificial systems today look much more like this scenario of narrowly limited abilities.
Designing a robot that can sense, learn, make decisions, and move on its own is the ambitious goal of a project led by Massimiliano Versace, head of the Neuromorphics Lab, which is part of the multi-institution Center of Excellence for Learning in Education, Science & Technology (CELEST), funded by the National Science Foundation and hosted at Boston University. Most robots, he explains, are designed for one particular problem.
“We’re not interested in special-purpose intelligence,” says Versace. The group’s leading project, Modular Neural Exploring Traveling Agent (MoNETA)—co-funded by Hewlett-Packard (HP) and CELEST—is a comprehensive software program referred to as a “brain on a chip.” MoNETA, as Versace and his team envision it, would eventually be an autonomous robot that can sense its surroundings, identify important information, and use that information to make decisions and perform tasks. The group’s research was recently published in IEEE Spectrum as the cover article, written by team member Ben Chandler.
The Neuromorphics Lab is developing brain models, biologically inspired algorithms that mimic the way brains work, and, in collaboration with HP, the operating system that will run on the chip. The hardware is based on an innovative type of electrical component only a few atoms wide, called a memristor. Versace explains that this technology will enable more lifelike intelligence by processing information faster and more efficiently, much as neurons in the brain do. Memristors are used to simulate the billions of synapses found in biological brains. Compared with today's technology, they allow hardware designers to build chips of unprecedented density that operate at very low power, both critical design requirements for the brain of a free-moving machine.
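The core idea, that a memristor's conductance can itself store a synaptic weight and strengthen with correlated activity, can be sketched in a few lines. This toy model is purely illustrative: the class, constants, and Hebbian-style update rule below are invented for this sketch and bear no relation to HP's actual devices.

```python
# Toy model of a memristive synapse: the device's conductance acts as the
# stored synaptic weight, drifting with the history of activity it carries.
# Illustration only; not a model of any real memristor hardware.

class MemristiveSynapse:
    def __init__(self, g_min=0.0, g_max=1.0, g=0.5):
        self.g_min, self.g_max = g_min, g_max
        self.g = g  # conductance, doubling as the synaptic weight

    def update(self, pre, post, rate=0.1):
        """Hebbian-style plasticity: correlated activity raises conductance."""
        self.g += rate * pre * post
        self.g = max(self.g_min, min(self.g_max, self.g))  # physical device limits

    def transmit(self, pre):
        """Output scales the input by the conductance (Ohm's-law analogy)."""
        return self.g * pre

syn = MemristiveSynapse()
for _ in range(5):
    syn.update(pre=1.0, post=1.0)  # repeated co-activation
print(round(syn.transmit(1.0), 2))  # → 1.0 (conductance saturated at g_max)
```

The point of the analogy is that no separate memory is needed: the weight lives in the device itself, which is why crossbars of memristors can be so dense and power-efficient.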
This is a tall order, so the lab is breaking down aspects of behavior and tackling them one piece at a time, creating computer models and programming simple robots to simulate ways the brain can learn and adapt. For instance, Anatoli Gorchetchnikov, research assistant professor of cognitive and neural systems, is creating computer algorithms that simulate how a rat can learn to find its way to a platform in a pool of water. Schuyler Eldridge, a PhD student in electrical and computer engineering, is working on programs for decision-making processes based on visual information, while Sean Patrick, a PhD student in cognitive and neural systems, is using knowledge about how the brain operates to enable robots to think as flexibly as animals do.
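The lab's actual navigation models are biologically detailed, but the water-maze task itself, learning a route to a hidden platform by trial and error, can be loosely illustrated with generic tabular Q-learning. Everything in this sketch (the one-dimensional "pool", the cell count, the learning constants) is invented for illustration and is not the lab's algorithm.

```python
# Stand-in for place learning: tabular Q-learning on a tiny 1-D "pool"
# where one cell holds the hidden platform. Illustration only.
import random

random.seed(0)

N_CELLS, GOAL = 8, 6     # pool discretised into 8 cells; platform at cell 6
ACTIONS = (-1, +1)       # swim left or swim right
Q = {(s, a): 0.0 for s in range(N_CELLS) for a in ACTIONS}

def step(s, a):
    """Move one cell, clamped at the pool walls; reward only on the platform."""
    s2 = max(0, min(N_CELLS - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0)

def greedy(s):
    """Best-valued action, with random tie-breaking to avoid directional bias."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for trial in range(500):
    s = random.randrange(N_CELLS)   # as in the water maze, start points vary
    for _ in range(60):
        a = random.choice(ACTIONS) if random.random() < 0.2 else greedy(s)
        s2, r = step(s, a)
        # Standard Q-learning update (learning rate 0.5, discount 0.9)
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if r:
            break   # found the platform; end the trial

# Greedy swim direction at each cell left of the platform after training
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_CELLS)]
print(policy[:GOAL])
```

After training, the learned policy points toward the platform from the cells left of it, which is the behavioral signature the water-maze task measures: the animal (or agent) heads for the platform from wherever it is released.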
The long-term goal is to create an artificial intelligence that can think for itself, Versace says. “There is no a priori knowledge; they have to adapt and learn in the way they interact with the environment.”