Research Spotlight Archive
Title: Plastic Neuromorphic Hardware for Autonomous Navigation in Mobile Robots
Funding: This work is supported in part by the Center of Excellence for Learning in Education, Science and Technology (CELEST), a National Science Foundation Science of Learning Center (SBE-0354378 and OMA-0835976).
Background: Mobile land and aerial robots collect large quantities of sensor data, but the processing, evaluation, and analysis of these data are restricted either by the limited bandwidth available to broadcast the data to offline computing resources or by the limited computing power on board the robot. In contrast, biological organisms solve this problem very efficiently, using low-power, compact “wetware” to process large amounts of sensory information while interfacing with a complex, rapidly changing world. To reduce the computational load of analyzing sensory data on mobile robots, we implement biologically-inspired algorithms in customized hardware that meets computation-time, power, and weight constraints that general-purpose hardware cannot achieve.
Description: The goal of this project is to develop and translate adaptive neural models into custom neuromorphic hardware for autonomous learning in sensory, motivational, planning, and reinforcement circuits in mobile robots. In particular, we focus on vision-based navigation. The video stream obtained from passive image sensors attached to a mobile robot provides rich information about the robot’s environment and movement, and optic flow-based models can extract the key pieces of information needed for navigation. We develop biologically-inspired algorithms for the computation of optic flow from video data, the extraction of information about robot movement and environment from the computed flow, and the integration of this information into a reinforcement learning strategy that trains for obstacle avoidance and thus enables collision-free navigation in small, cluttered environments. We are currently testing our algorithms on a Field Programmable Gate Array (FPGA), which provides the flexibility to modify the algorithms. The refined algorithms will ultimately be mapped to mixed-signal application-specific integrated circuit (ASIC) chips, providing the power and area savings necessary for mobile robotic applications. The long-term goal is to develop and test full-custom neuromorphic hardware that adapts its behavior to different environments during robot navigation. This would expand the repertoire of brain areas and related behaviors that can be efficiently translated into low-power, compact hardware to power the next generation of adaptive mobile robots.
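As one concrete example of a biologically-inspired motion computation, the Python sketch below implements a Reichardt-style correlation detector, a classic model of elementary motion detection in insect vision. The project’s actual flow algorithms are not specified here, so the function names, the filter coefficient, and the toy stimulus should be read as illustrative rather than as the implemented design.

    import numpy as np

    def lowpass(signal, alpha):
        # First-order low-pass filter; serves as the temporal delay element.
        out = np.zeros_like(signal, dtype=float)
        for t in range(1, len(signal)):
            out[t] = out[t - 1] + alpha * (signal[t] - out[t - 1])
        return out

    def reichardt_detector(left, right, alpha=0.1):
        # Correlate each input's delayed copy with the neighbor's undelayed
        # signal; subtracting the mirror-symmetric arm yields a signed,
        # direction-selective motion response.
        return lowpass(left, alpha) * right - lowpass(right, alpha) * left

    # Example: a bright edge reaches the left sensor before the right one
    # (motion from left to right), producing a positive mean response.
    t = np.arange(200)
    left = (t > 50).astype(float)
    right = (t > 60).astype(float)
    print(reichardt_detector(left, right).mean())  # > 0 for rightward motion

Arrays of such detectors, one per image location and preferred direction, give a dense, local estimate of image motion that can be computed with the simple multiply-accumulate operations that map well onto FPGA and mixed-signal hardware.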
Results: So far, the project has focused on the extraction of optic flow from image streams in textured indoor environments such as the one shown in Figure 1. Optic flow, the movement of distinct objects and features in an image stream, is detected using computations derived from basic biological visual processing mechanisms. Figure 2 visualizes this flow for the image sequence from Figure 1: hue (see Figure 3) encodes the direction of motion, and saturation encodes the speed of motion. The initially detected flow will next be integrated temporally and spatially, and environment models will be used to extract a state variable for a reinforcement learner from the processed flow.
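To illustrate the color code described above, the following sketch maps a dense flow field (u, v) to an RGB image, with hue encoding direction and saturation encoding speed as in Figure 2. The function name, the normalization by the maximum speed, and the use of full brightness are illustrative choices, not details taken from the project.

    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def flow_to_rgb(u, v, max_speed=None):
        # u, v: 2-D arrays holding the horizontal and vertical flow components.
        speed = np.sqrt(u**2 + v**2)
        if max_speed is None:
            max_speed = max(speed.max(), 1e-9)  # avoid division by zero
        hsv = np.zeros(u.shape + (3,))
        hsv[..., 0] = (np.arctan2(v, u) + np.pi) / (2 * np.pi)  # direction -> hue
        hsv[..., 1] = np.clip(speed / max_speed, 0.0, 1.0)      # speed -> saturation
        hsv[..., 2] = 1.0                                       # full brightness
        return hsv_to_rgb(hsv)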
The task of the reinforcement learning module is to progressively discourage motor behaviors that lead to impact with objects in the environment and to promote behaviors that do not. Simulations in a virtual environment use the expected flow to extract state variables for reinforcement learning. Learning is successful: collisions per epoch decrease from approximately 15 to 2 over the first 500 epochs (see Figure 4).
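The source does not name the specific learning rule, but the idea of punishing impacts and rewarding collision-free motion can be sketched with standard tabular Q-learning. The state and action counts, reward values, hyperparameters, and the ToyEnv stub below are all hypothetical; in the project, states would come from the flow-derived variables described above.

    import numpy as np

    rng = np.random.default_rng(0)

    N_STATES, N_ACTIONS = 64, 3          # discretized flow states; steer left/straight/right
    ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

    Q = np.zeros((N_STATES, N_ACTIONS))

    class ToyEnv:
        # Stand-in for the virtual environment; real transitions would be
        # driven by expected-flow state variables, not random draws.
        def step(self, action):
            next_state = int(rng.integers(N_STATES))
            collided = rng.random() < 0.05   # placeholder collision event
            return next_state, collided

    def q_learning_step(env, state):
        # Epsilon-greedy action selection over the current Q-row.
        if rng.random() < EPSILON:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        next_state, collided = env.step(action)
        reward = -1.0 if collided else 0.01  # punish impacts, mildly reward progress
        # Temporal-difference update toward the bootstrapped one-step target.
        target = reward + GAMMA * np.max(Q[next_state])
        Q[state, action] += ALPHA * (target - Q[state, action])
        return next_state

    env, state = ToyEnv(), 0
    for _ in range(1000):
        state = q_learning_step(env, state)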