Bravi Symposium: Visually Guided Navigation and Scene Perception
Dec 2, 2011
Organizers: Finnegan J. Calabro and Lucia M. Vaina
The meeting will be hosted by the
Brain and Vision Research Laboratory,
Departments of Biomedical Engineering and Neurology, and the Graduate Program for Neuroscience
Boston University
and will be held
Dec 2, 2011, 9am-1pm
64 Cummington St, Room 150
Boston University, Boston, MA
Schedule
9am. Introduction
9:10am. Detection of Moving Objects by Moving Observers: Psychophysics and Modeling
Constance Royden
Dept. of Mathematics and Computer Science
College of the Holy Cross
As a person moves through the environment, he or she must be able to detect moving objects in order to intercept or avoid them. For a moving observer, the retinal images of stationary objects themselves move, forming a pattern known as the optic flow field. Thus, one cannot distinguish an independently moving object from stationary objects using image motion alone. In theory, one can identify moving objects if their angle or speed of motion differs from the pattern generated by the images of stationary objects. I will present results from psychophysical experiments examining whether people can use this information to identify moving objects. I will also present data from a computational model that uses differences in image speed and angle to localize the borders of moving objects. Finally, I will present results showing how monocular depth cues interact with motion cues to aid in the detection of moving objects.
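To make the motion-difference idea concrete, here is a minimal Python sketch (not Royden's model; function names and numbers are hypothetical). It assumes pure observer translation, so the flow from stationary points radiates from a single focus of expansion (FOE), and it flags points whose measured flow direction deviates from that radial pattern by more than a threshold angle:

    import numpy as np

    def expected_radial(points, foe):
        """Unit flow directions for stationary points under pure observer
        translation: flow radiates outward from the focus of expansion."""
        d = points - foe
        return d / np.linalg.norm(d, axis=1, keepdims=True)

    def flag_moving(points, flow, foe, angle_thresh_deg=10.0):
        """Flag points whose measured flow direction deviates from the
        stationary-scene (radial) direction by more than a threshold."""
        measured = flow / np.linalg.norm(flow, axis=1, keepdims=True)
        cos_dev = (expected_radial(points, foe) * measured).sum(axis=1)
        deviation = np.degrees(np.arccos(np.clip(cos_dev, -1.0, 1.0)))
        return deviation > angle_thresh_deg

    # Hypothetical image points and flow vectors (FOE at the origin):
    pts = np.array([[2.0, 0.0], [0.0, 3.0], [-1.0, -1.0]])
    flow = np.array([[1.0, 0.0],   # radial: consistent with stationary scene
                     [0.0, 1.5],   # radial: consistent with stationary scene
                     [1.0, 0.0]])  # non-radial: independently moving
    print(flag_moving(pts, flow, np.array([0.0, 0.0])))  # [False False  True]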
10am. Interaction of Visual Landmarks and Path Integration in Human Navigation
William H. Warren and Mintao Zhao
Dept. of Cognitive, Linguistic & Psychological Sciences
Brown University
Humans and other animals are able to navigate back to a known location using both visual landmarks and path integration. How do these visual and idiothetic sources of information interact in navigation? In the present experiments, participants performed a triangle completion task on foot in a virtual environment. First, we test whether landmarks and path integration are optimally combined in the Bayesian sense (Cheng et al., 2007; Shettleworth & Sutton, 2005). We find that this information is integrated, but that landmarks strongly bias the homing direction, consistent with the idea that they reset the path integration system. Second, we test whether path integration serves as a backup navigation system, continuously running in the background (May & Klatzky, 2000). To the contrary, we find that path integration is “dialed down” in the presence of stable landmarks, but quickly “dialed up” when landmarks fail. Third, we ask how the navigator determines whether landmarks are stable. We find that path integration serves as a very low-resolution reference system, but its consistency with local or global landmarks helps to detect landmark instability. Thus, visual landmarks and path integration are not optimally combined, but interact in context-sensitive ways to support accurate navigation.
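The Bayesian-optimal benchmark tested in the first experiment is standard inverse-variance cue weighting; the minimal Python sketch below (with hypothetical means and variances) shows how two independent Gaussian cues would be fused under that model:

    def combine_cues(mu_lm, var_lm, mu_pi, var_pi):
        """Bayesian-optimal (maximum-likelihood) fusion of two independent
        Gaussian cues: each cue is weighted by its inverse variance."""
        w_lm = (1.0 / var_lm) / (1.0 / var_lm + 1.0 / var_pi)
        mu = w_lm * mu_lm + (1.0 - w_lm) * mu_pi
        var = 1.0 / (1.0 / var_lm + 1.0 / var_pi)  # fused variance is lower
        return mu, var

    # Hypothetical homing directions (degrees) and cue variances:
    mu, var = combine_cues(mu_lm=10.0, var_lm=4.0,    # landmark cue
                           mu_pi=30.0, var_pi=16.0)   # path-integration cue
    print(mu, var)  # 14.0 3.2 -- pulled toward the more reliable landmark cue

Under this benchmark the fused estimate always has lower variance than either cue alone and is only partially biased toward the more reliable cue; the landmark dominance reported in the abstract, consistent with resetting of path integration, departs from that prediction.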
10:50am-11:10am. Break and discussion
11:10am. Property-based neural representation of scene and space
Aude Oliva
Dept. of Brain and Cognitive Sciences, Computer Science and Artificial Intelligence Laboratory
MIT
Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of a scene. In this talk, I will describe recent findings in cognitive neuroscience showing that visual scene information is represented in a distributed manner across various brain areas.
12pm. Object motion detection by normal and visually impaired moving observers
Finnegan J. Calabro and Lucia M. Vaina
Dept. of Biomedical Engineering, Brain and Vision Research Laboratory
Boston University
The ability of moving observers to detect object motion is critical to both collision avoidance and interceptive action planning. We first present data from psychophysics and fMRI describing the mechanisms and brain networks mediating object motion detection during simulated self-motion. Psychophysically, we show that retinal motion is parsed into self-motion and object-motion components; neurally, this computation is implemented by several interacting networks of cortical regions involved in visual motion processing and action planning. Second, we present results from visually impaired stroke patients who were unable to perform the object motion task. We show that spatially co-localized, congruently moving auditory cues bound to the moving object enhanced its detection, both for normal observers and, importantly, for the patients.
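As a rough illustration of the parsing of retinal motion described above (a minimal Python sketch under simplifying assumptions, not the authors' model; the self-motion flow field is taken as given rather than estimated, and all numbers are hypothetical), one can subtract the flow predicted from self-motion and attribute above-threshold residual motion to objects:

    import numpy as np

    def parse_flow(retinal_flow, self_motion_flow, speed_thresh=0.5):
        """Subtract the flow predicted from self-motion; residual motion
        above threshold is attributed to an independently moving object."""
        residual = retinal_flow - self_motion_flow
        return residual, np.linalg.norm(residual, axis=1) > speed_thresh

    # Hypothetical flow vectors at three image locations:
    retinal = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 2.0]])
    predicted = np.array([[1.0, 0.0], [0.0, 2.0], [0.0, 1.0]])  # self-motion
    residual, is_object = parse_flow(retinal, predicted)
    print(is_object)  # [False False  True] -- third location carries object motion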
12:50pm. Wrap-up