Visually guided navigation and scene perception
Dec 2, 2011
Organizers: Finnegan J. Calabro and Lucia M. Vaina
The meeting will be hosted by the
Brain and Vision Research Laboratory,
Departments of Biomedical Engineering, Neurology and the Graduate Program for Neuroscience
and will be held
Dec 2, 2011, 9am-1pm
64 Cummington St, Room 150
Boston University, Boston, MA
9:10am. Detection of Moving Objects by Moving Observers: Psychophysics and Modeling
Dept. of Mathematics and Computer Science
College of the Holy Cross
As a person moves through the environment, he or she must be able to detect moving objects in order to intercept or avoid them. For a moving observer, the retinal images of stationary objects themselves move in a structured pattern, the optic flow field. Thus, image motion alone cannot distinguish a self-moving object from stationary objects. In theory, one can identify moving objects if their angle or speed of motion differs from the pattern generated by the images of stationary objects. I will present results from psychophysical experiments examining whether people can use this information to identify moving objects. I will also present data from a computational model that uses local differences in speed and angle of motion to localize the borders of moving objects. Finally, I will present results showing how monocular depth cues interact with motion cues to aid the detection of moving objects.
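The detection principle above can be sketched in a few lines: for a purely translating observer, stationary points' image motion radiates from the focus of expansion (FOE), so a point whose image velocity deviates from that radial direction is a candidate moving object. This is a minimal illustration assuming known FOE and a hypothetical angle threshold, not the model presented in the talk.

```python
import numpy as np

def flag_moving_points(positions, velocities, foe, angle_thresh_deg=15.0):
    """Flag points whose image velocity deviates from the radial flow
    direction expected for stationary points under pure observer
    translation. (Illustrative sketch; threshold is hypothetical.)"""
    radial = positions - foe  # expected flow direction for stationary points
    cos = np.sum(radial * velocities, axis=1) / (
        np.linalg.norm(radial, axis=1) * np.linalg.norm(velocities, axis=1))
    angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return angles > angle_thresh_deg

# Example: three stationary points plus one object with independent motion
pos = np.array([[1.0, 0.0], [0.0, 2.0], [-1.0, -1.0], [2.0, 1.0]])
foe = np.array([0.0, 0.0])
vel = pos - foe                     # stationary points: radial outflow
vel[3] = np.array([-1.0, 0.5])     # target moving against the flow
print(flag_moving_points(pos, vel, foe))  # → [False False False  True]
```

A speed-based test works the same way: compare each point's image speed against the speed the flow field predicts at its position.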
10am. Interaction of Visual Landmarks and Path Integration in Human Navigation
William H. Warren and Mintao Zhao
Dept. of Cognitive, Linguistic & Psychological Sciences, Brown University
Humans and other animals can navigate back to a known location using both visual landmarks and path integration. How do these visual and idiothetic cues interact during navigation? In the present experiments, participants performed a triangle completion task on foot in a virtual environment. First, we test whether landmarks and path integration are optimally combined in the Bayesian sense (Cheng et al., 2007; Shettleworth & Sutton, 2005). We find that the two sources are integrated, but that landmarks strongly bias the homing direction, consistent with the idea that they reset the path integration system. Second, we test whether path integration serves as a backup navigation system, continuously running in the background (May & Klatzky, 2000). To the contrary, we find that path integration is “dialed down” in the presence of stable landmarks, but quickly “dialed up” when landmarks fail. Third, we ask how the navigator determines whether landmarks are stable. We find that path integration serves as a very low-resolution reference system, but its consistency with local or global landmarks helps to detect landmark instability. Thus, visual landmarks and path integration are not optimally combined, but interact in context-sensitive ways to support accurate navigation.
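The "optimal combination" benchmark against which the data are compared is standard maximum-likelihood cue combination: each cue is weighted by its reliability (inverse variance), and the combined estimate has lower variance than either cue alone. A minimal sketch with illustrative numbers (not values from the talk):

```python
def optimal_combination(mu_l, var_l, mu_p, var_p):
    """Bayesian (maximum-likelihood) combination of two cues:
    reliability-weighted mean, with predicted combined variance
    lower than either cue's alone."""
    w_l = (1 / var_l) / (1 / var_l + 1 / var_p)       # landmark weight
    mu = w_l * mu_l + (1 - w_l) * mu_p                # combined estimate
    var = 1.0 / (1 / var_l + 1 / var_p)               # combined variance
    return mu, var

# Hypothetical homing-direction estimates (degrees):
# landmarks: 10 deg with variance 4; path integration: 30 deg with variance 16
mu, var = optimal_combination(10.0, 4.0, 30.0, 16.0)
print(round(mu, 3), round(var, 3))  # → 14.0 3.2
```

Deviations from these predicted weights and variances — such as the landmark-dominated "reset" behavior reported here — are what distinguish the observed interaction from optimal integration.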
10:50-11:10 Break and discussion
11:10am. Property-based neural representation of scene and space
Dept. of Brain and Cognitive Sciences, Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
Behavioral and computational studies suggest that visual scene analysis rapidly produces a rich description of both the objects and the spatial layout of a scene. In this talk, I will describe recent findings in cognitive neuroscience showing that visual scene information is represented in a distributed manner across various brain areas.
12pm. Object motion detection by normal and visually impaired moving observers
Finnegan J. Calabro and Lucia M. Vaina
Dept. of Biomedical Engineering, Brain and Vision Research Laboratory
The ability of moving observers to detect object motion is critical to both collision avoidance and interceptive action planning. We first present psychophysical and fMRI data describing the mechanisms and brain networks mediating object motion detection during simulated self-motion. Psychophysically, we show that retinal motion is separated into self-motion and object-motion components; neurally, this separation is implemented by several interacting networks of cortical regions involved in visual motion processing and action planning. Second, we present results from visually impaired stroke patients who were unable to perform the object motion task. We show that spatially co-localized, congruently moving auditory cues bound to the moving object enhanced its detection, both for normal observers and, importantly, for the patients.
12:50pm. Wrap up
Bravi seminar: Ill-posed problems in brain imaging: from MEG source estimation to resting state networks and decoding with fMRI.
Monday – August 8, 2011 – 4:00pm
Alexandre Gramfort, Ph.D.
Martinos Center for Biomedical Imaging,
Harvard – Massachusetts General Hospital,
44 Cummington St. Room 705
An estimation problem is said to be ill-posed when the number of parameters to estimate exceeds the number of measurements. Due to limited acquisition times, the physics of the measurement process, and the complexity of the brain, the field of brain imaging must address many ill-posed problems. Among them are the localization in space and time of active brain regions with MEG and EEG, the estimation of functional networks from fMRI resting-state data, and what is commonly called “decoding”. Decoding consists of predicting a behavioral variable from fMRI data, or classifying brain states, using supervised learning methods such as SVMs. In this talk I will describe some recent contributions to all three problems. The concepts shared by the methods presented are estimation and statistical learning in high dimensions, convex optimization, and sparse and structured priors.
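The role of a prior in making such problems solvable can be shown in a toy setting: with more unknowns than measurements, plain least squares has no unique solution, but adding a regularizer (here a simple L2 ridge prior; the talk's methods use far richer sparse and structured priors) yields a well-posed problem. A sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-posed setting: 20 measurements, 100 unknown parameters.
n, p = 20, 100
A = rng.standard_normal((n, p))          # forward/design matrix
x_true = np.zeros(p)
x_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = A @ x_true                           # noiseless measurements

# Ridge (L2-prior) estimate: x = (A^T A + lam I)^{-1} A^T y.
# The regularizer lam makes the normal equations invertible.
lam = 1e-2
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ y)

print(x_ridge.shape)                                  # (100,)
print(round(float(np.linalg.norm(A @ x_ridge - y)), 4))  # small residual
```

With an L1 (sparse) prior in place of the L2 term, the same setup becomes the Lasso, closer in spirit to the sparse source-estimation methods discussed in the talk.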
Monday – August 8, 2011 – 4:00pm
Dr. Jonas Richiardi
Brain decoding with functional connectivity patterns: a graph embedding approach
Medical Image Processing Laboratory
École Polytechnique Fédérale de Lausanne
44 Cummington St. Room 401
Whole-brain connectivity information is becoming increasingly popular with neuroscientists and neuroimagers alike, and for good reasons: it provides complementary information to statistical activation maps, and enables fundamental insights into the network organization of the brain in terms of information flow, resilience, efficiency, or modularity. Furthermore, it is now gaining importance for clinical applications.
This talk will focus on an emerging technique for analyzing brain networks: connectivity-based decoding. It is a useful tool for neuroimagers, providing information complementary to both activation-based decoding and qualitative analysis in terms of graph-theoretic properties and graph topology, and it is applicable both to brain state decoding and to clinical applications such as diagnosis. After a whole-brain regional connectivity graph has been established, the problem can be cast as a weighted graph classification task. We will show that the graphs of interest form a restricted class whose properties prevent classical graph matching techniques from yielding a useful distance or dissimilarity between graphs, and we advocate the use of modern graph embedding methods. We will present several vector space representations of graphs suitable for this class, and discuss experimental results on cognitive tasks, aging populations, and clinical populations.
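One simple vector space representation of such graphs is direct vectorization: since the regions (nodes) are in fixed correspondence across subjects, the upper triangle of each symmetric connectivity matrix can serve as a feature vector, and graphs become points comparable with ordinary distances or fed to a standard classifier. A toy sketch of this embedding (one simple option, not necessarily the representations presented in the talk):

```python
import numpy as np

def embed_connectivity(conn):
    """Embed a weighted, undirected brain graph (symmetric connectivity
    matrix with fixed node correspondence) by taking its upper triangle."""
    iu = np.triu_indices_from(conn, k=1)  # strictly above the diagonal
    return conn[iu]

# Two toy 4-region connectivity matrices
rng = np.random.default_rng(1)

def toy_conn():
    m = rng.random((4, 4))
    m = (m + m.T) / 2           # symmetrize
    np.fill_diagonal(m, 1.0)    # self-connectivity
    return m

a = embed_connectivity(toy_conn())
b = embed_connectivity(toy_conn())
print(a.shape)                  # (6,): 4*(4-1)/2 edge weights
print(np.linalg.norm(a - b))    # Euclidean dissimilarity between graphs
```

Fixed node correspondence is what makes this work; without it, comparing graphs would require the matching techniques the talk argues are ill-suited here.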
Introduction: Brain Networks
Lucia M. Vaina, M.D., Ph.D. – Boston University, Department of Biomedical Engineering, and Massachusetts General Hospital, Harvard Medical School, Neurology & Radiology Departments
Resting state functional connectivity: methods, debates and clinical applications
Susan Whitfield-Gabrieli, Ph.D., Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
Brain covariance selection: better individual functional connectivity models using population prior
Alexandre Gramfort, Ph.D., Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
Probabilistic inference on white-matter pathways using anatomical priors
Anastasia Yendiki, Ph.D., Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School
Lucia and Finn are just back from presenting a pair of posters at the Vision Sciences Society meeting in Naples, FL this week. Check below for abstracts!
Deficit of temporal dynamics of detection of a moving object during egomotion in a stroke patient: a psychophysical and MEG study
Lucia-Maria Vaina, Kunjan Rana, Ferdinando Buonanno, Finnegan Calabro, Matti Hämäläinen
To investigate the temporal dynamics underlying object motion detection during egomotion, we used psychophysics and MEG with a motion discrimination task. The display contained nine spheres moving for 1 second: eight moved consistently with forward observer translation, and one (the target) moved independently within the scene (approaching or receding). The observers' task was to detect the target. Seven healthy subjects (7HS) and patient PF, who has an infarct involving the left occipital-temporal cortex, participated in both the psychophysical and MEG studies. Psychophysical results showed that PF was severely impaired on this task. He was also impaired on the discrimination of radial motion (with even poorer performance on contraction) and 2D direction, as well as on detecting motion discontinuity. We used anatomically constrained MEG and dynamic Granger causality to investigate the direction and dynamics of connectivity between the functional areas involved in the object-motion task, and compared the results of the 7HS and PF. The dynamics of the causal connections among the motion-responsive cortical areas (MT, STS, IPS) during the first 200 ms of the stimulus were similar in all subjects. However, in the later part of the stimulus (>200 ms), PF did not show significant causal connections among these areas. The 7HS also had a strong, probably attention-modulated connection between MPFC and MT, which was completely absent in PF. In both PF and the 7HS, analysis of onset latencies revealed two stages of activation: early after motion onset (200-400 ms), bilateral activations in MT, IPS, and STS, followed (>500 ms) by activity in the postcentral sulcus and middle prefrontal cortex (MPFC). We suggest that the interaction of these early and late onset areas is critical to object motion detection during self-motion, and that disrupted connections among late onset areas may have contributed to the perceptual deficits of patient PF.
Detection of object motion during self-motion: psychophysics and neuronal substrate
Finnegan Calabro, Lucia-Maria Vaina
The extraction of object motion from a visual scene is critical for planning direct interactions with one's surroundings, and is of particular interest and difficulty when the observer is moving. To investigate the visual processes underlying object motion detection during self-motion, we presented observers (n=23) with a stimulus containing nine objects, eight of which moved consistently with forward observer translation, and one of which (the target) had independent motion within the scene. Results showed that observers' ability to detect the target depended significantly on the speed of the object within the scene (Exp 1), but that performance was independent of observer speed, and therefore of retinal velocity (Exp 2, n=7). Results were compared to the performance predicted for target selection based on relative differences in speed and direction among the objects, and were consistent with neither strategy. Instead, these data suggest that observers used a flow parsing mechanism in which self-motion is estimated and subtracted from the flow field. In an event-related fMRI paradigm using the task from Exp 1, we found a distributed pattern of activations in occipito-temporal, posterior parietal, and parieto-frontal areas. Granger causality analysis among these activated regions revealed two major, highly connected networks. One network involved a set of interconnected early, bilateral, visually responsive areas (including KO, hMT+ and VIPS); we posit that these regions underlie the perception and formation of a visual representation of the stimulus. The second network comprised primarily higher-level, left-hemisphere areas (including DIPSM, FEF, the subcentral sulcus and the postcentral gyrus) that have been reported to be involved in the use of sensory inputs for preparing motor commands. We suggest that these networks provide a link between the perceptual representation of the visual stimulus and its interpretation for action.