Intelligent user interfaces for SSVEP BCI

This project aims to use steady-state visually evoked potentials (SSVEPs) as input to a brain-computer interface (BCI) in order to control an augmentative and alternative communication (AAC) device. The AAC device will take advantage of intelligent user interface principles for adaptive display of speech synthesizer outputs, which can greatly speed up a user's selection time with the BCI.

[Figure: IUI_model_v2]

Context-based communication app model. A basic concept of the GUI is shown on the left, with natural language processing units in the bottom right. Full-word (FW) and next-word (NW) predictions then feed, together with all available sensor inputs, into a fusion stage that modifies the predictions before sending output to the screen.
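
Below is a minimal, hypothetical sketch of the fusion idea described above: language-model word predictions (the FW/NW outputs) are blended with per-target SSVEP classifier scores to re-rank the items shown on screen. The function names, the softmax normalization, and the weighting parameter are illustrative assumptions, not the project's actual implementation.

```python
# Hypothetical sketch: blend language-model word priors (FW/NW predictions)
# with per-target SSVEP classifier scores to re-rank on-screen options.
import numpy as np

def fuse_predictions(word_priors, ssvep_scores, weight=0.5):
    """Blend normalized language-model priors with normalized SSVEP evidence.

    word_priors : dict mapping candidate word -> prior probability
    ssvep_scores: dict mapping candidate word -> classifier score (e.g. CCA)
    weight      : relative weight given to the language-model prior
    """
    words = list(word_priors)
    prior = np.array([word_priors[w] for w in words], dtype=float)
    score = np.array([ssvep_scores[w] for w in words], dtype=float)
    prior /= prior.sum()
    score = np.exp(score) / np.exp(score).sum()   # softmax over SSVEP evidence
    fused = weight * prior + (1.0 - weight) * score
    return sorted(zip(words, fused), key=lambda p: p[1], reverse=True)

# Example: next-word predictions competing with SSVEP evidence
ranking = fuse_predictions(
    {"hello": 0.5, "help": 0.3, "water": 0.2},
    {"hello": 0.1, "help": 0.9, "water": 0.2},
)
print(ranking)  # the highest-ranked item would be displayed most prominently
```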

Main personnel: Sean Lorenz
Collaborators: Jon Brumberg, Frank Guenther
Funding: CELEST: Augmentative and alternative communication devices using electroencephalography-based brain-computer interfaces

BMI-controlled robots with semi-autonomous capabilities

This project seeks to join non-invasive brain-machine interfacing for navigation with goal-directed autonomous behavior on a robotic platform primarily designed for control by able-bodied users. The robotic platform has two modalities of control: navigation and grasping. A hybrid BMI system is proposed for combined control: navigation by BMI alone, and grasping via biologically inspired models to achieve goal-directed, autonomous behavior.
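
As an illustration of the hybrid-control idea, here is a hedged Python sketch in which discrete navigation commands come from the BMI decoder while grasping is handed off to an autonomous routine. The `robot` object and its methods (`drive`, `turn`, `grasp_target_in_view`, `autonomous_grasp`) are placeholders, not an actual robot API.

```python
# Illustrative sketch (not the project's actual software) of a hybrid
# control loop: discrete navigation commands come from the BMI decoder,
# while grasping is delegated to an autonomous, goal-directed routine.
from enum import Enum

class Mode(Enum):
    NAVIGATE = 1
    GRASP = 2

def control_step(mode, bmi_command, robot):
    """One iteration of the hybrid controller (robot API is hypothetical)."""
    if mode is Mode.NAVIGATE:
        # BMI output maps directly onto platform motion
        if bmi_command == "forward":
            robot.drive(speed=0.2)
        elif bmi_command in ("left", "right"):
            robot.turn(direction=bmi_command)
        elif bmi_command == "select":
            return Mode.GRASP          # hand off to autonomous behavior
    else:
        # Autonomous, biologically inspired grasping: the user only triggers
        # it; the robot localizes and grasps the object on its own.
        if robot.grasp_target_in_view():
            robot.autonomous_grasp()
            return Mode.NAVIGATE
    return mode
```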

Main personnel: Byron Galbraith
Collaborators: Max Versace, Jon Brumberg, Frank Guenther
Funding: CELEST: A brain-machine interface for assistive robotic control

EEG brain-machine interface to control a speech synthesizer

In this project, non-invasive neural activity related to motor imagery is mapped into low-dimensional formant frequencies for instantaneous auditory feedback of synthesized vowel sounds. A goal of this project is to demonstrate the feasibility of meaningful auditory feedback (i.e., vowel sounds) as an appropriate feedback mechanism for a speech BCI. This technology will ultimately benefit patients with severe communication impairment, especially those for whom invasive BCIs are not viable options.
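
To make the mapping concrete, the sketch below shows one plausible way a decoded 2D control signal could be turned into vowel-like audio: normalized coordinates are mapped linearly onto (F1, F2), and two second-order resonators act as a crude formant synthesizer. The frequency ranges, bandwidths, and synthesis details are simplifying assumptions rather than the project's actual synthesizer.

```python
# Rough sketch of the decode-to-audio mapping under simplified assumptions:
# a decoded 2D control signal in [0, 1]^2 is mapped linearly onto the first
# two formant frequencies (F1, F2), and a crude two-resonator synthesizer
# turns them into a vowel-like sound for instantaneous auditory feedback.
import numpy as np
from scipy.signal import lfilter

FS = 16000  # audio sample rate (Hz)

def decoded_to_formants(x, y, f1_range=(300, 900), f2_range=(800, 2500)):
    """Map normalized decoder output (x, y) to formant frequencies in Hz."""
    f1 = f1_range[0] + x * (f1_range[1] - f1_range[0])
    f2 = f2_range[0] + y * (f2_range[1] - f2_range[0])
    return f1, f2

def resonator(signal, freq, bandwidth=80.0, fs=FS):
    """Second-order IIR resonator approximating a single formant."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2 * r * np.cos(theta), r ** 2]
    return lfilter([1.0 - r], a, signal)

def synthesize_vowel(f1, f2, dur=0.2, pitch=120.0, fs=FS):
    """Excite two formant resonators with an impulse train (glottal source)."""
    n = int(dur * fs)
    source = np.zeros(n)
    source[::int(fs / pitch)] = 1.0
    return resonator(resonator(source, f1), f2)

audio = synthesize_vowel(*decoded_to_formants(0.3, 0.7))
```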

[Figure: SpeechBMI_DecodedTarget&Variance]

Figure taken from: Jonathan S. Brumberg, A. Salazar-Gomez, Frank H. Guenther (2012). Controlling a formant synthesizer using a non-invasive brain-machine interface. In 2012 Motor Speech Conference.

Main personnel: Jon Brumberg, Andres Salazar-Gomez
Collaborators: Frank Guenther, Steve Williams
Funding: NIH/NIDCD: Investigating output modality for a brain-computer interface for communication; CELEST: Developing a brain-machine interface for speech communication

Primate electrophysiology and decoding of eye-movement planning

In this project we investigate the properties of electrophysiological signals recorded from chronic extracellular microelectrodes implanted in primate frontal cortex. This project has two major goals: 1) to develop and test new electrode designs and compare them against existing systems, specifically for stability and longevity of extracellular signals, and 2) to decode planning and execution of saccades and other eye movements.
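
The decoding goal can be illustrated with a small, self-contained example: classifying the planned saccade target from pre-movement spike counts. The data below are simulated (unit counts, tuning, and target set are invented), and linear discriminant analysis stands in for whatever decoder the project ultimately uses.

```python
# Hedged illustration of the decoding side of the project: classify the
# planned saccade target from spike counts in a pre-movement window.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units, n_targets = 200, 32, 4

# Simulate direction-tuned units: each unit's mean rate depends on the target
labels = rng.integers(n_targets, size=n_trials)
tuning = rng.uniform(2, 10, size=(n_targets, n_units))
spike_counts = rng.poisson(tuning[labels])

decoder = LinearDiscriminantAnalysis()
acc = cross_val_score(decoder, spike_counts, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```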

[Figure: SaccadeBMI]

Eye-movement brain-machine interface (BMI) block diagram. Currently, the BMI controls the location of discrete targets on a computer screen. The figure shows how the Unlock Project BMI environment will be used to control navigation on the computer screen.

Main personnel: Misha Panko, Scott Brincat (MIT), Nan Jia
Collaborators: Earl Miller (MIT), Frank Guenther, Jon Brumberg
Funding: CELEST: Developing new microelectrodes and analysis techniques for use in Brain Machine Interface applications

Decoding movement intent

A key consideration in the development of any BCI is how to give the user the ability to start and stop operating the device. In the case of a speech-synthesizer-based BCI, being able to pause the device will be critical to the intelligibility of the output. Perhaps the most intuitive solution is to decode the user's intent directly, incorporating the classification of [intent to operate vs. rest] in the BCI decoder itself. Along these lines, we have been investigating computational techniques for classifying imagined hand and speech articulator movements from a resting state in real time.
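
A minimal sketch of such an intent-vs-rest gate is shown below, assuming band-power features (e.g., mu and beta power over sensorimotor channels) and a logistic-regression classifier. The channel count, window length, feature bands, and training data are placeholders, not the project's actual pipeline.

```python
# Minimal sketch of an [intent vs. rest] gate on EEG band-power features.
# Feature choice, window length, and labels here are placeholders.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 256  # EEG sample rate (Hz)

def bandpower_features(window, bands=((8, 13), (13, 30)), fs=FS):
    """Log band power per channel in the mu and beta bands."""
    freqs, psd = welch(window, fs=fs, nperseg=fs)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(np.log(psd[:, mask].mean(axis=1)))
    return np.concatenate(feats)

# Train on labeled windows (1 = imagined movement, 0 = rest) ...
train_windows = np.random.randn(100, 16, FS)      # 100 windows, 16 channels, 1 s
train_labels = np.random.randint(0, 2, size=100)  # placeholder labels
X = np.array([bandpower_features(w) for w in train_windows])
gate = LogisticRegression(max_iter=1000).fit(X, train_labels)

# ... then, online, each new window opens or closes the synthesizer output
new_window = np.random.randn(16, FS)
speak = gate.predict(bandpower_features(new_window)[None, :])[0] == 1
```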

[Figure: Emily NPL image]

Main personnel: Emily Stephen
Collaborators: Jon Brumberg, Frank Guenther
Funding: NIH/NIDCD: Decoding imagined vowel productions using electroencephalography; NIH/NIDCD: Neural modeling and imaging in speech

Uncovering speech networks using ECoG

Most of what is known about the neural processing of language and speech has come from studies involving fMRI, which has good spatial resolution but is limited in temporal resolution. Because the dynamics and functional connections during speech are difficult to study using fMRI, electrocorticography (ECoG), with its finer temporal resolution, has the potential both to confirm existing theories and to uncover new functional relationships between brain areas. As a start in this direction, we are analyzing functional connectivity in ECoG during out-loud reading.
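
As a concrete example of the kind of functional-connectivity measure involved, the sketch below computes band-limited magnitude-squared coherence between pairs of ECoG channels. The channel count, sampling rate, and frequency band are assumptions for illustration only.

```python
# Simple sketch of one functional-connectivity measure: magnitude-squared
# coherence between pairs of ECoG channels in a band of interest.
import numpy as np
from scipy.signal import coherence

FS = 1000  # assumed ECoG sample rate (Hz)
ecog = np.random.randn(16, 30 * FS)  # placeholder: 16 channels, 30 s of data

def coherence_matrix(data, band=(70, 110), fs=FS):
    """Mean coherence within a frequency band (e.g., high gamma) for all pairs."""
    n = data.shape[0]
    C = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            f, cxy = coherence(data[i], data[j], fs=fs, nperseg=fs)
            mask = (f >= band[0]) & (f <= band[1])
            C[i, j] = C[j, i] = cxy[mask].mean()
    return C

conn = coherence_matrix(ecog)  # 16 x 16 connectivity matrix
```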

[Figure: ECoG and speech networks]

Main personnel: Emily Stephen
Collaborators: Jon Brumberg, Kyle Lepage, Mark Kramer, Uri Eden, Frank Guenther
Funding: NIH/NIDCD: Neural modeling and imaging in speech

Modeling cortical networks for SSVEP simulations

Synchrony has been proposed as a substrate for processes as varied as working memory and perceptual binding, and massive neural synchrony is the basis of the electroencephalogram (EEG). In this project, we develop a method to determine the existence of certain synchronous states on any given network, including networks with inputs and those with multiple cell types. Computing stable synchrony of large numbers of cells has the potential to dramatically reduce simulation time by reducing the system's dimensionality. When applied to a simplified model of visual cortex, we find that both the shape and the frequency of the stimulus affect the spatial and temporal pattern of synchrony, which in turn shapes the EEG signal. Further modeling will be able to make predictions relating the spatiotemporal pattern of inputs to the EEG signal for use in brain-machine interfaces.
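
A toy illustration of the stimulus-synchrony-EEG relationship is given below, using Kuramoto-style phase oscillators driven at a stimulus frequency, with the simulated "EEG" taken as the population average. This is not the quotient-network method developed in the project; the parameters and driving term are invented for demonstration.

```python
# Toy model: Kuramoto phase oscillators driven at a stimulus frequency.
# The simulated "EEG" is the population-average activity, so the degree of
# synchrony directly shapes the EEG amplitude at the driving frequency.
import numpy as np

def simulate(n=200, coupling=1.5, drive_freq=12.0, drive_amp=0.8,
             dt=1e-3, t_max=5.0, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)           # oscillator phases
    omega = 2 * np.pi * rng.normal(10.0, 1.0, n)   # intrinsic frequencies (Hz)
    eeg = []
    for step in range(int(t_max / dt)):
        t = step * dt
        mean_field = np.mean(np.exp(1j * theta))   # Kuramoto order parameter
        drive = drive_amp * np.sin(2 * np.pi * drive_freq * t - theta)
        dtheta = omega + coupling * np.abs(mean_field) * np.sin(
            np.angle(mean_field) - theta) + drive
        theta += dt * dtheta
        eeg.append(np.cos(theta).mean())           # population average ~ EEG
    return np.array(eeg)

eeg = simulate()
```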

[Figure: quotient_network]

Main personnel: Rob Law
Collaborators: Jon Brumberg, Mike Cohen, Frank Guenther
Funding: CELEST: Investigating properties of neural synchrony with applications for BMI control

Neural Prosthetics for Speech Restoration

Through a collaboration with Dr. Philip Kennedy (Neural Signals), we have developed a BMI that decodes brain activity, recorded from an electrode implanted in the speech motor cortex of a human volunteer with locked-in syndrome, into continuously synthesized auditory output of intended speech productions. The BMI translates neurological activity directly related to speech production into continuous low-dimensional (2D) formant frequencies. Formants are directly related to the movements of the vocal tract and thus are comparable with prior methods for prediction of motor kinematics from neural activity. The implant volunteer participated in 25 decoding sessions consisting of 24-40 trials each (split into 4 blocks of 6-10 trials). He was able to operate the BMI with over 70% accuracy by the end of each session and achieved 89% accuracy in the final session. This study establishes the feasibility of direct speech synthesizer control by BMI for communication.
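
For readers unfamiliar with continuous formant decoding, the sketch below fits a simple linear (ridge-regression) map from neural firing-rate features to (F1, F2) trajectories on simulated data. It is an illustrative stand-in rather than the decoder used in the study, and all dimensions and signals are invented.

```python
# Hedged sketch of continuous formant decoding: a linear map from neural
# firing-rate features to (F1, F2) trajectories, fit on simulated data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
n_samples, n_units = 2000, 40

# Simulated firing rates and a hidden linear relationship to formants
rates = rng.poisson(5.0, size=(n_samples, n_units)).astype(float)
true_map = rng.normal(size=(n_units, 2))
formants = rates @ true_map + rng.normal(scale=2.0, size=(n_samples, 2))

decoder = Ridge(alpha=1.0).fit(rates[:1500], formants[:1500])
predicted = decoder.predict(rates[1500:])   # (F1, F2) for held-out samples
```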

[Figure: BMI for real-time synthetic speech production]

Schematic of the brain-machine-interface for real-time synthetic speech production used in this project.

Figure taken from: Guenther, F. H., Brumberg, J. S., Wright, E. J., Nieto-Castanon, A., Tourville, J. A., Panko, M., Law, R., Siebert, S. A., Bartels, J. L., Andreasen, D. S., Ehirim, P., Mao, H., and Kennedy, P. R. (2009). A wireless brain-machine interface for real-time speech synthesis. PLoS ONE, 4(12), e8218.