NRT Trainee Poster Session 2022

Spencer Byers

Doctoral Candidate, Boston University

PI: Ian Davison

Department: Biology

Poster Title
In-vivo Calcium Imaging of Accessory Olfactory Bulb Mitral Cells During Social Interactions

Poster Presentation

 

Abstract
The vomeronasal system plays a pivotal role in guiding social behavior through the sensing of behaviorally relevant chemosignals (e.g., pheromones). While responses to social cues have been characterized in the accessory olfactory bulb (AOB), much less is known about how such chemosignals are represented during unrestrained interactions that elicit natural social behaviors. Here, using miniscope Ca2+ imaging of AOB activity in freely moving mice, we assess how the AOB encodes sensory information during social investigations of male and female conspecifics from various background strains. First, in constrained interactions, experimental animals are allowed to investigate restricted body regions of various partners, allowing us to image sensory-evoked responses to facial or anogenital regions associated with distinct chemosignals with high spatiotemporal precision. Second, we allow mice to freely investigate and interact with a range of probe animals while recording evoked activity. In both cases, while a subset of glomeruli show strongly selective activation for male and female probe animals, most show mixed responsiveness. We also find selective activation to chemosignals located on different body regions, reflecting the distinct cues secreted at these locations. We analyzed the microstructure of social interactions and found that mice tend to structure their investigations as sweeps from nose to tail over one to two seconds, suggesting there may be preferred sequences of social investigations that could serve to bias incoming sensory information. However, the rapidity of investigation sequences contrasts with the prolonged timescale of AOB neural activation, suggesting that the AOB and vomeronasal system integrate information across extended timescales rather than directly mapping individual sensory events onto discrete behavioral responses.
Together, these findings extend our understanding of how the AOB represents behaviorally relevant chemosignals in support of social behavior.


Daniel Carbonero

Doctoral Candidate, Boston University

PI: John White

Department: Biomedical Engineering

Poster Title
Principal Component Analysis for Neuronal Network Analysis Under Isoflurane Sedation in Mice

Poster Presentation

 

Abstract
Anesthetics are essential to modern medicine, allowing painless performance of medical procedures that would otherwise be unbearable. Characterization of the mechanisms of anesthesia has focused principally on either receptor-level biophysics or gross cortical (e.g., field potential) activity levels. The impact on regional network activity at the cellular level remains poorly understood, for two reasons. First, it has until recently been very challenging to collect local-network data with cellular resolution. This has been largely solved by the development of in vivo imaging of calcium transients from identified cell types. Second, data sets from such experiments are immense and challenging to interpret. Here, we explore using principal component analysis (PCA) to make such data sets manageable and understandable. PCA and related techniques produce a low-dimensional representation of high-dimensional data while maintaining the innate variation necessary for feature extraction. Harnessing PCA to extract, highlight, and analyze latent features of anesthetized neuronal networks can give insight into how populations of neurons work as an ensemble to determine an animal’s behavioral state. To measure behaviorally salient activity in local cortical networks, we used 2-photon microscopy to image calcium activity of identified neocortical neurons from mice expressing the calcium indicator jGCaMP7f. Layer 2/3 somatosensory cortex was imaged under varying concentrations of anesthesia (0%, 0.7%, and 1.4% isoflurane by volume). Recorded activity was processed to extract neuronal spatial footprints and traces of calcium activity from individual neurons. Recordings from the three states of anesthesia were then stitched together to create one cohesive time series. A PCA model was fit using this extended stitched recording, and the data were reduced to a lower number of dimensions.
The PCA model was then used to detect hidden structures of activity in the network under the different levels of anesthesia. In the lower-dimensional space, activity progressively collapses and becomes more consistent, and distinctly different network behaviors arise as a function of progressively deeper anesthesia. PCA of the data therefore suggests a complex structure of activity, with several highly active subnetworks competing to drive the majority of activity during awake conditions. Under increasing sedation, however, the data suggest a shift to a network driven by a set of formerly dormant, more cohesive subnetworks.
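As a rough illustration of the pipeline described above (stitch recordings into one time series, fit a single PCA model, project into a low-dimensional space), here is a minimal sketch on synthetic data. The array sizes, latent dimensionality, and noise level are assumptions for illustration only, not the authors' actual data or code.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Synthetic stand-in for the stitched recording: 300 time points x 50 neurons,
# generated from a 3-dimensional latent process plus small pixel noise.
latents = rng.normal(size=(300, 3))
mixing = rng.normal(size=(3, 50))
traces = latents @ mixing + 0.1 * rng.normal(size=(300, 50))

# Fit one PCA model on the full stitched time series, then project every
# time point into the low-dimensional space for further analysis.
pca = PCA(n_components=3)
scores = pca.fit_transform(traces)

print(scores.shape)  # (300, 3)
```

Because the synthetic data truly lie near a 3-dimensional subspace, the three retained components capture almost all of the variance; on real calcium traces one would instead inspect `pca.explained_variance_ratio_` to choose the number of components.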


Guo Chen

Doctoral Candidate, Boston University

PI: Ji-Xin Cheng, Chen Yang

Department: Electrical and Computer Engineering

Poster Title
Toolbox for Fiber-Optoacoustic Neuromodulation

 

Poster Presentation

 

Abstract
Neuromodulation is a rapidly growing field. The technique has been used for treating neurological and psychiatric diseases such as Parkinson’s disease, depression, and epilepsy. Several approaches to neural stimulation are in common use, including electrode stimulation, optogenetic stimulation, and ultrasound stimulation. Among these, ultrasound stimulation has the unique advantages of high precision without requiring genetic modification, making it one of the ideal approaches for neuromodulation studies. Several ultrasound generators have been published for neuromodulation, including the fiber-based optoacoustic converter (FOC) and the tapered fiber-based optoacoustic emitter (TFOE). These tools have proven useful and highly effective for neuron stimulation. A well-organized neuromodulation unit that is portable and compatible with different kinds of ultrasound emitters would therefore be a valuable tool for neuroscience studies and clinical applications.


Kaitlyn Dorst

Doctoral Candidate, Boston University

PI: Steve Ramirez

Department: Graduate Program for Neuroscience

Poster Title
Visualization and Modulation of Hippocampus-driven Defensive Networks

Poster Presentation

 

Abstract
This research aims to provide a framework for the network interactions that govern differential defensive responses to aversive mnemonic stimuli. Specifically, both the brainwide and behavioral effects of artificially reactivating a particular fearful memory in a given region are largely unknown. Defensive behaviors, which are aberrantly expressed in PTSD and anxiety, include both active avoidance and passive freezing but manifest depending on the brain-state and the environment of the animal. Here, we alter environmental contingencies to test for the capacity of a defined set of hippocampal cells to differentially drive defensive behaviors when optogenetically activated. Our preliminary results show that artificial reactivation of the same set of cells processing fear manifests as anxiogenic responses, or freezing behavior, depending on whether these cells are stimulated in a large open field, or in a small chamber. These results suggest that a subset of cells, upon activation, recruit different neural substrates to generate diverging behavioral outputs. Our current work utilizes immunohistochemical and graph theory analyses to identify candidate regions mediating state-dependent defensive behavioral switches. Together, our work provides insight into the capacity of discrete sets of cells to produce various behavioral responses.


Yuanyuan Gao

Post Doctoral Researcher, Boston University

PI: David Boas

Department: Biomedical Engineering

Poster Title
Image Reconstruction of fNIRS Data with Short Separation

Poster Presentation

 

Abstract
General linear model (GLM) analysis and image reconstruction of fNIRS data are usually performed sequentially. Since the underlying physiological process is temporally and spatially dependent, it is preferable to model the two simultaneously. Here, we propose an image reconstruction algorithm that performs short-separation GLM and image reconstruction simultaneously. We use spatial and temporal basis functions to represent the change in hemoglobin across both space and time. We simulated a perturbation in the motor region of a head model and reconstructed it using our algorithm; the location of the perturbation was successfully recovered. We are collecting experimental data to further validate the algorithm.
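The core linear-algebra step behind a basis-function reconstruction of this kind can be sketched in a few lines: express the image in a spatial basis and solve a regularized least-squares problem against the channel data. Everything below — matrix sizes, the random sensitivity matrix, and the Tikhonov regularizer — is a toy placeholder, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_vox, n_basis = 20, 100, 10

A = rng.normal(size=(n_chan, n_vox))    # forward (sensitivity) matrix: voxels -> channels
B = rng.normal(size=(n_vox, n_basis))   # columns are spatial basis functions

# Ground-truth image expressed in the spatial basis (one active basis function)
c_true = np.zeros(n_basis)
c_true[3] = 1.0
y = A @ (B @ c_true) + 0.01 * rng.normal(size=n_chan)  # simulated channel data

# Solve for basis coefficients with Tikhonov (ridge) regularization
G = A @ B
lam = 1e-3
c_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_basis), G.T @ y)
x_hat = B @ c_hat  # reconstructed image over voxels
```

In a full implementation the temporal basis would enter the same least-squares system, so the GLM regressors and the spatial reconstruction are estimated jointly rather than sequentially.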


Joseph Greene

Doctoral Candidate, Boston University

PI: Lei Tian

Department: Electrical and Computer Engineering

Poster Title
Miniature Binary Diffractive Optics for Extended Miniscope Neuroimaging

Poster Presentation

 

Abstract
Miniaturized fluorescent imaging platforms, or miniscopes, offer the indispensable ability to monitor real-time neural activity in freely behaving animals. However, miniaturized optics typically exhibit sub-par optical properties, introducing optical aberrations, a limited field of view, and a shallow depth of field, all of which limit the effective imaging volume. Here, we present a generalizable, physics-informed genetic search algorithm that optimizes a binary diffractive optical element (DOE) placed at the back focal plane of the miniscope objective lens to extend the imaging depth from 30 μm to 80 μm in brain tissue. Next, we manufactured the binary DOE through single-step photolithography and integrated it into a modified miniscope architecture. Leveraging this device, we were able to capture extended neural circuits and vasculature in fixed mouse brain samples. To extract neural information corrupted by high noise and background, we post-process the collected data with a designed wavelet transform for feature extraction.
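The skeleton of a genetic search over a binary element looks like the sketch below: a population of bit strings, selection on a merit function, and point mutations. The merit function here is a trivial placeholder — in the actual work it would score a simulated miniscope point-spread function over the target depth range — and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def merit(dna):
    # Placeholder merit function standing in for the physics simulation:
    # reward matching an arbitrary target binary pattern.
    target = (np.arange(dna.size) % 2).astype(float)
    return -np.abs(dna - target).sum()

def evolve(n_genes=32, pop_size=40, n_gen=80, p_mut=0.02):
    pop = rng.integers(0, 2, size=(pop_size, n_genes)).astype(float)
    for _ in range(n_gen):
        scores = np.array([merit(ind) for ind in pop])
        survivors = pop[np.argsort(scores)[-pop_size // 2:]]  # keep best half
        kids = survivors[rng.integers(0, len(survivors),
                                      pop_size - len(survivors))].copy()
        flips = rng.random(kids.shape) < p_mut                # point mutations
        kids[flips] = 1.0 - kids[flips]
        pop = np.vstack([survivors, kids])
    return max(pop, key=merit)

best = evolve()
```

The elitist selection guarantees the best candidate is never lost between generations, which is why such searches converge even with mutation as the only variation operator.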


Yueming Li

Doctoral Candidate, Boston University

PI: Ji-Xin Cheng

Department: Mechanical Engineering

Poster Title
Noninvasive Sub-millimeter-precision Brain Stimulation by Optical-driven Focused Ultrasound

Poster Presentation

 

Abstract
Transcranial focused ultrasound (tFUS) prefers low ultrasonic frequencies for their high transcranial efficiency, but its spatial resolution is limited to millimeters. Here, we report non-invasive, high-precision neuromodulation using optically driven focused ultrasound (OFUS). OFUS is emitted by a soft optoacoustic pad (SOAP), fabricated by embedding candle-soot nanoparticles in a spherically curved polydimethylsiloxane surface. SOAP generates an OFUS field with a spatial resolution of ~80 µm, two orders of magnitude smaller than that of tFUS. Using OFUS generated via SOAP, we achieved direct and transcranial in vitro stimulation of cortical neurons with single laser pulse excitation and validated successful non-invasive in vivo stimulation by immunofluorescence staining and electromyography recording.


Bingxue Liu

Doctoral Candidate, Boston University

PI: David Boas

Department: Electrical and Computer Engineering

Poster Title
Normalized Field Autocorrelation Function-based Functional Ultrasound Imaging

Poster Presentation

 

Abstract
Functional ultrasound (fUS) imaging is a rapidly advancing and promising technology for imaging cerebral hemodynamics with high spatial and temporal resolution. Conventional power Doppler imaging (PDI)-based fUS provides cerebral blood volume (CBV)-dominated signals but is also affected by flow speed and hematocrit. Analysis of the temporal correlations of ultrasound speckle fluctuations has recently been shown to provide quantitative measures of cerebral blood flow velocity (CBFv). In this study, we investigated computationally efficient approaches for analyzing ultrasound speckle temporal correlations to discriminate CBFv and CBV fluctuations, and investigated their differential sensitivity to arterioles and venules during hemodynamic responses to brain activation. We found that a novel combination of temporal correlation delay times provided even greater statistical significance for estimating brain activity. Correlation-based analyses of fUS speckle fluctuations during brain activation provide the unique ability to quantify CBV and CBFv changes differentially in arterioles and venules, and provide more statistical power than traditional power Doppler-based analyses of brain activation. This methodology will have impact in future fUS studies of brain activation.
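A normalized temporal autocorrelation of the speckle signal — the generic quantity behind analyses like the one described — can be estimated as below. This is a textbook lag-correlation estimator on synthetic signals, not the authors' specific estimator or delay-time combination.

```python
import numpy as np

def g1(signal, max_lag):
    """Normalized temporal autocorrelation of a speckle signal at lags
    0..max_lag (lag 0 is 1 by construction). Fast flow decorrelates the
    speckle quickly, so g1 falls off faster at short lags."""
    s = np.asarray(signal, dtype=complex)
    s = s - s.mean()
    denom = np.vdot(s, s).real
    return np.array([np.vdot(s[:len(s) - tau], s[tau:]).real / denom
                     for tau in range(max_lag + 1)])

n = 10_000
t = np.linspace(0, 4 * np.pi, n)
slow = np.sin(t)                                # slowly decorrelating signal
fast = np.random.default_rng(0).normal(size=n)  # white-noise-like signal

curves_slow = g1(slow, 5)  # stays near 1 at short lags
curves_fast = g1(fast, 5)  # drops toward 0 immediately
```

Combining g1 values at several delay times, as the abstract describes, amounts to forming statistics from different points on this decay curve, which sample flow-speed and volume contributions differently.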


Chang Liu

Doctoral Candidate, Boston University

PI: Lei Tian

Department: Electrical and Computer Engineering

Poster Title
DeepVID: A Self-supervised Deep Learning Framework for Two-photon Voltage Imaging Denoising

Poster Presentation

 

Abstract
High-speed, population-level voltage imaging suffers from the shot-noise limit. We developed a self-supervised deep learning framework for voltage imaging denoising (DeepVID) that requires no ground-truth high-SNR data.

1. Introduction: Voltage imaging is an evolving tool for continuously imaging the activity of large numbers of neurons. Recently, a high-speed, low-light two-photon voltage imaging framework was developed that enables kilohertz scanning of population-level neurons in awake, behaving animals [1]. However, with a high frame rate and a large field-of-view (FOV), shot noise dominates pixel-wise measurements, and neuronal signals are difficult to identify in single-frame raw measurements. A further issue is that although deep-learning-based methods have exhibited promising results in image denoising [2], traditional supervised learning is not applicable to this problem because of the lack of ground-truth “clean” (high-SNR) measurements. To address these issues, we developed a self-supervised deep learning framework for voltage imaging denoising (DeepVID) that requires no ground-truth data. Inspired by previous self-supervised algorithms [3,4], DeepVID infers the underlying fluorescence signal based on the independent temporal and spatial statistics of the measurement that are attributable to shot noise. DeepVID achieved a 15-fold improvement in SNR when comparing denoised and raw image data.

2. Methods: 
DeepVID combines the self-supervised frameworks implemented in DeepInterpolation [3] and Noise2Void [4]. The network was designed to denoise a single frame from each sub-area at a time. It was trained to predict the central frame N0 using an input image time series consisting of Npre frames before and Npost frames after the central frame, in addition to a degraded central frame with several “blind” pixels. A random set of pixels (pblind) in the central frame were designated blind pixels using a binary mask; their intensities were replaced by values sampled from randomly selected pixels elsewhere in the frame. The network architecture of DeepVID was based on DnCNN [2], a fully convolutional network with residual blocks (Figure 1A). This architecture was chosen to better accommodate the 8:1 aspect ratio of the sub-image scanned by each beamlet. The network was constructed from 2D convolution (Conv) layers, batch normalization (BN) layers, and Parametric Rectified Linear Unit (PReLU) activation layers, with 16 repeated residual blocks in the middle. Each residual block contained two 3×3 Conv layers, each followed by a BN layer, with a PReLU activation layer appended after the first BN layer. Skip connections linked low-dimensional and high-dimensional features by adding the input feature map to the output of each residual block.
The hyperparameters (Npre = Npost = 3, pblind = 10%) were optimized to maintain the temporal dynamics of voltage signal spikes while recovering high single-frame spatial resolution. The loss function was the mean squared error (i.e., L2 loss) between the original and denoised central frame, calculated only on the blind pixels. Training was performed using the Adam optimizer with 360 steps per epoch and a batch size of 4, and was stopped after a single pass over all samples in the data set to avoid overfitting. The learning rate was initialized at 5×10⁻⁶ and reduced to 1×10⁻⁶ when the loss on the validation set did not decrease over the previous 288,000 samples. Once DeepVID was trained, inference denoising of subsequent image data was performed frame-by-frame by feeding each corresponding 7-frame image time series, at approximately 200 frames per second on a single Nvidia P100 GPU.
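The blind-pixel degradation step described above can be sketched as follows. This is a generic Noise2Void-style masking routine on a toy frame — the function name and the way donor pixels are drawn are illustrative assumptions, not the DeepVID source code.

```python
import numpy as np

def make_blind_frame(frame, p_blind=0.10, rng=None):
    """Degrade the central frame Noise2Void-style: replace a random
    fraction of pixels with values copied from other randomly chosen
    pixels of the same frame. Returns the degraded frame and the mask
    marking the blind pixels (the loss is evaluated only there)."""
    rng = rng or np.random.default_rng()
    degraded = frame.copy()
    mask = rng.random(frame.shape) < p_blind            # blind-pixel mask
    donors = rng.choice(frame.ravel(), size=int(mask.sum()))
    degraded[mask] = donors
    return degraded, mask

frame = np.arange(64.0).reshape(8, 8)
degraded, mask = make_blind_frame(frame, rng=np.random.default_rng(1))
# The training loss is then computed on blind pixels only, conceptually:
#   loss = ((network_output[mask] - frame[mask]) ** 2).mean()
```

Restricting the loss to the masked pixels is what prevents the network from learning the identity mapping: at those locations it never sees the true value it is asked to predict.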

3. Results: 
Figure 1: The framework, the network structure, and example results of DeepVID.
We first assessed the frame-to-frame variability in fluorescence signal in the raw data and confirmed that the fluctuation in each pixel is proportional to the square root of the mean fluorescence (Figure 1B), as expected for shot-noise-limited signals. DeepVID drastically reduced the frame-to-frame variability, resulting in a 15-fold improvement in SNR when comparing denoised and raw image data (SNR: 0.567±0.002, raw; 8.858±0.027, denoised; n = 8,000 pixels) (Figure 1C). By breaking this fundamental noise constraint, the underlying fluorescence signal can be more accurately inferred at individual time points (Figure 1D). The reduction in shot-noise fluctuations in denoised traces readily allowed identification of potential sensory-evoked and non-evoked spiking events (Figure 1E).

4. References
[1] Platisa, J., Ye, X., Ahrens, A. M., Liu, C., Chen, I. A., Davison, I. G., Tian, L., Pieribone, V. A., & Chen, J. L. (2021). High-Speed Low-Light In Vivo Two-Photon Voltage Imaging of Large Neuronal Populations. BioRxiv, 2021.12.07.471668.
[2] Zhang, K., Zuo, W., Chen, Y., Meng, D., & Zhang, L. (2016). Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Transactions on Image Processing, 26(7), 3142–3155.
[3] Lecoq, J., Oliver, M., Siegle, J. H., Orlova, N., Ledochowitsch, P., & Koch, C. (2021). Removing independent noise in systems neuroscience data using DeepInterpolation. Nature Methods, 1–8.
[4] Krull, A., Buchholz, T.-O., & Jug, F. (2018). Noise2Void – Learning Denoising from Single Noisy Images. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019-June, 2124–2132.


Carolyn Marar

Doctoral Candidate, Boston University

PI: Ji-Xin Cheng

Department: Biomedical Engineering

Poster Title
Wireless Neuromodulation at Submillimeter Precision Via a Microwave Split-ring Resonator

Poster Presentation

 

Abstract
Current wireless neuromodulation techniques have poor spatial resolution or cannot reach the deep brain due to tissue scattering. Microwaves, with wavelengths on the order of millimeters, have high penetration depth and have been shown to reversibly inhibit neuronal activity. Here, we report the use of an implantable split-ring resonator (SRR) to generate a localized microwave field in the deep brain with submillimeter spatial precision. The SRR breaks the microwave diffraction limit and enhances the efficiency of microwave inhibition. With the SRR, microwaves at power densities below the safe exposure limit can inhibit neurons within ~200 µm of the gap site. We demonstrate application of the microwave SRR in an in vivo model of epilepsy. Future work may explore applications in chronic pain and movement disorders.


Amy Monasterio

Doctoral Candidate, Boston University

PI: Ben Scott, Steve Ramirez
Department: Psychological and Brain Sciences

Poster Title
Dynamics of Hippocampal Fos-tagged Cell Ensembles Before and After Learning

Poster Presentation

 

Abstract
Decades of research have shown that memory formation results in strengthened synaptic connections between networks of hippocampal neurons. These distributed connections between cells are believed to partly comprise the physical basis for memories, sometimes referred to as an “engram”. With the application of activity-dependent genetic tools, significant progress has been made in identifying the networks of cells contributing to an engram for individual memories. These tools utilize inducible genetic constructs to selectively tag populations of active cells based on their expression of immediate-early genes, such as Fos, during learning. Studies identifying Fos-tagged cells in dorsal hippocampus revealed that these cells are activated during learning, support stable memory formation, and are both necessary and sufficient for the behavioral expression of memory recall. However, the in vivo mechanisms by which Fos-tagged cell ensembles emerge in learning and interact with local hippocampal networks have yet to be explored. One model suggests that Fos-tagged ensembles are strengthened in a Hebbian manner during learning, predicting increased correlations between Fos-tagged cells. Here, we demonstrate a novel approach to test such a model and describe how Fos-tagged cell ensembles are shaped by learning. Using large-scale two-photon calcium imaging, we simultaneously recorded spontaneous activity in both Fos-tagged and non-tagged CA1 populations before and after fear memory formation. Our preliminary results demonstrate elevated activity rates in Fos-tagged cells, and future analyses will explore differences in correlation structure within Fos-tagged and non-tagged subpopulations after learning. Our ongoing work further explores how CA1 networks are reorganized by learning in these subpopulations and aims to evaluate whether Fos-tagged cells constitute a coordinated cell assembly after learning.
Ultimately, characterizing the network activity of Fos-tagged cells yields a more complete understanding of how these hippocampal populations contribute to engram formation in health and disease.


Joe O’Brien

Doctoral Candidate, Boston University

PI: David Boas
Department: Electrical and Computer Engineering

Poster Title
NinjaNIRS 2021: Continued Progress Towards Whole Head, High Density fNIRS

Poster Presentation

Abstract
Functional near infrared spectroscopy (fNIRS) technology has become a valuable tool for neuroimaging. Advancements in this technology over the past 20 years have enabled wearable and fiberless imaging systems to be designed, but system weight, channel count, and portability remain major concerns.[1] In this work, we present a modular, open-source, wearable fNIRS system capable of high-density optode arrangements with sufficient portability for use in the everyday world.

The NinjaNIRS 2021 fNIRS imaging system iterates on the progress made with our previous NinjaNIRS 2020 system, utilizing the same control unit components but with improvements to the user interface and optode design.[2] Two new optode modules have replaced the dual source and detector optode used in NinjaNIRS 2020, separating the emitting and detecting components. 
The first of these new optodes is a dual wavelength source emitting at 730 nm and 850 nm and contains all the LED driving circuitry for the system. The second of these optodes is a detector containing a PIN photodiode and 18-bit ADC to digitize the signal. Separating the emitting and detecting elements leads to a reduction in size compared to the dual optode design, enabling a minimum source-detector separation of 12 mm. Using the current control unit design, the system is scalable up to 8 sources and 12 detectors with the potential for 32 sources and 108 detectors to be controllable with a single unit.

These new optodes are packaged in a two-stage encapsulation consisting of an epoxy potting compound to protect the sensitive electronics and a non-toxic, two-part silicone rubber that isolates the heat generated. The silicone rubber also provides an ergonomic improvement over the 3D printed nylon enclosure used in our dual optode design. In the future we plan to integrate these optodes with EEG; preliminary tests have shown that interference between the EEG recording and the sources and detectors is minimal and can easily be filtered from the signal.

We assessed the optical performance of the optodes by measuring the emitted power with a laser power meter, and by passing the light between a source and a detector through two stacked neutral density filters. The filters were swapped with filters of varying optical densities by two motorized filter wheels to determine the system noise equivalent power (NEP). This testing showed an NEP of 116 fW/√Hz at 730 nm for unencapsulated detector optodes, and optical powers of 5.5 ± 0.5 mW at 730 nm and 11.5 ± 0.5 mW at 850 nm for finished sources.

Human studies with the system are ongoing and have yielded promising results, showing sufficient sensitivity to measure hemodynamic responses in both the frontal and motor regions of the scalp through hair. Additionally, a source-only optode is being developed for use with LiveAmp EEG electrodes to allow concurrent EEG-fNIRS measurements with a source and electrode mounted in the same location. Further refinement is still required, however, for the system to be used in subjects with especially dense, dark brown or black hair, as imaging through this hair type leads to significant pruning of available measurement channels.

The move to the new single optode with our NinjaNIRS 2021 system has enabled higher density and improved customizability of probe design than was previously possible with our devices. With the reduced optode separation, we are now seeking to control larger numbers of optodes with a single system, allowing high-density, whole-head fNIRS measurements to be taken while still maintaining the portability required for functional imaging in real-world environments outside of the lab.
1. Zhao, H., & Cooper, R. J. (2017). Review of recent progress toward a fiberless, whole-scalp diffuse optical tomography system. Neurophotonics, 5(01), 1. https://doi.org/10.1117/1.nph.5.1.011012
2. A. von Lühmann, B. B. Zimmermann, A. Ortega-Martinez, N. Perkins, M. A. Yücel, and D. A. Boas, “Towards Neuroscience in the Everyday World: Progress in wearable fNIRS instrumentation and applications,” in Biophotonics Congress: Biomedical Optics 2020 (Translational, Microscopy, OCT, OTS, BRAIN), OSA Technical Digest (Optical Society of America, 2020), paper BM3C.2.


Rhushikesh Phadke

Doctoral Candidate, Boston University

PI: Alberto Cruz Martin

Department: Biology

Poster Title
Highly Unstable Heterogenous Representations in VIP Interneurons of the Anterior Cingulate Cortex

Poster Presentation

 

Abstract
A hallmark of the anterior cingulate cortex (ACC) is its functional heterogeneity. Functional and imaging studies revealed its importance in the encoding of anxiety-related and social stimuli, but it is unknown how microcircuits within the ACC encode these distinct stimuli. One type of inhibitory interneuron, which is positive for vasoactive intestinal peptide (VIP), is known to modulate the activity of pyramidal cells in local microcircuits, but it is unknown whether VIP cells in the ACC (VIPACC) are engaged by particular contexts or stimuli. Additionally, recent studies demonstrated that neuronal representations in other cortical areas can change over time at the level of the individual neuron. However, it is not known whether stimulus representations in the ACC remain stable over time. Using in vivo Ca2+ imaging and miniscopes in freely behaving mice to monitor neuronal activity with cellular resolution, we identified individual VIPACC that preferentially activated to distinct stimuli across diverse tasks. Importantly, although the population-level activity of the VIPACC remained stable across trials, the stimulus-selectivity of individual interneurons changed rapidly. These findings demonstrate marked functional heterogeneity and instability within interneuron populations in the ACC. This work contributes to our understanding of how the cortex encodes information across diverse contexts and provides insight into the complexity of neural processes involved in anxiety and social behavior.


Naomi Shvedov

Doctoral Student, Boston University

PI: Ben Scott

Department: Graduate Program for Neuroscience

Poster Title
In-vivo Imaging in Genetically Modified Songbirds Reveals Dynamics of Neuron Migration in the Adult Brain

Poster Presentation

 

Abstract
Adult neurogenesis, the addition of new neurons to the mature brain, involves three main stages: cell birth, migration, and integration. Significant progress has been made identifying mechanisms that give rise to the birth and integration of new neurons. However, much less is known about the migratory mechanisms that allow for the dispersion of new neurons throughout the adult forebrain.

Songbirds offer a unique opportunity to study the mechanisms of migration in the adult brain. Neuronal migration is widespread throughout the songbird forebrain, where both excitatory projection neurons and several classes of interneurons are added to circuits that control learning. These adult-born neurons migrate through superficial brain regions, allowing for optical access. Thus, songbirds serve as a useful model to identify the mechanisms that regulate adult neurogenesis in-vivo.

Here we combine genetic labeling with optical imaging to study the migration of new neurons in the songbird forebrain. First, we demonstrate that in previously developed Ubiquitin-C-GFP transgenic zebra finches (Agate et al. 2009), green fluorescent protein (GFP) is highly expressed within the lateral ventricle neurogenic zone, as well as in cells that express markers for migratory and mature neuron phenotypes. Next, we show that this labeling is sparse, which facilitates identification and tracking in-vivo. Finally, using two-photon, volumetric time-lapse microscopy we are able to resolve and follow hundreds of mature neurons and migratory neuroblasts in-vivo in the intact brain.

Using this approach, we quantified the trajectories of migratory neuroblasts (n=331) over hours across several brain regions. Cells migrated at an average rate of 16 microns/hr and exhibited saltatory nucleokinesis and nonlinear trajectories (mean tortuosity = 1.74). This behavior is consistent with a unique form of “wandering” migration previously observed in the song nucleus HVC (Scott et al. 2012). In the present work, we observe and characterize wandering migration across several regions including HVC, the hippocampal-parahippocampal region, and the female nidopallium. These results imply that the phenomenon of wandering neuronal migration is more widespread than previously thought and demonstrate the utility of transgenic songbirds for the study of adult neurogenesis.
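Tortuosity, as cited above, is conventionally the ratio of path length to straight-line (net) displacement, so a straight trajectory scores 1 and wandering trajectories score higher. A minimal sketch of that computation (the function name and 2D coordinates are illustrative, not the authors' analysis code):

```python
import numpy as np

def tortuosity(positions):
    """Path length divided by net (straight-line) displacement.
    positions: (T, 2) array of x, y coordinates over time."""
    steps = np.diff(positions, axis=0)
    path_length = np.linalg.norm(steps, axis=1).sum()
    net_displacement = np.linalg.norm(positions[-1] - positions[0])
    return path_length / net_displacement

# A straight path has tortuosity 1; a right-angle detour scores higher.
straight = np.array([[0, 0], [1, 0], [2, 0]], dtype=float)
bent = np.array([[0, 0], [1, 0], [1, 1]], dtype=float)
print(tortuosity(straight))  # 1.0
print(tortuosity(bent))      # 2/sqrt(2) ≈ 1.414
```

A mean tortuosity of 1.74 thus means the neuroblasts' paths were on average about 74% longer than the straight line between their start and end points.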


Sandya Subramanian

Undergraduate Researcher, Boston University

PI: Michael Economo

Department: Biomedical Engineering

Poster Title
Visualizing the Transcriptomic Identities of Brainstem Cells in Circuits

Poster Presentation

 

Abstract
The brain contains many different cell types which can be distinguished by the unique sets of genes that they express. These cell types connect to form the circuits that are responsible for diverse behaviors. In the brainstem, these circuits carry out numerous essential physiological functions, from maintaining heart rate to coordinating movement of head and neck muscles. However, combining circuit mapping and transcriptomic approaches to determine which cell types form these critical circuits has remained challenging. Here, we combine anterograde transsynaptic viral tracing using AAV1-Cre with a novel approach for highly-multiplexed fluorescence in situ hybridization (mFISH) to determine the transcriptomic identities of cells in the brainstem that participate in motor circuits. Leveraging single-cell RNA-Seq data from the brainstem, we identified “marker genes” that can be used to classify unique cell types and designed mFISH probes to target these transcripts across multiple rounds of staining in intact tissue. We then used these probe sets to visualize the genes expressed by brainstem neurons that were anterogradely labeled following an injection of AAV1-Cre into the motor cortex. Using this combinatorial methodology, we are not only able to construct a first-of-its-kind atlas of the distinct cell types found across the brainstem, but we can also determine the transcriptomic identity of cells that contribute to circuits essential for brainstem-associated behaviors.


Stephen Tucker

Research Fellow, Boston University

PI: David Boas

Department: Biomedical Engineering

Poster Title
Real-time Lateral Motion Correction for Two-Photon Microscopy Using Optical Coherence Tomography

Abstract
Laser scanning two-photon microscopy is an unparalleled platform for the study of functional dynamics in the living tissues of awake, behaving animals, but the bulk motion inherent to these applications results in distorted images. Post hoc methods for the registration of raster-scanned images rarely improve the resolution of highly localized events, as they cannot restore information that is simply lost when an unintended volume is imaged: active adjustment of the focus relative to the sample in real time is needed. Here, a spectral domain optical coherence tomography (SD-OCT) system serves as an optical flow sensor that rapidly estimates a sample’s lateral displacement while the primary imaging system, a Bessel beam two-photon microscope, operates normally. The displacement estimates are digitally filtered and used to drive the primary imaging system’s galvanometric scanning mirrors, compensating for displacements in real time and yielding a stabilized image. The latency and bandwidth of the correction system are evaluated using a tissue phantom, and in vivo motion correction is demonstrated in awake head-fixed mice.
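The correction loop described above (filter the OCT displacement estimates, then offset the galvo command) can be sketched as a minimal feedback controller. The abstract does not specify the filter design, so the one-pole low-pass filter, class name, and parameter values below are illustrative assumptions, not the actual system's implementation.

```python
import numpy as np

class LateralStabilizer:
    """Minimal sketch of a real-time lateral correction loop: each raw
    OCT-derived displacement estimate is low-pass filtered (one-pole IIR)
    and the result is negated to form a compensating galvo offset.
    Cutoff and update rate here are placeholder values."""

    def __init__(self, cutoff_hz=20.0, rate_hz=1000.0):
        # One-pole low-pass coefficient for the given cutoff and update rate.
        self.alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff_hz / rate_hz)
        self.offset = np.zeros(2)  # filtered (x, y) displacement estimate

    def update(self, displacement_xy):
        """Feed one raw displacement estimate; return the galvo offset."""
        self.offset += self.alpha * (np.asarray(displacement_xy) - self.offset)
        return -self.offset  # command that cancels the estimated motion
```

The filter trades correction bandwidth against noise rejection: a higher cutoff tracks faster motion but passes more of the flow sensor's estimation noise into the scan command.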


Yao Wang

Doctoral Candidate, Northeastern University

PI: Samuel Chung

Department: Biological Engineering

Poster Title
Targeted Illumination in Widefield Microscopy to Enhance Neuronal Fiber Contrast

Poster Presentation

Abstract
Widefield fluorescence imaging faces significant challenges in visualizing neuronal fibers near cell bodies. Specifically, out-of-focus and scattered light from the bright cell body often obscures nearby dim fibers and degrades their signal-to-background ratio. Scanning techniques, such as confocal and two-photon microscopy, can solve this problem but are limited by reduced imaging speed and increased cost, making them less accessible than widefield microscopes. We greatly reduce stray light by modulating the illumination intensity delivered to different structures, a strategy similar to Active Illumination Microscopy. We insert a simple spatial light modulator into a common widefield microscope and use real-time image processing to pattern our illumination. With this hardware and software setup, we illuminate bright cell bodies with minimal light intensity and fiber-like structures with high light intensity to highlight their weak signals. Illuminating bright structures with dim light reduces the scattering surrounding them, exposing nearby dim fibers. Moreover, in this targeted illumination setup, we primarily illuminate the in-focus portion of the neuronal fibers, minimizing background and enhancing the visibility of fibers in the final image. This targeted illumination significantly improves fiber contrast while maintaining a fast imaging speed and low cost. Using a targeted illumination setup in a widefield microscope, we demonstrate confocal-quality imaging of complex neurons in the nematode C. elegans.
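The core mapping (dim light on bright structures, full light on dim structures) can be sketched as computing a spatial-light-modulator mask from a widefield preview frame. The inverse-intensity mapping and the lower/upper bounds below are illustrative assumptions, not the authors' exact real-time algorithm.

```python
import numpy as np

def illumination_pattern(preview, i_min=0.05, i_max=1.0):
    """Compute an SLM illumination mask from a widefield preview frame:
    bright regions (cell bodies) receive dim light and dim regions (fibers)
    receive full light. The inversion and the i_min/i_max clamp are
    placeholder choices for illustration."""
    img = np.asarray(preview, dtype=float)
    norm = (img - img.min()) / (img.max() - img.min() + 1e-12)  # scale to [0, 1]
    return np.clip(1.0 - norm, i_min, i_max)                    # invert and bound
```

A nonzero floor (`i_min`) keeps cell bodies faintly visible for registration across frames, while the cap (`i_max`) limits photobleaching of the fully illuminated fibers.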


Brandon Williams

Doctoral Candidate, Boston University

PI: John White

Department: Biomedical Engineering

Poster Title
Fast-Spiking Interneurons Generate Gamma Oscillations in the Medial Entorhinal Cortex in the Absence of Excitatory Cell Input

Poster Presentation

 

Abstract
Many cells in layer II of the medial entorhinal cortex (mEC) exhibit spatially tuned firing fields that form a grid-like pattern as the animal traverses an open field (‘grid cells’). Grid cell firing rates are modulated by a network-wide theta (4-12 Hz) oscillation. Further, higher-frequency gamma (40-140 Hz) oscillations are nested within the slower theta oscillation and are believed to help synchronize grid cell spike output (Hafting et al. 2008 Nature 453:1248-52; Reifenstein et al. 2012 PNAS 109:6301-6306). Two different (but not mutually exclusive) mechanisms have been proposed to generate gamma oscillations. In the pyramidal-interneuron network gamma (PING) model, excitation of pyramidal cells activates local interneurons, which provide inhibitory feedback to the pyramidal cells. Alternatively, the interneuron network gamma (ING) model proposes that inhibition between interneurons can lead to synchronous inhibition. To address the potential network mechanisms of theta-nested gamma, we used intracellular recordings in slices to measure excitatory and inhibitory postsynaptic currents in stellate, pyramidal, and fast-spiking cells during optogenetic stimulation. No differences were observed in the gamma frequency, theta phase, or maximum power of the inhibition-driven gamma oscillations among the main cell types in the mEC. Excitation-driven gamma oscillations were observed in fast-spiking interneurons, but not excitatory cells, consistent with the low degree of recurrent excitatory connectivity (Fuchs et al. 2016 Neuron 89:194-208). Importantly, inhibition-driven gamma oscillations persisted after AMPA/kainate receptors were pharmacologically blocked, contrary to a similar study (Pastoll et al. 2013 Neuron 77:141-154) but consistent with local field potential recordings in mEC (Butler et al. 2018 Eur J Neurosci. 48:2795-2806).
In support of the ING model, preliminary data indicate that inhibition-driven gamma oscillations can be observed when optogenetically activating only parvalbumin-expressing (PV+) interneurons. These results suggest that a fast-firing inhibitory network is sufficient for generating gamma oscillations in the mEC. Therefore, both PING and ING mechanisms likely contribute to theta-nested gamma oscillations in the mEC.


Yujia Xue

Doctoral Candidate, Boston University

PI: Lei Tian

Department: Electrical and Computer Engineering

Poster Title
Single-shot Volumetric Fluorescence Imaging with a Computational Miniature Mesoscope

Poster Presentation

 

Abstract
Fluorescence imaging is indispensable to neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes. However, existing microscopes and miniscopes are constrained by a limited space-bandwidth product, shallow depth of field, and an inability to resolve emitters distributed in 3D. Here, we present a Computational Miniature Mesoscope that enables single-shot 3D imaging across an 8 × 7 mm² field-of-view and a 2.5-mm depth-of-field in a clear volume, achieving 7-μm lateral and 200-μm axial resolution. This expanded imaging capability is enabled by computational imaging. We experimentally validate and quantify the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers in the presence of scattering and background fluorescence.
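"Computational imaging" here means the raw sensor measurement is not the final image: the object is recovered by inverting a calibrated forward model. The mesoscope's actual reconstruction solves a regularized 3D inverse problem over depth-dependent point spread functions; the single-plane Wiener deconvolution below is only a minimal sketch of that principle, with all names and values assumed.

```python
import numpy as np

def wiener_deconvolve(measurement, psf, snr=100.0):
    """Recover an object from a measurement modeled as circular convolution
    with a known PSF, via frequency-domain Wiener filtering. A one-plane,
    shift-invariant toy version of computational image reconstruction; the
    real system handles a 3-D, depth-dependent model."""
    # ifftshift moves the PSF's center pixel to the (0, 0) origin expected
    # by the FFT convolution convention.
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(measurement)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener filter
    return np.real(np.fft.ifft2(W * G))
```

The `1/snr` term regularizes frequencies where the PSF transfers little energy, trading resolution for noise robustness, which is the same trade-off any regularized reconstruction must manage.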