Weekly AIR Seminar 02/05

Starts: 1:00 pm on Wednesday, February 5, 2025
Ends: 2:00 pm on Wednesday, February 5, 2025
Chau's Talk:
Title: Enhancing Feature Diversity Boosts Channel-Adaptive Vision Transformers
Abstract: Multi-Channel Imaging (MCI) poses an array of challenges for encoding useful feature representations not present in traditional images. For example, images from two different satellites may both contain RGB channels, but the remaining channels can be different for each imaging source. Thus, MCI models must support a variety of channel configurations at test time. Recent work has extended traditional visual encoders for MCI, such as Vision Transformers (ViT), by supplementing pixel information with an encoding representing the channel configuration. However, these methods treat each channel equally, i.e., they do not consider the unique properties of each channel type, which can result in needless and potentially harmful redundancies in the learned features. For example, if RGB channels are always present, the other channels can focus on extracting information that cannot be captured by the RGB channels. To this end, we propose DiChaViT, which aims to enhance the diversity in the learned features of MCI-ViT models. This is achieved through a novel channel sampling strategy that encourages the selection of more distinct channel sets for training. Additionally, we employ regularization and initialization techniques to increase the likelihood that new information is learned from each channel. Many of our improvements are architecture agnostic and can be incorporated into new architectures as they are developed. Experiments on both satellite and cell microscopy datasets (CHAMMI, JUMP-CP, and So2Sat) show that DiChaViT yields a 1.5-5.0% gain over the state of the art.
Bio: Chau Pham is a fourth-year Ph.D. student at Boston University, advised by Prof. Bryan Plummer. His research focuses on machine learning, particularly efficient deep learning, computer vision, and vision-language learning.
Kevin's Talk:
Title: SPARC: Score Prompting and Adaptive Fusion for Zero-Shot Multi-Label Recognition in Vision-Language Models
Abstract: Zero-shot multi-label recognition (MLR) with Vision-Language Models (VLMs) faces significant challenges without training data, model tuning, or architectural modifications. Existing approaches require prompt tuning or architectural adaptations, limiting zero-shot applicability. Our work proposes a novel solution treating VLMs as black boxes, leveraging scores without training data or ground truth. Using large language model insights on object co-occurrence, we introduce compound prompts grounded in realistic object combinations. Analysis of these prompt scores reveals VLM biases and "AND"/"OR" signal ambiguities, notably that maximum compound scores are surprisingly suboptimal compared to second-highest scores. We address these through a debiasing and score-fusion algorithm that corrects image bias and clarifies VLM response behaviors. Our method enhances other zero-shot approaches, consistently improving their results. Experiments show superior mean Average Precision (mAP) compared to methods requiring training data, achieved through refined object ranking for robust zero-shot MLR.
Bio: Kevin Miller is a third-year Computer Science Ph.D. student working on computer vision and machine learning with Professor Venkatesh Saligrama. Prior to his Ph.D., Kevin was an early employee at a medical AI startup that used computer vision to track blood loss during surgeries. Kevin is interested in dealing with limited data and resources and in making AI safer, more reliable, and easier to understand.