CISE Faculty Recognized for Groundbreaking AI Research with AIRR Awards

Eleven CISE faculty affiliates have been honored with AI Research Resource (AIRR) awards for their cutting-edge research in artificial intelligence. This Hariri Institute program supports AI research at Boston University by giving researchers access to the New England Research Cloud (NERC), a regional computing infrastructure that provides cloud-based resources tailored to academic research. A collaboration between Harvard University and Boston University, NERC offers scalable computing power, data storage, and AI tools to support research fields ranging from biomedical sciences to social analytics.

These projects span a wide array of disciplines, from cloud computing and cybersecurity to medical research, demonstrating the breadth of CISE faculty contributions to AI-driven research.

Learn more about the projects:

Advancing Cloud Operations Through AI

The project “AI for Automating Cloud Operations,” led by CISE Director and professor Ayse Coskun (ECE, SE), tackles key challenges in cloud operations through advanced AI techniques. By focusing on vulnerability detection and anomaly management, the research team, which also includes CISE faculty affiliates associate professors Gianluca Stringhini (ECE), Manuel Egele (ECE), and Brian Kulis (ECE, CS, SE), aims to enhance the efficiency and security of cloud-based systems. Their work holds immense potential for optimizing cloud infrastructure and improving data security.

Enhancing Sustainability and Efficiency in Computing

Another project led by Coskun, “AI for Improving Efficiency and Sustainability of Computing,” focuses on increasing the energy efficiency of high-performance computing (HPC) systems and data centers. Collaborating again with Stringhini, Egele, and Kulis, as well as with CISE faculty affiliates professor Ajay Joshi (ECE) and associate professor Emiliano Dall’Anese (ECE, SE), the team is working on making large-scale computing systems, like data centers, more energy-efficient and environmentally friendly. They’re using AI to manage computing resources better, automatically detect performance issues, and adjust power usage as needed.

“Data centers require a lot of electricity to process the data, and they have these agreements with power supply companies like Eversource and National Grid; the companies and the data centers have their own arrangements,” Joshi said. “Data centers don’t require the same amount of electricity or energy all the time, so if there is some way for data centers to talk to each other and then track the demand of these data centers, then it can lead to a better solution, both from the data center perspective as well as the power provider perspective.”

Data centers, which power everything from streaming services to cloud storage, use a massive amount of electricity. The team is developing innovative ways for these centers to share their power needs without exposing sensitive information. By applying AI-driven strategies, they’re trying to cut energy waste and lower carbon emissions while keeping computers running at peak performance. Testing these solutions on real systems will help bring them into widespread use.
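The idea of sharing power needs without exposing sensitive information can be illustrated with a toy additive-masking scheme. This is a hypothetical sketch, not the team's actual method: each pair of centers shares a random mask that one adds and the other subtracts, so a coordinator learns only the total demand, never any individual report.

```python
import random

def masked_reports(demands, seed=0):
    """Return per-center demand reports with pairwise random masks
    that cancel in the aggregate: the sum of the reports equals the
    true total, but no single report reveals a center's true demand."""
    rng = random.Random(seed)
    n = len(demands)
    reports = list(demands)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-100, 100)  # mask shared by centers i and j
            reports[i] += r             # center i adds the mask
            reports[j] -= r             # center j subtracts it
    return reports

# Hypothetical per-center demand figures (MW)
demands = [42.0, 57.5, 33.0]
reports = masked_reports(demands)
total = sum(reports)  # masks cancel, so this equals sum(demands)
```

In a real deployment the pairwise masks would be derived from keys the centers exchange securely; the toy version above just shows why the aggregate stays exact while individual reports are obscured.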

“There’s AI for sustainability, where we can come up with sophisticated AI algorithms to predict broadly how we are going to use our resources and how we can get the most bang for our buck,” Joshi said. “The other way around is sustainable AI. Now they’re investing a lot of money in setting up more data centers, and more data centers means more carbon dioxide being emitted to the environment.”

Democratizing Multimodal Models

Professor Venkatesh Saligrama (ECE, SE) is leading the project “Democratizing Research on Multimodal Foundation Models,” which aims to expand access to cutting-edge AI research. By addressing key issues such as fairness and efficiency in foundation models, the project ensures that researchers with limited computational resources can contribute meaningfully to this rapidly evolving field. This initiative fosters inclusivity and broadens the scope of AI advancements beyond well-funded institutions. 

Understanding AI Through Human Development

Saligrama is also leading “BabyGPT: Developmentally Plausible Learning of Multimodal Foundation Models,” an innovative project that uses egocentric video data from young children to explore humanlike learning mechanisms. By investigating how language facilitates vision-related tasks, this research contributes to cognitive science and artificial intelligence, providing insights into efficient and developmentally plausible AI systems.

AI-Driven Advancements in Biomedicine

Professor Sandor Vajda’s (BME, SE, CHEM) project focuses on using large language models (LLMs) to improve the prediction of binding affinity and classification in biomedical applications. This project aims to enhance the accuracy of structural models for antibody-antigen and MHC-peptide complexes, providing critical advancements in drug discovery and immunotherapy research.

Multi-Task AI 

Associate Professor Wenchao Li (ECE, SE) is leading two critical projects. The first, “Multi-Task Decision Transformers,” explores how reinforcement learning can help AI better generalize across multiple decision-making tasks. 

AI Decision-Making

Li’s second project, “Uncertainty Quantification for Vision-Language Models,” develops techniques to improve the reliability of AI-driven medical applications, particularly in generating automatic radiology reports. These projects highlight AI’s growing role in decision-making and diagnostic applications.

Strengthening Cybersecurity with AI-Driven Graph Models

Professor David Starobinski’s (ECE, SE, CS) project, “Large Graph Models for Cybersecurity,” uses advanced AI techniques to improve cybersecurity by automating key tasks like spotting vulnerabilities and fixing errors in threat databases. 

Starobinski explained that “software engineers or security analysts work on large-scale software projects, and they use software developed by third parties. The goal is to figure out whether this third-party software is secure or could be exploited for cyber security breaches, and even if it’s currently secure, whether in the future, someone could exploit it.” 

One of the main tools for this is a “knowledge graph,” which maps out connections between different software programs and known security risks, helping experts predict and prevent future threats. Starobinski’s focus is on identifying weaknesses in third-party software. 
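As a rough illustration of the knowledge-graph idea (all component names and vulnerability IDs below are invented for illustration, not drawn from the project), a dependency graph can be traversed to find which known vulnerabilities a piece of software is transitively exposed to through its third-party dependencies:

```python
from collections import deque

# Toy knowledge graph: edges from software components to the third-party
# packages they depend on, and from packages to known vulnerability IDs.
graph = {
    "billing-service": ["libssl-x", "json-parse-y"],
    "libssl-x": ["CVE-0000-0001"],
    "json-parse-y": [],
}

def reachable_vulns(graph, component):
    """Breadth-first search over dependency edges, collecting every
    vulnerability ID reachable from `component`."""
    seen, vulns = {component}, set()
    queue = deque([component])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, []):
            if nbr.startswith("CVE-"):
                vulns.add(nbr)
            elif nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return vulns
```

The project's actual graph models are far richer than this adjacency dictionary, but the traversal captures the basic question an analyst asks: which weaknesses can reach my software through its supply chain?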

AI for Social Media Content Moderation

Gianluca Stringhini is also involved in a second project, “Developing an AI-based Content Moderation Pipeline for Social Media.” This project researches how advanced AI techniques can be utilized to make social media platforms safer by detecting harmful content like hate speech, misinformation, and scams. The system improves content moderation online by combining language models, image analysis, and ranking algorithms.

He explained, “We’ve been developing tools to automatically identify things like scams, hate speech, or false content, using a set of techniques including computer vision, information retrieval, traditional machine learning, and supervised learning. More recently, we started investigating whether AI models, large language models, or visual transformers can help.”  

Stringhini is developing AI-powered tools beyond traditional moderation methods, which often lack context or struggle with complex content. His team’s approach aims to be more accurate and adaptable, making moderation fairer and more effective. 
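A moderation pipeline of the kind described, one that combines text and image signals and then ranks content by risk, can be sketched in miniature. The keyword score below is a crude stand-in for a language model, and the whole example is a hypothetical illustration rather than the team's pipeline:

```python
def text_risk(text, blocklist=("scam", "hate")):
    """Toy text score: fraction of blocklisted terms present.
    A real system would use a trained language model here."""
    t = text.lower()
    return sum(w in t for w in blocklist) / len(blocklist)

def moderate(posts, threshold=0.4):
    """Combine a text score with a precomputed image score, rank
    posts by overall risk, and flag those above the threshold."""
    scored = [(0.7 * text_risk(p["text"]) + 0.3 * p["image_score"], p)
              for p in posts]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [(round(score, 2), p["text"], score >= threshold)
            for score, p in scored]

posts = [
    {"text": "Totally legit scam offer", "image_score": 0.9},
    {"text": "Cute cat photo", "image_score": 0.1},
]
```

The fixed 0.7/0.3 weighting and the flagging threshold are arbitrary choices for the sketch; in practice such weights would be learned, and flagged items would typically be routed to human reviewers rather than removed automatically.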

Revolutionizing Neuropathology with AI

Associate Professor Lei Tian’s (ECE, BME) new research project, “Developing a Neuropathology Foundation Model for Advancing Analysis of Neurodegenerative Diseases,” utilizes the massive database of digital brain scans from BU’s Alzheimer’s Disease Research Center. The team on this project is building NeuroPath, the first AI model designed specifically for neuropathology. This advanced tool will help scientists better understand neurodegenerative diseases, leading to earlier diagnoses and improved treatments.

Funded through the AIRR awards, these projects are making significant strides in AI innovation. From strengthening cloud security to expanding AI accessibility and improving medical diagnostics, CISE faculty members are pushing the limits of what AI can do. Their work reinforces BU’s status as a leader in artificial intelligence and promises real-world benefits across multiple fields, shaping a safer and more efficient future.

Read more about the Hariri Institute’s BU-AIRR Award Recipients here.