- MSE PhD Prospectus Defense: Haoxiang Yu, 10:00 am
- ECE PhD Thesis Defense: Ruangrawee Kitichotkul, 12:00 pm
- Urban Inequalities Workshop: Jennifer Candipan, 12:00 pm
- SE PhD Prospectus Defense: Jingmei Yang, 1:30 pm
- 2025 Early Childhood Keynote Lecture: Challenges to Child Health & Well-Being: A Global Perspective on Complex Systems, 5:00 pm
- Celebrating Persian Culture, 5:30 pm
- Dr. Minju Kim's Violin Studio Recital, 8:30 pm
SE PhD Prospectus Defense: Jingmei Yang
TITLE: Large Language Model Applications for Public Health
ADVISOR: Ioannis Paschalidis (ECE, SE, BME)
COMMITTEE: Diane Joseph-McCarthy (BME, Chemistry, MSE), Nahid Bhadelia (Medicine), Sandor Vajda (BME, SE, Chemistry)
ABSTRACT: Large Language Models (LLMs) are powerful general-purpose tools. The biomedical and healthcare domains, however, present unique, high-stakes challenges: general-purpose LLMs lack the deep, specialized knowledge these fields require, and their responses are often unreliable for complex clinical or research applications. Adaptation is therefore essential to bridge the gap between general capabilities and domain-specific requirements. This dissertation demonstrates that adapting LLMs with domain-specific data and methods is critical for solving high-impact problems in biomedicine and healthcare. It develops, applies, and evaluates a spectrum of adaptation techniques, exploring three primary strategies that balance computational cost against task specificity: prompt engineering, domain-adaptive continuous pre-training, and task-specific fine-tuning.

This research delivers three primary contributions. First, we apply prompt engineering to build an automated pipeline for biomedical literature review; the system identifies drug targets from large text corpora and achieves performance comparable to human experts for pathogens such as SARS-CoV-2 and Nipah. Second, we use continuous pre-training to develop PandemIQ Llama, a domain-specialized model trained on a curated Pandemic Corpus that substantially outperforms baseline models on public health tasks. Third, we use task-specific fine-tuning to design and implement BEACON, an open-access global outbreak surveillance system; this expert-in-the-loop platform is currently deployed, serving over 100 government and multilateral public health organizations and users across 154 countries.

In conclusion, this work establishes domain adaptation as an effective and necessary strategy for addressing real-world challenges in drug discovery, pandemic intelligence, and global health surveillance.
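Of the three adaptation strategies the abstract names, prompt engineering is the cheapest: the model is left unchanged and the task is encoded entirely in the input. A minimal sketch of how such a literature-screening pipeline might be shaped is below; it is not the dissertation's actual system, and `query_llm`, the prompt template, and the parsing convention are all illustrative assumptions (the stub stands in for a real hosted-LLM call).

```python
# Hypothetical sketch of a prompt-engineering pipeline for screening
# biomedical abstracts for drug-target mentions. This is NOT the
# dissertation's pipeline; query_llm is a deterministic stub standing
# in for a real LLM API call so the sketch runs end to end.

PROMPT_TEMPLATE = (
    "You are a biomedical literature reviewer.\n"
    "Abstract: {abstract}\n"
    "List any proteins named as potential drug targets, one per line. "
    "If none, answer NONE."
)

def query_llm(prompt: str) -> str:
    """Stub: a real system would send the prompt to a hosted LLM here."""
    # Toy heuristic: echo all-uppercase tokens from the abstract text,
    # which crudely resemble protein symbols (e.g. TMPRSS2).
    body = prompt.split("Abstract: ", 1)[1].rsplit("\nList any", 1)[0]
    hits = [w.strip(".,") for w in body.split() if w.isupper() and len(w) > 2]
    return "\n".join(dict.fromkeys(hits)) or "NONE"

def screen_abstracts(abstracts: list[str]) -> dict[str, list[str]]:
    """Prompt the model once per abstract and parse the reply."""
    results: dict[str, list[str]] = {}
    for i, abstract in enumerate(abstracts):
        reply = query_llm(PROMPT_TEMPLATE.format(abstract=abstract))
        results[f"paper-{i}"] = [] if reply == "NONE" else reply.splitlines()
    return results

if __name__ == "__main__":
    papers = [
        "Inhibition of TMPRSS2 blocks viral entry into host cells.",
        "A survey of hospital staffing levels during the pandemic.",
    ]
    print(screen_abstracts(papers))
```

The same loop-and-parse skeleton applies whether the backend is a general-purpose model (pure prompt engineering) or a domain-adapted one; only the model call changes, which is what makes the three strategies comparable on a shared evaluation.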
| When | 1:30 pm - 3:30 pm on 1 December 2025 |
|---|---|
| Building | ENG 245 |