SE PhD Final Defense: Jingmei Yang

  • Starts: 11:00 am on Friday, March 6, 2026
  • Ends: 2:00 pm on Friday, March 6, 2026

TITLE: Adapting AI Models for Healthcare Applications

ADVISOR: Ioannis Paschalidis (ECE, SE, BME)

CHAIR: Andrew Sabelhaus (ME, SE)

COMMITTEE: Nahid Bhadelia (Chobanian & Avedisian School of Medicine); Rhoda Au (Chobanian & Avedisian School of Medicine); Sandor Vajda (BME, SE)

ABSTRACT: General-purpose AI models often struggle to maintain performance in the healthcare domain due to mismatches in data distributions, available resources, and task requirements. Domain adaptation bridges this gap by tailoring AI models to specific clinical contexts, but no universal approach fits all constraints. This dissertation investigates how constraints drive strategy selection and how to adapt traditional machine learning, convolutional neural networks, and large language models to specific medical-domain problems. The study begins with domain adaptation for structured clinical data: models developed using high-resource features often become inapplicable when deployed in low-resource environments because of mismatched feature availability, so we develop models tailored to the feature sets available at different levels of care. Shifting from structured data to visual modalities, we examine adaptation strategies for vision models in neuropsychological assessments. Models pre-trained on natural images bear minimal visual similarity to clinical Trail Making Test drawings; we demonstrate that lightweight transfer learning (fine-tuning only a classification head) effectively adapts general-purpose vision models to this specialized clinical task. For natural language processing in biomedical literature screening, where labeled data and computational resources are limited, we show that prompt engineering provides a data-efficient alternative to full training for adapting general language models to domain-specific screening tasks. As our most intensive adaptation approach, we investigate deep domain specialization of language models for epidemiological applications. We introduce PandemIQ Llama, which undergoes continuous pre-training on a large domain-specific corpus before task-specific fine-tuning, and show that this approach outperforms both prompt engineering and standard fine-tuning. In conclusion, aligning adaptation strategies with available data, resources, and clinical context is essential for developing broadly applicable AI tools in healthcare. This dissertation demonstrates how systematically tailoring methods to specific constraints enables effective healthcare AI solutions.

LOCATION: CDS 1101

HOSTING PROFESSOR: Ioannis Paschalidis (ECE, SE, BME)