New Research Introduces Privacy-Preserving AI Framework for Detecting Cognitive Impairment from Voice Recordings
By Hariri Institute Staff
A collaborative team of researchers, including Hariri Institute faculty affiliates Vijaya Kolachalama, PhD, FAHA, associate professor of medicine, and Rhoda Au, professor of epidemiology, anatomy & neurobiology, has developed a new computational framework that can detect early signs of cognitive impairment from digital voice recordings—while protecting individual privacy. The findings, published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association, mark a significant step forward in the ethical application of AI to healthcare diagnostics.
Digital voice recordings offer a rich source of information for detecting cognitive decline. Variations in speech rate, pitch, articulation, and pauses can help distinguish normal cognition (NC) from mild cognitive impairment (MCI) and dementia (DE). However, these recordings often include identifiable features such as gender, accent, or emotional tone, raising concerns about privacy and data misuse.
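To make the kinds of acoustic measures involved concrete, the short sketch below shows how pitch statistics and a pause proportion might be pulled from a recording with the open-source librosa library. The feature set, silence threshold, and function names here are illustrative assumptions and are not taken from the study's own pipeline.

```python
import numpy as np
import librosa

def basic_voice_features(path, top_db=30):
    """Illustrative acoustic features (not the authors' feature set)."""
    # Load the recording at its native sampling rate.
    y, sr = librosa.load(path, sr=None)

    # Fundamental-frequency (pitch) track via probabilistic YIN;
    # unvoiced frames come back as NaN, so we use nan-aware statistics.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Non-silent intervals; everything below the threshold counts as a pause.
    speech_intervals = librosa.effects.split(y, top_db=top_db)
    speech_samples = sum(int(end - start) for start, end in speech_intervals)
    pause_ratio = 1.0 - speech_samples / len(y)

    return {
        "pitch_mean_hz": float(np.nanmean(f0)),
        "pitch_std_hz": float(np.nanstd(f0)),
        "voiced_fraction": float(np.mean(voiced_flag)),
        "pause_ratio": float(pause_ratio),
        "duration_s": len(y) / sr,
    }
```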
The BU-led research team introduced an innovative solution: a machine learning framework that uses pitch-shifting and other audio transformations—such as time-scale modification and noise addition—to anonymize speech data while retaining key acoustic features essential for cognitive analysis.
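As an illustration of what such transformations look like in practice, the sketch below applies pitch-shifting, time-scale modification, and additive noise to a recording using librosa and soundfile. The parameter values (semitone shift, stretch rate, noise level) and the choice of libraries are assumptions made here for demonstration; this is not the authors' implementation.

```python
import numpy as np
import librosa
import soundfile as sf

def obfuscate(in_path, out_path, n_steps=-3.0, rate=1.05, noise_snr_db=30.0):
    """Sketch of voice obfuscation: pitch shift, time stretch, additive noise."""
    y, sr = librosa.load(in_path, sr=None)

    # Shift the pitch by `n_steps` semitones to mask speaker identity.
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

    # Mildly alter the speaking rate (time-scale modification) without changing pitch.
    y = librosa.effects.time_stretch(y, rate=rate)

    # Add low-level Gaussian noise at the requested signal-to-noise ratio.
    signal_power = np.mean(y**2)
    noise_power = signal_power / (10.0 ** (noise_snr_db / 10.0))
    y = y + np.random.normal(0.0, np.sqrt(noise_power), size=y.shape)

    sf.write(out_path, y, sr)
```

In this setting, stronger transformations generally improve anonymization but can erode the acoustic cues a classifier relies on, which is why the team evaluated performance across varying levels of audio modification.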
Using datasets from the Framingham Heart Study (FHS) and DementiaBank Delaware (DBD), the team tested their approach across varying levels of audio modification. The system correctly distinguished among NC, MCI, and DE in 62% of FHS recordings and 63% of DBD recordings, demonstrating the promise of this approach in real-world settings.
This work opens the door to scalable, privacy-conscious voice-based screening tools for early detection of Alzheimer’s disease and related conditions.

According to the researchers, this work contributes to the ethical and practical integration of voice data in medical analyses, emphasizing the importance of protecting patient privacy while maintaining the integrity of cognitive health assessments. “These findings pave the way for developing standardized, privacy-centric guidelines for future applications of voice-based assessments in clinical and research settings,” adds Kolachalama, who is also an associate professor of computer science, an affiliate faculty member of the Hariri Institute for Computing, and a founding member of the Faculty of Computing & Data Sciences at Boston University.
Learn more in this press announcement by Boston University Chobanian & Avedisian School of Medicine.
This project was supported by grants from the National Institute on Aging’s Artificial Intelligence and Technology Collaboratories (P30-AG073104 and P30-AG073105), the American Heart Association (20SFRN35460031), Gates Ventures, and the National Institutes of Health (R01-HL159620, R01-AG062109, and R01-AG083735).
Paper citation: Meysam Ahangaran, Nauman Dawalatabad, Cody Karjadi, James Glass, Rhoda Au, Vijaya B. Kolachalama. Obfuscation via pitch-shifting for balancing privacy and diagnostic utility in voice-based cognitive assessment. Alzheimer’s & Dementia, 2025; 21(3). DOI: 10.1002/alz.70032