Could a Computer Diagnose Alzheimer’s Disease and Dementia?
It takes a lot of time—and money—to diagnose Alzheimer’s disease. After running lengthy in-person neuropsychological exams, clinicians have to transcribe, review, and analyze every response in detail. But researchers at Boston University have developed a new tool that could automate the process and eventually allow it to move online. Their machine learning–powered computational model can detect cognitive impairment from audio recordings of neuropsychological tests—no in-person appointment needed. Their findings were published in Alzheimer’s & Dementia: The Journal of the Alzheimer’s Association.
“This approach brings us one step closer to early intervention,” says Yannis Paschalidis, a coauthor on the paper, a CISE faculty affiliate, and a BU College of Engineering Distinguished Professor of Engineering. He says faster and earlier detection of Alzheimer’s could drive larger clinical trials that focus on individuals in the early stages of the disease and potentially enable clinical interventions that slow cognitive decline: “It can form the basis of an online tool that could reach everyone and could increase the number of people who get screened early.”
The research team trained their model using audio recordings of neuropsychological interviews from more than 1,000 participants in the Framingham Heart Study, a long-running BU-led project looking at cardiovascular disease and other physiological conditions. Using automated online speech recognition tools—think, “Hey, Google!”—and natural language processing, a branch of machine learning that helps computers interpret text, they had their program transcribe the interviews and then encode the transcripts as numbers. A final model was trained to assess the likelihood and severity of an individual’s cognitive impairment using demographic data, the text encodings, and actual diagnoses from neurologists and neuropsychologists.
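The team’s code isn’t reproduced in the article, but the pipeline it describes (transcribe the interview, encode the text as numbers, then classify alongside demographic data against clinician diagnoses) can be sketched with common open-source tools. The encoder, the demographic features, and the classifier below are illustrative assumptions, not the study’s actual implementation.

```python
# Illustrative sketch only, not the authors' code. Assumes interview transcripts
# have already been produced by an automated speech recognizer.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer    # pretrained text encoder (assumed BERT-style)
from sklearn.linear_model import LogisticRegression  # simple stand-in classifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_transcript(text: str) -> np.ndarray:
    """Turn one interview transcript into a fixed-length numeric vector."""
    tokens = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**tokens).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0).numpy()      # mean-pool to a 768-dim vector

def train_model(transcripts, demographics, diagnoses):
    """Fit a classifier on text encodings plus demographics (e.g., age, education).

    diagnoses: clinician labels, e.g., 0 = normal, 1 = mild impairment, 2 = dementia.
    """
    text_vecs = np.stack([encode_transcript(t) for t in transcripts])
    X = np.hstack([text_vecs, np.asarray(demographics, dtype=float)])
    clf = LogisticRegression(max_iter=1000)
    return clf.fit(X, diagnoses)
```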
Paschalidis says the model was able not only to accurately distinguish between healthy individuals and those with dementia, but also to detect differences between those with mild cognitive impairment and those with dementia. And, it turned out, the quality of the recordings and how people spoke—whether their speech breezed along or consistently faltered—mattered less than the content of what they were saying.
“It surprised us that speech flow or other audio features are not that critical; you can automatically transcribe interviews reasonably well, and rely on text analysis through AI to assess cognitive impairment,” says Paschalidis, who’s also the new director of BU’s Rafik B. Hariri Institute for Computing and Computational Science & Engineering. Though the team still needs to validate its results against other sources of data, the findings suggest their tool could support clinicians in diagnosing cognitive impairment using audio recordings, including those from virtual or telehealth appointments.
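As a purely illustrative example of that transcription step, an off-the-shelf recognizer can turn a recorded interview into text. The SpeechRecognition package and the placeholder file name below are assumptions standing in for the online recognizers the article alludes to, not what the study used.

```python
# Illustrative sketch only, not the study's pipeline.
import speech_recognition as sr

def transcribe_interview(wav_path: str) -> str:
    """Transcribe a recorded interview with an off-the-shelf online recognizer."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)  # read the entire recording
    # Google's free web API; swap in another recognizer for long or noisy files
    return recognizer.recognize_google(audio)

# transcript = transcribe_interview("interview_recording.wav")  # placeholder path
```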
Screening before Symptom Onset
The model also provides insight into which parts of the neuropsychological exam might be more informative than others in determining whether an individual has impaired cognition. The researchers’ model splits the exam transcripts into different sections based on the clinical tests performed. They discovered, for instance, that the Boston Naming Test—during which clinicians ask individuals to label a picture using one word—is most informative for an accurate dementia diagnosis. “This might enable clinicians to allocate resources in a way that allows them to do more screening, even before symptom onset,” says Paschalidis.
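One simple way to ask which exam section carries the most signal is to score each section’s text on its own and compare. The sketch below assumes the transcripts are already split by clinical test and reuses a text encoder like the one above; the section names and the cross-validation scoring are illustrative choices, not the paper’s method.

```python
# Illustrative sketch only, not the paper's analysis. Assumes each participant's
# transcript has been split into named sections, one per clinical test.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_sections(section_texts, labels, encode_fn):
    """Rank exam sections by how well their text alone predicts the diagnosis.

    section_texts: dict mapping a section name (e.g., "Boston Naming Test")
                   to a list of per-participant transcript excerpts.
    labels:        clinician diagnoses, one per participant.
    encode_fn:     text-to-vector function such as encode_transcript above.
    """
    y = np.asarray(labels)
    scores = {}
    for name, texts in section_texts.items():
        X = np.stack([encode_fn(t) for t in texts])
        clf = LogisticRegression(max_iter=1000)
        # Mean 5-fold cross-validated accuracy as a rough measure of informativeness
        scores[name] = cross_val_score(clf, X, y, cv=5).mean()
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```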
Early diagnosis of dementia is not only important for patients and their caregivers, who need it to create an effective plan for treatment and support; it’s also crucial for researchers working on therapies to slow and prevent Alzheimer’s disease progression. “Our models can help clinicians assess patients in terms of their chances of cognitive decline,” says Paschalidis, “and then best tailor resources to them by doing further testing on those that have a higher likelihood of dementia.”
Want to Join the Research Effort?
The research team is looking for volunteers to take an online survey and submit an anonymous cognitive test—results will be used to provide personalized cognitive assessments and will also help the team refine their AI model.
Also contributing to this research were Samad Amini (ENG’24), Boran Hao (ENG’19,’24), and Lifu Zhang (CAS’22, ENG’22); Mengting Song, an ENG researcher; Aman Gupta (ENG’21), a BU Center for Information & Systems Engineering research assistant; Cody Karjadi (CAS’17, MET’20) of the Framingham Heart Study; Vijaya B. Kolachalama, a BU School of Medicine assistant professor; and Rhoda Au, a MED professor of anatomy and neurobiology. The work was supported by the National Science Foundation, Department of Energy, Office of Naval Research, National Institutes of Health, the Framingham Heart Study’s National Heart, Lung, and Blood Institute contract, National Institute on Aging, Alzheimer’s Association, Pfizer, Karen Toffler Charitable Trust, American Heart Association, and Boston University.