


Click the link in the description to download the Ground News app. Get 60% off the Vantage subscription for unlimited access to the most advanced news analysis features. Offer ends 12/31.

It can be tough to diagnose Alzheimer's disease, but a team of researchers believes that artificial intelligence might be able to do it just by listening.

Hosted by: Hank Green
Support SciShow Psych by becoming a patron on Patreon:

SciShow is on TikTok! Check us out at
Become a Patron and have your name featured in the description of every SciShow Psych episode!
Looking for SciShow elsewhere on the internet?
SciShow Tangents Podcast:

This episode is sponsored by Ground News, a new website and app that lets you compare how major events are being covered.

Click the link in the description to download the free app. [♪ INTRO] It can be tough to diagnose Alzheimer's disease. Catching it early can help to stave off more severe symptoms for longer, but that’s even more difficult.

People and their families may not notice the first symptoms, or they may chalk them up to normal aging. Even those that do notice something’s a little off may be too scared of the implications of those changes to bring them up with family or their doctor. And that’s without even getting into other barriers to healthcare access.

This means that those with Alzheimer’s often don’t get treatment or support in the earliest stages, which makes things worse for them in the long run. Treatment may delay the progression of the disease, and the sooner it starts, the better. The thing is, the route to diagnosis might not have to look like this.

What if there was a way to pick up signs of Alzheimer’s early in the course of disease, so we could slow it down right from the start? Well, that’s just what researchers in Japan have set out to do. They believe that using machine learning, we might be able to extract enough information from someone’s speech to identify early signs of Alzheimer’s.

This is because speech is one of the very first things to change in people with early Alzheimer’s. People tend to start speaking more slowly, and pause for thought more often, for example. To test whether a machine learning diagnostic approach might work, the team gathered audio data from both healthy participants and those who had been diagnosed with Alzheimer’s by a doctor.

They collected a total of 1,616 audio files: 1,465 from their 99 control participants, and 151 from their 24 participants with Alzheimer’s. These recordings were collected while participants chatted with an AI computer program that greeted them, then asked them to recount what had happened to them the previous day, in as much detail as possible, in one minute. A whole minute of speech can tell us a lot about the state of someone’s cognitive abilities.

How fluent their speech is, their ability to come up with details, and so on, can be really good indicators of their overall cognitive function. That’s one of the main reasons that speech is used in a lot of current Alzheimer’s diagnostics. Participants did this every weekday for one to two months, giving the researchers a good chunk of data from each participant to work with.

Using specialized software, the researchers then extracted information about each participant’s speech from the recordings: for example, how long they spoke, how often they paused, and the intensity, pitch, and overall characteristics of the recorded sounds. All of this data was then plugged into three different algorithms, which all used slightly different computational methods to categorize the data they were given.
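The study used specialized acoustic software, and its exact pipeline isn’t public, but a minimal NumPy sketch shows how timing features like pause counts and intensity can be pulled from raw audio. The frame size and silence threshold here are illustrative choices, not the study’s published parameters.

```python
import numpy as np

def speech_features(signal, sr, frame_ms=25, silence_thresh=0.02):
    """Extract rough speech-timing features from a mono audio signal.

    Energy-based: frames whose RMS intensity falls below
    `silence_thresh` are treated as pauses. (Frame length and
    threshold are illustrative, not the study's actual values.)
    """
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))   # per-frame intensity
    voiced = rms > silence_thresh
    # A pause begins at each voiced -> silent transition.
    pause_count = int(np.sum(np.diff(voiced.astype(int)) == -1))
    return {
        "duration_s": len(signal) / sr,
        "voiced_fraction": float(voiced.mean()),
        "pause_count": pause_count,
        "mean_intensity": float(rms.mean()),
    }
```

A real system would add pitch tracking and spectral features on top of these timing measures, but the idea is the same: turn each one-minute recording into a fixed-length vector of numbers that a classifier can work with.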

The researchers fed the algorithms data from 1,308 audio files to train them on the differences between speech from those with and without Alzheimer’s. Then, they used data from the remaining 308 files to validate their predictive capabilities. Once the algorithms were all trained up, the researchers used them to make predictions about which audio files came from participants with Alzheimer’s.
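The transcript doesn’t name the exact training setup, but the train-then-validate workflow it describes can be sketched with scikit-learn. The features below are synthetic stand-ins (the real data isn’t public), and gradient boosting here stands in for the study’s XGBoost model.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the study's speech features, e.g.
# (speech rate, pauses per minute). Illustrative only.
rng = np.random.default_rng(0)
controls = rng.normal(loc=[3.0, 5.0], scale=0.5, size=(200, 2))
patients = rng.normal(loc=[2.0, 9.0], scale=0.5, size=(200, 2))
X = np.vstack([controls, patients])
y = np.array([0] * 200 + [1] * 200)  # 1 = Alzheimer's group

# Hold out a validation set, as the study did
# (1,308 files for training, 308 for validation).
X_tr, X_val, y_tr, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Gradient boosting as a stand-in for the paper's
# Extreme Gradient Boosting (XGBoost) model.
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_val, y_val)
```

Because the algorithm only ever sees the held-out files at validation time, its accuracy there is an estimate of how it would do on recordings from people it was never trained on.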

Those predictions were compared to scores from a more traditional method of diagnosing Alzheimer’s via audio: a screening test called the Telephone Interview for Cognitive Status, or TICS. The results showed that one algorithm in particular, the Extreme Gradient Boosting Model, may be better at identifying Alzheimer’s than the more traditionally used TICS. The algorithm rivaled TICS in giving no false negatives, meaning it didn’t miss any cases that were there.

And it did better than TICS in giving no false positives, meaning it never said someone had Alzheimer’s when they actually didn’t. In fact, around 16% of participants screened via TICS were incorrectly categorized as having Alzheimer’s. This model was able to categorize all participants correctly, which is a pretty big improvement.
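A small helper makes the distinction between these two error types concrete. The labels below are made-up toy examples, not the study’s data.

```python
def screening_metrics(y_true, y_pred):
    """Confusion counts for a binary screen (1 = Alzheimer's, 0 = healthy)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return {
        "false_negatives": fn,          # real cases the screen missed
        "false_positives": fp,          # healthy people it flagged
        "sensitivity": tp / (tp + fn),  # 1.0 means no missed cases
        "specificity": tn / (tn + fp),  # 1.0 means no false alarms
    }
```

In these terms, both the algorithm and TICS reached a sensitivity of 1.0 (no false negatives), but only the algorithm also reached a specificity of 1.0 (no false positives).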

Remember, all the participants in the Alzheimer’s group had been diagnosed by a professional, using various standard criteria, so the researchers could make these comparisons. But even with this performance, the algorithm wasn’t significantly better than TICS when statistical tests were applied. It’s important to note here that since this is still a pretty new approach, even getting close to significance is a promising result.

We cannot draw firm conclusions about its effectiveness yet, but with some adjustments, this could be a really important technique. That said, since it’s still early days for this technology, there are a ton of limitations to this study. For example, those with Alzheimer’s can begin to rely on particular phrases when they’re struggling to fish what they’d actually like to say out of their memory.

And the fluency of those easy-to-reach-for phrases might be able to fool a machine a little better than a human, who would be better at spotting repetitive phrases or shifts in subject than this particular method. The sample in this study was also pretty small, and based in Japan, so the caveat of needing to repeat this study with a larger, more diverse set of participants definitely applies. This particular study also used participants who had already been given an Alzheimer’s diagnosis by a human to train the algorithms.

And while that’s great for training, it means we don’t yet know whether this approach, at least in its current state, could catch Alzheimer’s before a human could. Though, that could be possible eventually. Where it would definitely be useful is catching Alzheimer’s in those with limited access to healthcare support.

With a little more finesse, the researchers hope that they may be able to develop tools that could make Alzheimer’s diagnostics more readily available. That might look like an app on your phone or even your virtual home assistant monitoring speech patterns of elderly people. Recognizing Alzheimer’s from speech using AI could help make diagnoses before more severe symptoms show up.

Even more importantly, it would be a low cost, easily accessible option, which would lower some of the barriers to speedy treatment and support. Which is great news for actually connecting treatments with the people who need them. This channel is all about the human brain and the ways we interact with the world.

How we consume news is also a big part of how we perceive the world around us. If you’re interested in seeing how a single news story is being covered across the political spectrum, you might be interested in checking out Ground News. Ground News is a new website and app that lets you compare how major events are being covered.

It’s a useful tool for curious people who want to cut through media bias and find the issues that matter most. You can see every side of every news story by clicking the link in the description to download the free app. [♪ OUTRO]