SciShow Psych
Can This AI Hear Alzheimer’s on the Phone?
YouTube: | https://youtube.com/watch?v=XzLgw2Y8gNw |
Previous: | Your Nose Does More Than You Give It Credit For |
Next: | Why Is It So Hard to Make a Decision? |
Categories
Statistics
View count: | 49,490 |
Likes: | 2,873 |
Comments: | 162 |
Duration: | 06:20 |
Uploaded: | 2021-12-16 |
Last sync: | 2024-12-07 22:45 |
Go to https://ground.news/scishowpsych to download the Ground News app. Get 60% off the Vantage subscription for unlimited access to the most advanced news analysis features. Offer ends 12/31.
It can be tough to diagnose Alzheimer's disease, but a team of researchers believes that artificial intelligence might be able to do it just by listening.
Hosted by: Hank Green
----------
Support SciShow Psych by becoming a patron on Patreon: https://www.patreon.com/SciShowPsych
SciShow is on TikTok! Check us out at https://www.tiktok.com/@scishow
----------
Become a Patron and have your name featured in the description of every SciShow Psych episode! https://www.patreon.com/SciShowPsych
----------
Looking for SciShow elsewhere on the internet?
SciShow Tangents Podcast: http://www.scishowtangents.org
Facebook: http://www.facebook.com/scishow
Twitter: http://www.twitter.com/scishow
Instagram: http://instagram.com/thescishow
----------
Sources:
https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0253988#sec023
https://www.alz.org/alzheimers-dementia/treatments/medications-for-memory
https://www.tandfonline.com/doi/abs/10.1080/02687038.2019.1608502
https://whatis.techtarget.com/definition/validation-set
https://alzheimersproject.org/the-reasons-for-and-risks-of-dementia-diagnosis-delay/
https://www.nhs.uk/conditions/alzheimers-disease/treatment/
Images:
https://www.istockphoto.com/photo/elderly-care-old-and-young-gm897821744-247737830
https://commons.wikimedia.org/wiki/File:Alzheimer%E2%80%99s_Disease,_Spreads_through_the_Brain_(24524716351).jpg
https://www.istockphoto.com/vector/woman-and-woman-speech-bubbles-gm475743102-65891903
https://www.istockphoto.com/vector/chatbot-banner-concept-horizontal-business-banner-template-with-illustration-of-man-gm1073043592-287195906
https://www.istockphoto.com/vector/chat-messages-notification-on-smartphone-vector-illustration-flat-cartoon-sms-gm1026973286-275366142
https://www.istockphoto.com/vector/artificial-intelligence-robot-character-gm1158038886-316196374
https://www.storyblocks.com/video/stock/senior-man-talking-at-park-sklw4k728kaxmci7x
https://www.istockphoto.com/vector/voice-assistant-sound-wave-microphone-voice-control-technology-voice-and-sound-gm1134910506-301735535
https://www.storyblocks.com/video/stock/abstract-white-audio-wave-on-black-background-seamless-loop-abstract-animation-hsdhx2mbmjkg1bnjg
https://www.istockphoto.com/photo/a-microglia-cell-in-the-foreground-it-plays-an-important-role-in-the-pathogenesis-of-gm1155014623-314293862
https://www.istockphoto.com/photo/artificial-intelligence-and-technology-gm1296904409-390190252
https://www.istockphoto.com/photo/mri-brain-scan-of-head-and-skull-with-hand-pointing-gm187272969-27670358
https://www.istockphoto.com/photo/close-up-shot-of-a-modern-silver-wireless-speaker-standing-on-a-table-at-home-with-gm1212406247-351954888
https://www.istockphoto.com/photo/doctor-with-a-tablet-computer-gm898328226-247859228
This episode is sponsored by Ground News, a new website and app that lets you compare how major events are being covered.
Head to ground.news/scishowpsych or click the link in the description to download the free app. [♪ INTRO] It can be tough to diagnose Alzheimer's disease. Catching it early can help to stave off more severe symptoms for longer, but that’s even more difficult.
People and their families may not notice the first symptoms, or they may chalk them up to normal aging. Even those who do notice something’s a little off may be too scared of the implications of those changes to bring them up with family or their doctor. And that’s without even getting into other barriers to healthcare access.
This means that those with Alzheimer’s often don’t get treatment or support in the earliest stages, which makes things worse for them in the long run. Treatment may delay the progression of the disease, and the sooner it starts, the better. The thing is, the route to diagnosis might not have to look like this.
What if there was a way to pick up signs of Alzheimer’s early in the course of disease, so we could slow it down right from the start? Well, that’s just what researchers in Japan have set out to do. They believe that using machine learning, we might be able to extract enough information from someone’s speech to identify early signs of Alzheimer’s.
This is because speech is one of the very first things to change in people with early Alzheimer’s. People tend to start speaking more slowly, and pause for thought more often, for example. To test whether a machine learning diagnostic approach might work, the team gathered audio data from both healthy participants and those who had been diagnosed with Alzheimer’s by a doctor.
They collected a total of 1,616 audio files: 1,465 from their 99 control participants, and 151 from their 24 participants with Alzheimer’s. These recordings were collected while participants chatted with an AI computer program that greeted them, then asked them to recount what happened to them the previous day in as much detail as possible in one minute. A whole minute of speech can tell us a lot about the state of someone’s cognitive abilities.
How fluent their speech is, their ability to come up with details, and so on, can be really good indicators of their overall cognitive function. That’s one of the main reasons that speech is used in a lot of current Alzheimer’s diagnostics. Participants did this every weekday for one to two months, giving the researchers a good chunk of data from each participant to work with.
Using specialized software, the researchers then extracted information about participant speech from their recordings. For example, how long they spoke, how often they paused, as well as the intensity, pitch and overall characteristics of the sounds recorded. All of this data was then plugged into three different algorithms, which all used slightly different computational methods to categorize the data they were given.
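To give a rough idea of what that kind of feature extraction can look like, here's a minimal Python sketch using the librosa library. The study used its own specialized software, so the library choice, silence threshold, and feature names below are assumptions, not the researchers' actual pipeline.

```python
# Hedged sketch: pull a few coarse speech features out of one recording.
# librosa is an assumption here, not the software used in the study.
import numpy as np
import librosa

def extract_speech_features(path, sr=16000, silence_db=30):
    y, sr = librosa.load(path, sr=sr)            # load mono audio
    duration = librosa.get_duration(y=y, sr=sr)

    # Speech vs. pause: regions quieter than `silence_db` below peak count as pauses.
    voiced_intervals = librosa.effects.split(y, top_db=silence_db)
    speech_time = sum(int(end - start) for start, end in voiced_intervals) / sr
    pause_time = duration - speech_time
    n_pauses = max(len(voiced_intervals) - 1, 0)

    # Intensity (RMS energy) and pitch (fundamental frequency via pYIN).
    rms = librosa.feature.rms(y=y)[0]
    f0, voiced_flag, _ = librosa.pyin(y, fmin=65, fmax=300, sr=sr)
    f0 = f0[~np.isnan(f0)]                        # keep only voiced frames

    return {
        "duration_s": duration,
        "speech_time_s": speech_time,
        "pause_time_s": pause_time,
        "n_pauses": n_pauses,
        "mean_intensity": float(np.mean(rms)),
        "mean_pitch_hz": float(np.mean(f0)) if f0.size else 0.0,
    }
```

Each recording then becomes one row of numbers like these, which is the form the classification algorithms need.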
The researchers fed the algorithms data from 1,308 audio files to train them on the differences between speech from those with and without Alzheimer’s. Then, they used data from the remaining 308 files to validate their predictive capabilities. Once the algorithms were all trained up, the researchers used them to make predictions about which audio files came from participants with Alzheimer’s.
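In outline, that train-then-validate step could look something like the sketch below. Only the extreme gradient boosting model is named in the episode; the other two classifiers and the feature table are illustrative stand-ins, not the study's actual setup.

```python
# Hedged sketch: train a few classifiers on speech features, hold out a validation set.
# X (one row of features per audio file) and y (1 = Alzheimer's group) are assumed to
# come from a feature-extraction step like the one sketched above.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier  # "extreme gradient boosting"

def train_models(X, y, n_validation=308, seed=0):
    # Roughly mirrors the split described here: most files for training,
    # the remaining 308 held out to check predictions.
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=n_validation, random_state=seed, stratify=y
    )

    models = {
        "xgboost": XGBClassifier(eval_metric="logloss"),
        # The other two algorithms aren't named in the episode; these stand-ins
        # just illustrate "three slightly different computational methods".
        "logistic_regression": LogisticRegression(max_iter=1000),
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=seed),
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name, "validation accuracy:", model.score(X_val, y_val))
    return models, (X_val, y_val)
```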
Those predictions were compared to scores from a more traditional method of diagnosing Alzheimer’s via audio: a screening test called the Telephone Interview for Cognitive Status, or TICS. The results showed that one algorithm in particular, the Extreme Gradient Boosting Model, may be better at identifying Alzheimer’s than the more traditionally used TICS. The algorithm rivaled TICS in giving no false negatives, meaning it didn’t miss any cases that were there.
And it did better than TICS in giving no false positives, meaning it didn’t say someone had it when they actually didn’t. In fact, around 16% of participants diagnosed via TICS were incorrectly categorized as having Alzheimer’s. This model was able to categorize all participants correctly, which is a pretty big improvement.
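For reference, "no false negatives" and "no false positives" fall straight out of a confusion matrix. This small sketch shows how those error rates are typically computed from a set of predictions; scikit-learn here is an assumption, not the paper's tooling.

```python
# Hedged sketch: false negatives/positives and the usual screening metrics.
from sklearn.metrics import confusion_matrix

def screening_errors(y_true, y_pred):
    # Rows = actual class, columns = predicted class; label 1 = Alzheimer's.
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "false_negatives": fn,   # missed cases
        "false_positives": fp,   # healthy people flagged
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

# A perfect screen, as described for the boosting model on this sample, would give
# sensitivity = specificity = 1.0: no misses and no false alarms.
```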
Remember, all the participants in the Alzheimer’s group had been diagnosed by a professional, using various standard criteria, so the researchers could make these comparisons. But even with this performance, the algorithm wasn’t significantly better than TICS, when statistical tests were applied. It’s important to note here that since this is still a pretty new approach, even getting close to significance is a promising result.
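One standard way to compare two screening methods applied to the same participants is McNemar's test on their disagreements; the sketch below is illustrative only, and not necessarily the statistical test the researchers actually ran.

```python
# Hedged sketch: comparing two screening methods on the same participants.
# McNemar's test is one common choice; it is NOT confirmed as the test used in the paper.
from statsmodels.stats.contingency_tables import mcnemar

def compare_screens(y_true, pred_algorithm, pred_tics):
    # Count, per participant, whether each method classified them correctly.
    algo_correct = [p == t for p, t in zip(pred_algorithm, y_true)]
    tics_correct = [p == t for p, t in zip(pred_tics, y_true)]

    both      = sum(a and b for a, b in zip(algo_correct, tics_correct))
    algo_only = sum(a and not b for a, b in zip(algo_correct, tics_correct))
    tics_only = sum(b and not a for a, b in zip(algo_correct, tics_correct))
    neither   = sum(not a and not b for a, b in zip(algo_correct, tics_correct))

    table = [[both, algo_only], [tics_only, neither]]
    result = mcnemar(table, exact=True)  # exact test suits a small sample like this one
    return result.pvalue
```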
We cannot draw firm conclusions about its effectiveness yet, but with some adjustments, this could be a really important technique. That said, since it’s still early days for this technology, there are a ton of limitations to this study. For example, those with Alzheimer’s can begin to rely on particular phrases when they’re struggling to fish out of their memory what they’d actually like to say.
And the fluency of those easy-to-reach-for phrases might be able to fool a machine a little better than a human, who would be more able to spot repetitive phrases or shifts in the subject than this particular method. The sample in this study was also pretty small, and based in Japan, so the caveat of needing to repeat this study with a larger, more diverse set of participants definitely applies. This particular study also used participants who had already been given an Alzheimer’s diagnosis by a human to train their algorithms.
And while that’s great for training, it also means we don’t yet know whether this approach, at least in its current state, could catch Alzheimer’s before a human could. Though that could become possible eventually. Where it would definitely be useful is catching Alzheimer’s in people with limited access to healthcare support.
With a little more finesse, the researchers hope that they may be able to develop tools that could make Alzheimer’s diagnostics more readily available. That might look like an app on your phone or even your virtual home assistant monitoring speech patterns of elderly people. Recognizing Alzheimer’s from speech using AI could help make diagnoses before more severe symptoms show up.
Even more importantly, it would be a low cost, easily accessible option, which would lower some of the barriers to speedy treatment and support. Which is great news for actually connecting treatments with the people who need them. This channel is all about the human brain and the ways we interact with the world.
How we consume news is also a big part of how we perceive the world around us. If you’re interested in seeing how a single news story is being covered across the political spectrum, you might be interested in checking out Ground News. Ground News is a new website and app that lets you compare how major events are being covered.
It’s a useful tool for curious people who want to cut through media bias and find the issues that matter most. You can see every side of every news story by going to ground.news/scishowpsych, or click the link in the description to download the free app. [♪ OUTRO]