Diagnoses, Predictive Values, and Whether You're Sick or Not: NPV and PPV
YouTube: https://youtube.com/watch?v=dHj7ygeqelw
Previous: Don't Panic! Train Travel and Laundry Pods Are Unlikely to Kill You. Statistically.
Next: Healthcare Triage LIVE - 2015-06-03
Categories: Statistics
View count: 50,006
Likes: 1,241
Comments: 79
Duration: 08:00
Uploaded: 2015-06-01
Last sync: 2024-12-24 11:45
Housekeeping first:
1) You can support Healthcare Triage on Patreon: http://vid.io/xqXr Every little bit helps make the show better!
2) Check out our Facebook page: http://goo.gl/LnOq5z
We've covered test characteristics like sensitivity and specificity before, and we've even covered how you can use those values to make decisions about what results mean for your health. But sensitivity and specificity are difficult for many people to understand.
What people want to know are other values, what we might call the positive predictive and negative predictive values. But there are problems with those metrics. They're the topic of this week's Healthcare Triage.
For those of you who want to read more, go here: http://theincidentaleconomist.com/wordpress/?p=63074
John Green -- Executive Producer
Stan Muller -- Director, Producer
Aaron Carroll -- Writer
Mark Olsen -- Graphics
http://www.twitter.com/aaronecarroll
http://www.twitter.com/crashcoursestan
http://www.twitter.com/johngreen
http://www.twitter.com/olsenvideo
We've covered test characteristics like sensitivity and specificity before, and we've even covered how you can use those values to make decisions about what results mean for your health.
But sensitivity and specificity are difficult for many people to understand. What people want to know are other values, what we might call the positive and negative predictive values. But there are problems with those metrics, and they're the topic of this week's Healthcare Triage.
(intro plays)
In case you haven't watched the Sensitivity and Specificity episode or you've forgotten some of the details, here's the TLDR version. Sensitivity refers to the percentage of people who have a disease who will have a positive test. Specificity refers to the percentage of people who don't have a disease who will test negative.
That's not what people usually want to know. They want to know, if they have a positive result, what percentage of people with that result have the disease. Or, if they have a negative result, what percentage of people with that result are healthy. They want to know what those tests mean. To talk about this, let's go back to the box we used in our previous episode.
If we take people and we group them by whether or not they have a disease and whether or not they have a positive test, everyone falls into one of four boxes, A through D. Sensitivity is A divided by A plus C, times 100. Again, it's the proportion of people who have a disease and test positive, A, relative to all people who have the disease, A plus C. Specificity is D over D plus B, times 100. It's the proportion of people who are healthy and test negative, or D, relative to all people who don't have the disease, B plus D.
But here in the real world, what happens is that you get a positive result on a test and then you panic. You want to know what's the chance you're sick. Looking at the box, what you want to know is, out of all people with a positive test, A plus B, what's the chance you're sick, or A. This is the positive predictive value. It's A over A plus B, times 100. Or, back in the real world again, you get a negative test, and you want to know if you're in the clear. You want to know, out of all people with a negative test, or C plus D, what's the chance that you're healthy, or D. This is the negative predictive value. It's D over C plus D, times 100.
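For readers who'd rather see that arithmetic as code, here's a minimal sketch (ours, not the show's) mapping the four box counts A through D to the four metrics just described:

```python
# A minimal sketch (not from the episode): the four test metrics,
# computed from the 2x2 box counts described above.
# A = diseased, test positive    B = healthy, test positive
# C = diseased, test negative    D = healthy, test negative

def sensitivity(a, c):
    """Percent of diseased people who test positive: A / (A + C) * 100."""
    return a / (a + c) * 100

def specificity(d, b):
    """Percent of healthy people who test negative: D / (D + B) * 100."""
    return d / (d + b) * 100

def ppv(a, b):
    """Percent of positive tests that are true positives: A / (A + B) * 100."""
    return a / (a + b) * 100

def npv(d, c):
    """Percent of negative tests that are true negatives: D / (C + D) * 100."""
    return d / (c + d) * 100
```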
Now if these values are what people want, why didn't I do an episode on them back when I covered sensitivity, specificity, and Bayes' Theorem, you might ask. The reason is that in many, many cases, positive predictive value and negative predictive value are flawed. They're not great, because they're very heavily dependent on prevalence.
This means that their values can be swayed a lot by how common the disease is in the population being tested. I want to explain this to you in a few ways. Let's start with a scam.
Let's say I have a new test for fakitus, which is our disease of choice here on Healthcare Triage. But in reality, my testing device is a coin flip. If it comes up heads, I tell you you're positive, and if it comes up tails, I tell you you're negative.
Let's say I run a study in a population where fakitus is common, say 90% of people have it, and the study involves 1000 people. Here's the box. Since the prevalence of fakitus is 90%, 900 of the 1000 people have the disease and 100 do not. And since it's a coin flip, half of each group has a positive test and half has a negative test. Sensitivity is 450 over 450 plus 450, times 100, or 50%. Specificity is 50 over 50 plus 50, times 100, also 50%. Sensitivity and specificity have made it clear that this is a scam. They know the test sucks.
But positive predictive value would be 450 over 450 plus 50, times 100, or 90%. That sounds great. If I told you that, you'd think this is a great test. And let's say that you bought the test for your office and started using it.
But in the real world, fakitus is much rarer. Let's say that only 10% of people have it, and if you saw a thousand patients, your box would look like this. Prevalence is 10%, so 100 of the 1000 people have the disease and 900 do not. And since it's still a coin flip, half of each group has a positive test and half has a negative test. But now sensitivity is 50 over 50 plus 50, times 100, still 50%. Specificity is 450 over 450 plus 450, times 100, still 50%.
Sensitivity and specificity are isolated from prevalence. They're the same in pretty much every population you look at, and they're consistent. That's why we value them so much. They tell us a lot about the test.
But in this population, positive predictive value is 50 over 50 plus 450, times 100, or 10%. That's not nearly as good as the 90% we saw in the study. Positive predictive value and negative predictive value are heavily dependent on prevalence.
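To make that prevalence dependence concrete, here's a small sketch (assuming the helper functions from the block above) that rebuilds both coin-flip boxes and prints all four metrics side by side:

```python
def coin_flip_box(n, prevalence):
    """Build the 2x2 box for a 'test' that is just a coin flip.

    Half of each group (diseased and healthy) tests positive, so the
    box counts depend only on prevalence.
    """
    diseased = n * prevalence
    healthy = n - diseased
    a, c = diseased / 2, diseased / 2   # diseased: half positive, half negative
    b, d = healthy / 2, healthy / 2     # healthy: half positive, half negative
    return a, b, c, d

for prev in (0.9, 0.1):
    a, b, c, d = coin_flip_box(1000, prev)
    print(f"prevalence {prev:.0%}: "
          f"sens {sensitivity(a, c):.0f}%, spec {specificity(d, b):.0f}%, "
          f"PPV {ppv(a, b):.0f}%, NPV {npv(d, c):.0f}%")
# prevalence 90%: sens 50%, spec 50%, PPV 90%, NPV 10%
# prevalence 10%: sens 50%, spec 50%, PPV 10%, NPV 90%
```

Sensitivity and specificity sit at 50% in both populations; only the predictive values move.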
Positive predictive value and negative predictive value are only useful if the prevalence of the disease in question is exactly the same in the study population as in the population you treat. But the populations we study are very different from the populations we treat.
If the people studied when a test is developed are sick, then the positive predictive value and negative predictive value only have meaning if we use the test on populations who are sick. This is the mathematical reason why tests shouldn't be ordered outside of the populations they were studied in.
We know the positive predictive value of a white blood cell test when you're sick. In other words, we know the percentage of people who have an abnormal test who are sick in that population. The positive predictive value of a white blood cell test, or the meaning of a positive result, when you're well would be totally different. That's why you shouldn't order a test unless you know for sure how it should be interpreted in the population that it's meant for. Otherwise the results, positive and negative, are sort of meaningless.
Positive predictive value and negative predictive value can also be wildly different from what we might think, even in the real world. Let's go back to the same example I used in the Sensitivity and Specificity video. It's a study of mammograms in the general population.
The study involved 141,284 women. Of them, 728 were disease positive. The test had a sensitivity of 83.8% and a specificity of 90.6%, which is decent. This gets you this box.
For screening mammograms in the general population, the positive predictive value, therefore, is 610 over 610 plus 13,212, times 100, or 4.4%. In other words, of those who have a positive mammogram, only about 4% actually have breast cancer. The negative predictive value is 127,344 divided by 118 plus 127,344, times 100, or 99.9%. That means if you have a negative mammogram, you're almost positively free of breast cancer.
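The same arithmetic, run on the study's counts (again a sketch assuming the helper functions defined earlier; counts rounded as in the episode):

```python
# Rebuilding the mammogram box from the study's counts.
n, diseased = 141_284, 728
a = 610                      # true positives: ~83.8% of 728
c = diseased - a             # false negatives: 118
d = 127_344                  # true negatives: ~90.6% of the 140,556 healthy
b = (n - diseased) - d       # false positives: 13,212

print(f"PPV {ppv(a, b):.1f}%")   # -> PPV 4.4%
print(f"NPV {npv(d, c):.1f}%")   # -> NPV 99.9%
```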
That's how screening tests are supposed to function. A negative test pretty much totally clears you. A positive test needs further workup, or a biopsy in this case, to prove that you're sick. But still the vast majority of positive screening tests are nothing. That's why you shouldn't freak out necessarily when a screening test turns out to be positive.
But more importantly, positive and negative predictive values are of little use in many, many situations. If tests are ordered improperly, especially on healthy people, they're pretty much useless.
Sensitivity and specificity are where it's at, and using them, along with Bayes' Theorem, can lead to much more powerful and helpful numbers.
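As one illustration of those "more powerful numbers," here's a hedged sketch of Bayes' Theorem turning sensitivity, specificity, and whatever prevalence applies to your population into a post-test probability. The function name is ours, not the show's:

```python
def post_test_probability(sens, spec, prevalence):
    """P(disease | positive test) via Bayes' Theorem.

    P(D|+) = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    """
    true_pos = sens * prevalence             # P(positive and diseased)
    false_pos = (1 - spec) * (1 - prevalence)  # P(positive and healthy)
    return true_pos / (true_pos + false_pos)

# The mammogram's characteristics, applied at two different prevalences:
for prev in (728 / 141_284, 0.10):
    p = post_test_probability(0.838, 0.906, prev)
    print(f"prevalence {prev:.1%}: chance of cancer given a positive test = {p:.1%}")
# prevalence 0.5%: ~4.4% (matches the screening PPV above)
# prevalence 10.0%: ~49.8% (the same test, in a sicker population)
```

The same test characteristics give wildly different post-test probabilities depending on who's being tested, which is the whole point.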
(outro plays)
Healthcare Triage is supported in part by viewers like you through Patreon, a service that allows you to support the show through a monthly donation. We'd like to thank all our Patreon supporters in general, and thank our honorary research associate Cameron Alexander specifically. Thanks, Cameron. To learn how you can become a patron, go to Patreon.com/healthcaretriage.