


Last week we discussed test characteristics like sensitivity and specificity. This week, we'll talk about how those concepts get used in actual medical decision making. At least, we'll talk about how they're supposed to be used. You'll learn about Bayes' Theorem, too! Don't let the equations scare you. You're going to learn a ton.

John Green -- Executive Producer
Stan Muller -- Director, Producer
Aaron Carroll -- Writer
Mark Olsen -- Graphics
Last week, we talked about test characteristics like sensitivity and specificity. But while those things have meaning, it's not always clear how they translate into actual practice. To understand that, we have to talk about likelihood ratios and Bayes' Theorem. Stick with me--you're going to be glad you did. This is Healthcare Triage.

The problem with tests is that we think they're definitive. They're not. We think that positive means you have to have a problem, and that negative means you're safe. I showed you last week that lots of times you have false negatives and false positives, so it's a mistake to take a test and use that (and only that) to reach a conclusion. This kind of thinking completely ignores what we think about a patient when she walks through the door. You may remember "fake-itis" from last week. Maybe when we see our patient we're really sure she has "fake-itis". Then, a negative result should leave us concerned that it is a false negative. Maybe we thought there was almost no chance that she had the disease. In that case, a positive test may be more likely to be a false positive. Or maybe we weren't sure. Then either result might be important.

Bayes' Theorem, attributed to (get this) Thomas Bayes, takes this idea and turns it into a formula. Mathematically it says that the probability of A, given that B is true, is equal to the probability of B given A, times the probability of A, divided by the probability of B.

Hold on, hold on. Stay with me.

In this case, A is having "fake-itis", and B is a positive test result. So Bayes' Theorem says that the probability of having "fake-itis", given a positive test result is equal to the probability of having a positive test result if you have "fake-itis", times the probability of having "fake-itis", divided by the probability of having a positive test result. And I know that that's still complicated, but you don't really need to do the math here. You just need to understand the principle. You look at a patient. You determine based on her story and physical exam what the chance is that she has "fake-itis". You get a test. And then you do a calculation where you have a new chance that she has "fake-itis". And it involves sensitivity and specificity. You use them to calculate a likelihood ratio.
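To make the principle concrete, here's a toy calculation in Python. The numbers are made up for illustration; the video doesn't use code or specify values for "fake-itis."

```python
# Toy Bayes' Theorem check with made-up "fake-itis" numbers.
# A = has fake-itis, B = positive test result.
prevalence = 0.01    # P(A): assumed prevalence of fake-itis
sensitivity = 0.90   # P(B|A): chance of a positive test if you have it
specificity = 0.80   # chance of a negative test if you don't

# P(B): overall chance of a positive test (true positives + false positives)
p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)

# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_positive = sensitivity * prevalence / p_positive
print(round(p_disease_given_positive, 3))  # → 0.043
```

Even with a pretty good test, a positive result here only gets you to about a 4 percent chance of disease, because the disease is rare. That's the whole point of what follows.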

A likelihood ratio is the probability that a test result is correct, divided by the probability that it is incorrect. They come in two flavors, positive and negative, to assess the value of a positive and a negative test result. A positive likelihood ratio equals sensitivity divided by (1 minus specificity). A negative likelihood ratio equals (1 minus sensitivity) divided by specificity. Bayes' Theorem works out so that post-test odds equal pre-test odds times the likelihood ratio. And I grant you that odds are not the same as probabilities--odds are the probability of something being true, over the probability of something being false.

I know you're likely overwhelmed, so let's work through a real-world example. Last week, we found that mammograms had a sensitivity of 83.8 percent and a specificity of 90.6 percent. So first, we can calculate our likelihood ratios. The positive likelihood ratio equals sensitivity divided by (1 minus specificity). That means it's 0.838 over (1 minus 0.906), or 8.9. The negative likelihood ratio is (1 minus sensitivity) divided by specificity, or (1 minus 0.838) over 0.906, or 0.18.
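Those two calculations are easy to check in a few lines of Python (a sketch, not something from the video):

```python
# Mammogram test characteristics from the episode.
sensitivity = 0.838
specificity = 0.906

# Positive LR: how much a positive result raises the odds.
lr_positive = sensitivity / (1 - specificity)   # 0.838 / 0.094
# Negative LR: how much a negative result lowers the odds.
lr_negative = (1 - sensitivity) / specificity   # 0.162 / 0.906

print(round(lr_positive, 1))  # → 8.9
print(round(lr_negative, 2))  # → 0.18
```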

Now let's assume a woman comes into the office. She's really worried about breast cancer. We know, based on research and that paper, that about half a percent of women in this population had breast cancer. So the pre-test probability, based just on prevalence, is 0.5 percent. That means there's a 99.5 percent chance that she doesn't have breast cancer. So her pre-test odds look like this. Odds are the probability something is true over the probability something is false. In this case, 0.005 over 0.995, or about 0.005.

And here we see one of the neat tricks about pre-test odds that makes life easier. For most pretty rare things, pre-test odds pretty much equal pre-test probability. Even at a pre-test probability of 10 percent, the pre-test odds are only about 0.11. So a lot of the time you can just estimate pre-test odds to be the pre-test probability.
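You can see the shortcut numerically in a quick sketch:

```python
# For rare conditions, odds = p / (1 - p) barely differ from p itself.
for p in (0.005, 0.01, 0.05, 0.10):
    odds = p / (1 - p)
    print(f"probability {p:.3f} -> odds {odds:.4f}")
# Even at a probability of 10 percent, the odds are only about 0.11.
```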

So let's say the mammogram is positive. According to Bayes' Theorem, post-test odds equal pre-test odds times the likelihood ratio. In this case, post-test odds equal 0.005 times 8.9. The post-test odds are therefore 0.045, or 4.5 percent. You can convert this back to a probability with the equation: probability equals odds over (1 plus odds). Probability in this case equals 0.045 over (1 plus 0.045), or 4.3 percent.
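That whole chain, sketched in Python (again, not from the video):

```python
pre_test_prob = 0.005            # 0.5 percent prevalence
lr_positive = 8.9                # positive likelihood ratio for the mammogram

pre_test_odds = pre_test_prob / (1 - pre_test_prob)  # ~0.005
post_test_odds = pre_test_odds * lr_positive         # ~0.045
post_test_prob = post_test_odds / (1 + post_test_odds)

print(round(post_test_prob, 3))  # → 0.043, i.e. about 4.3 percent
```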

Let that sink in for a second. The average woman who has a positive mammogram has about a 4 percent chance of having breast cancer. There's a 96 percent chance that she doesn't have breast cancer. And yet every woman I know who has a concerning mammogram immediately freaks out. That's because most people think that a positive mammogram means you have cancer. It doesn't. It means that you have about a 4 percent chance of having cancer. I bet most of you thought you had at least a 4 percent chance of having cancer before the mammogram was even done.

Now this changes based on clinical suspicion. If you had a lump or a really concerning history, such that your doctor thought you had a pre-test probability of 30 percent, instead of the background half of 1 percent, then your equations change. Your pre-test odds would still be the probability of something being true over the probability of something being false. In this case, 0.3 over 0.7, or 0.43. A positive mammogram would mean this--the post-test odds are still the pre-test odds times the likelihood ratio, or 0.43 times 8.9, or 3.8. And we still convert back to probability the same way. Probability equals odds over (1 plus odds), or 3.8 over (1 plus 3.8), for a final probability of 79 percent.
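Same arithmetic, higher pre-test probability; here's the sketch:

```python
pre_test_prob = 0.30             # 30 percent clinical suspicion
lr_positive = 8.9                # positive likelihood ratio for the mammogram

pre_test_odds = pre_test_prob / (1 - pre_test_prob)  # ~0.43
post_test_odds = pre_test_odds * lr_positive         # ~3.8
post_test_prob = post_test_odds / (1 + post_test_odds)

print(round(post_test_prob * 100))  # → 79 (percent)
```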

This woman with a positive mammogram would have almost an 80 percent chance of having breast cancer. This is why mammograms can be a really powerful tool for women who are at high risk, but of debatable value for women who aren't.

As a thought experiment, I want you to see what a negative mammogram would have meant for this woman. Let's start at the beginning again, remembering her pre-test probability is 30 percent. We remember that her pre-test odds were 0.43, and we use Bayes' equation again, but with the negative likelihood ratio, since the test is negative. Post-test odds equal pre-test odds times the likelihood ratio, or in this case 0.43 times 0.18, for final post-test odds of 0.077. Going back to a probability, it's odds over (1 plus odds), or 0.077 over (1 plus 0.077), for a final probability of 7.1 percent.
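And the negative-test version in code (just a sketch, using the mammogram numbers from earlier):

```python
pre_test_prob = 0.30                 # 30 percent clinical suspicion
lr_negative = (1 - 0.838) / 0.906    # ~0.18, from sensitivity and specificity

pre_test_odds = pre_test_prob / (1 - pre_test_prob)   # ~0.43
post_test_odds = pre_test_odds * lr_negative          # ~0.077
post_test_prob = post_test_odds / (1 + post_test_odds)

print(round(post_test_prob * 100, 1))  # → 7.1 (percent)
```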

So after a negative mammogram, this woman still has a 7 percent chance of having breast cancer. This woman probably feels reassured by her negative mammogram. But her chance of having breast cancer after the negative test, at more than 7 percent, is about twice as high as that of the first woman with the positive mammogram, at about 4 percent. But that woman's probably freaking out, and she's at like half the risk.

This is because people don't think about tests appropriately. Unfortunately, too few doctors do, either. It's not the way we're trained to think. We think that positive means you've got the disease, and negative means you're safe. It's just too bad.

Now, some of you may complain that although this makes a lot of sense, it's just too hard to calculate. So I'll show you a shortcut. This is a Fagan nomogram. You start by finding your pre-test probability on the left. Then you draw a straight line through the likelihood ratio, and wind up at the post-test probability. Once you know the likelihood ratio of a test, it's easy to use.
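If you'd rather compute than read numbers off a chart, the nomogram's straight line is equivalent to this small function (a sketch; the video only shows the graphical version):

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Fagan-nomogram math: pre-test probability + LR -> post-test probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The two positive-mammogram cases from the episode (LR+ = 8.9):
print(round(post_test_probability(0.005, 8.9) * 100, 1))  # → 4.3 percent
print(round(post_test_probability(0.30, 8.9) * 100))      # → 79 percent
```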

So for the mammogram, the positive likelihood ratio was 8.9. So let's try it for a few numbers. First is the one we did, with a pre-test probability of 0.5 percent. Drawing a line from there through 8.9 gets us to about the 4 percent we calculated. You can do this for any pre-test probability. You can also do this for a negative mammogram, with a likelihood ratio of about 0.2.

And there are studies which list the likelihood ratios of many, many tests. The Center for Evidence-Based Medicine in Toronto has tables of them at their website (link down below).

Go knock yourself out, and see what a positive and negative test really means. And stop assuming that a positive test means disease, and a negative test means you're clear. That's not how it works. The vast majority of us are doing it wrong.