One interesting twist on the doctor example: we know she is almost always right that it's nothing, that it will all go away, and that two aspirin are fine. The article correctly points out that she will miss the one or two cases where it was a terrible disease instead, and that her prescription of aspirin and rest will be misguided.
However... doctors must do more than just cure you. They must also "do no harm"; in fact that is (or should be) their default. What if she intervened more directly in more cases, maybe poked and prodded and recommended more invasive treatments? She would get more cases wrong in the opposite direction (recommending a potentially invasive or even harmful treatment when some rest and an aspirin would have sufficed), maybe resulting in accidental death through action rather than inaction.
She must be alert, but hers is a good default/heuristic. It's not the same as a rock with "TAKE ASPIRIN" written on it.
And this is just an example. I think the heuristics that work 99.9% of the time do so because they do indeed work. Erring in the opposite direction can, in some cases, also be harmful.
The problem is that people are not statistics. It may sound reasonable on the surface to say that this heuristic minimizes harm on average because she doesn't perform unnecessary interventions on the 99.9%. However, there are still actual human beings in the 0.1% who are harmed. What you're really saying is that if a group of people is small enough, it's fair for them to suffer preventable harm if preventing it would expose the larger group of people to risk.
I'm not going to argue about whether that is true or not, because I think that clearly depends on many factors and may be unanswerable. But as a member of a minority group that is often denied health care, I can tell you it is often denied for exactly this reason: if the wrong person were prescribed the treatment, it would be harmful. I'm just saying that when you're in the 0.1%, it can be difficult to accept the idea that you have to sacrifice yourself because someone in the 99.9% might otherwise be put at risk.
There are an unfathomable number of possible things that could be wrong with you at any one time. Each of them might be absent in 99% of people and present in only 1%.
But the 1% is not the same group for every disease. If you perform interventions on everyone for every disease, then even the people who really are in the 1% for one disease get unnecessary interventions for all of the other diseases they don't have.
Now you've given everyone weird cancers because you've done thousands of x-rays and CT scans for all manner of things.
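A rough way to see the compounding, as a sketch with invented numbers (the per-scan risk and the scan count are pure assumptions, not real estimates):

    # Compounding risk of scanning everyone for everything (illustrative numbers only).
    per_scan_added_cancer_risk = 0.0005   # assumed added lifetime risk per CT scan
    scans_over_a_lifetime = 20            # one "rule it out" scan per vague complaint

    cumulative = 1 - (1 - per_scan_added_cancer_risk) ** scans_over_a_lifetime
    print(f"Added cancer risk after {scans_over_a_lifetime} scans: {cumulative:.2%}")
    # roughly 1% of extra risk, applied to every single person scanned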
You have no way to know if you're in the 0.1%. It's not written on your body anywhere. So if an early test saves 1 in 1,000 from dying, but the false positives it generates kill 3 in 1,000, then the false positives are more dangerous than the disease you may or may not have.
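Using those same numbers (purely illustrative, taken straight from the sentence above):

    # Net effect per 1,000 people screened (illustrative).
    deaths_prevented_per_1000 = 1            # the early test catches 1 case that would have been fatal
    deaths_from_false_positives_per_1000 = 3  # harm from chasing false positives

    net_lives_saved = deaths_prevented_per_1000 - deaths_from_false_positives_per_1000
    print(f"Net lives saved per 1,000 screened: {net_lives_saved}")  # -2: screening does net harm here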
I am arguing that the doctor must remain alert and not be lazy (doctors are often overworked and tired, but that's a different problem), but that her default of "aspirin and rest" is a good one.
This article is a little misleading because it conflates a few things.
The low-base-rate prediction problem is a problem not just because of lazy application; it's that in some situations the numbers make it impossible to do anything else. With a low enough base rate, you need a preternaturally good indicator to justify anything but a negative prediction.
Then you have to resort to utility theory and decide if false positives are worth the cost.
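To put numbers on the base-rate point (made-up figures, a sketch rather than real test characteristics): even a test that is 99% sensitive and 99% specific is mostly wrong on its positives when the condition affects only 0.1% of patients.

    # Positive predictive value under a low base rate (illustrative numbers).
    base_rate = 0.001      # 0.1% of patients actually have the condition
    sensitivity = 0.99     # P(positive | condition)
    specificity = 0.99     # P(negative | no condition)

    p_positive = sensitivity * base_rate + (1 - specificity) * (1 - base_rate)
    ppv = (sensitivity * base_rate) / p_positive
    print(f"P(condition | positive test) = {ppv:.1%}")  # ~9%: most positives are false

So even a very good indicator is mostly wrong when it says "positive," which is why "predict negative" dominates and the real question becomes the cost of the misses versus the cost of the false alarms.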
Incidentally, the hiring example is poor because it's just not the same situation at all. The fact that he's equating it with the other scenarios maybe says as much about the real problem as the scenario itself does.
I think this is a little different - it's "doing the math." The best contemporary example, I think, is early/frequent screening such as mammograms or prostate cancer checks. If we look at how often what we can detect will actually develop into a threat to health, and compare that to the consequences of treating it at that early stage, we may determine that under some conditions our diagnostics kill more people than the disease would, and therefore we shouldn't do them.
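Here's what "doing the math" might look like in the simplest possible form (every number below is invented; real screening analyses are far more careful):

    # Toy comparison of screening vs. not screening (all numbers invented).
    population = 100_000
    has_detectable_lesion = 0.02   # fraction with something a scan can find
    ever_progresses = 0.01         # fraction of those lesions that would ever threaten health
    untreated_mortality = 0.5      # chance a progressing, untreated lesion is fatal
    treatment_mortality = 0.01     # chance the treatment itself (surgery, radiation) is fatal

    # No screening: only the lesions that actually progress can kill.
    deaths_without_screening = population * has_detectable_lesion * ever_progresses * untreated_mortality

    # Screening: assume every detected lesion gets treated and treatment always cures,
    # but everyone treated (including the 99% who never needed it) bears the treatment risk.
    deaths_with_screening = population * has_detectable_lesion * treatment_mortality

    print(deaths_without_screening, deaths_with_screening)  # 10.0 vs 20.0: screening loses here

Flip any one of those numbers (a deadlier disease, a safer treatment, a better way to tell which lesions progress) and the conclusion flips with it, which is exactly why the answer changes over time as diagnostics improve.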
That's different from not treating people at all - even if in practice it amounts to not treating people at all - because the reason you're not doing it is that your diagnostics and treatments aren't good enough yet.
Not to mention that "take two aspirin and let me know if it doesn't get better" is probably literally the best way to handle a case that could be something serious but where there's no feasible way yet to discern whether it is. Obviously the specifics of this type of scenario can change over time, e.g. if MRIs or CT scans became very cheap and easy to administer, I suspect they would become part of these routine exams.
Agreed. I'm a hypochondriac who can manifest all sorts of symptoms simply by reading them on the Internet. To my happiness, I finally found a doctor who calls me on my bullshit. As a result I've had fewer unnecessary tests and medications over time. Maybe at some point we'll both laugh off something that actually is serious. Which one will impact my longevity more? I can't tell you.
It's kind of like driving. You can become increasingly lackadaisical because you haven't had an accident recently, which invites accidents. You can become an excessively nervous driver because you perceive all possibilities, which invites accidents. Pretty much everyone who stays on the road an appreciable amount of time develops some balance between those two extremes.