Evidence Based Medicine is a good thing, but it was pushed so hard by its proponents that it ended up overemphasizing one particular kind of study as the only real way to know things in medicine.
Yes, absolutely, medicine should be evidence based. Yes, large randomized, double-blind, placebo-controlled studies provide a lot of information.
However, there are limitations with these kinds of studies.
First, it may not be ethical or practical to study some things in this manner. For example, antibiotics for bacterial pneumonia have never been subjected to a randomized, double-blind, placebo-controlled study.
Famously, there was an article pointing out that parachute use when jumping out of airplanes had never been subjected to a randomized, double-blind, placebo-controlled study. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC300808/
Later, somebody did that study: https://www.bmj.com/content/363/bmj.k5094 and found that parachutes made no difference, but it is not applicable to any real world case where you would use a parachute.
Which illustrates the second issue with evidence based medicine. Often, the primary outcome a large trial measures differs from what you really want to know, or the population studied has significant differences from the patient who is right in front of you. How to apply the results of the large study to the individual patient in front of you is still more of an art than a science.
Finally, I think there is a lesson from machine learning. It has turned out that, instead of writing more and more rules, feeding lots and lots of data to a neural network performs better in many cases. In a similar way, an experienced physician who has treated thousands of patients over decades has a lot of implicit knowledge and patterns stored in their (human) neural network. Yes, those decisions should be informed by the results of trials, but they should not be discounted, which I think Evidence Based Medicine did, at least in small part. During my residency, I worked with physicians who would examine and talk with a patient and tell me that something is not right and to do more extensive tests which would end up unearthing a hidden infection or other problem that we were able to treat before it caused major problems. They were seeing subtle patterns from decades of experience that might not even be fully captured in the patient's chart, much less in a clinical trial with thousands of participants.
So yes, these clinical trials are a very important base for knowledge. But so is physician judgment and experience.
Source: I am a physician.
The biggest issue, IMHO, is that clinical trials are often unethical. This is both in theory and especially in practice. I say this as a physician and clinical trial investigator.
EBM deals with this by saying ‘there is no viable alternative’, a remarkable statement of epistemological nihilism that enables much low-quality and pointless research.
Can you give an example of unethical trials where “there was no viable alternative” was what got the trial past an IRB? I’m more familiar with inverse complaints that trials are blocked by red tape and hypothetical concerns that are objectively small in actual QALY harm.
(I’m sure this varies by jurisdiction too; I have only heard bad things about US IRBs)
IRBs do not evaluate the value of a research endeavour. They are in fact unable to do this due to lack of knowledge and expertise. They approve trials which fit the mold of trials they have seen before. Why does a clinical trial get done? The main reason is that someone, usually a pharmaceutical or device company, is willing to pay for it.
Some recent examples of problems with clinical trials:
I'd have assumed it was cases where they give sick people the placebo despite having a reasonable hunch (but not a published study that people trust) that the real medication would actually save them.
I think of EBM as "where stronger evidence exists it trumps expert opinion" (almost all of the time). In other words, go down the hierarchy[0].
> I worked with physicians who would examine and talk with a patient and tell me that something is not right and to do more extensive tests which would end up unearthing a hidden infection or other problem that we were able to treat before it caused major problems.
Accordingly, I don't view this as discordant with an evidence based medicine practice, since you're not practicing in an area with clinical-trial evidence (and notably, expert opinion is evidence, albeit weak). If you told me you did routine urinalysis and blood cultures on all your admissions, I would view that as discordant and incorrect practice, regardless of whether expert dinosaur feels it saves lives in their experience.
I also view EBM as opposed to "science-based medicine". Just because something has a (theoretical) scientific basis does not mean it meets the standard to enter my routine practice; I need stronger evidence for that, which notably does not have to be in the form of an RCT, as you suggest[0].
> How to apply the results of the large study to the individual patient in front of you is still more of an art than a science.
Frankly, the practice of medicine is more art than science and EBM is a guiding principle that keeps us grounded to measurable outcomes.
This is a central point in the new book Outlive by Dr. Peter Attia. RCTs are great, but realistically we're never going to have long-term RCTs that give clear evidence on prevention of the chronic diseases that will probably kill most of us. A long RCT might measure the effect of a certain intervention over like five years but we need to think in terms of decades. No one is willing to fund those studies, and even if funding wasn't an issue the evidence would come too late for those of us alive today. So, we have to rely on weaker forms of evidence evaluated in a cost (or risk) versus benefit framework.
Yeah and the other thing is that with an advanced enough understanding of statistics you can make a study say almost anything and many physicians won’t be able to tell.
Source: My dad is a physician and I am an economics PhD who knows a lot of statistics and talks with him about these things.
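The multiple-comparisons trap behind "you can make a study say almost anything" is easy to demonstrate. Here is a minimal sketch in plain Python: everything is simulated noise, and the choice of 20 endpoints per study and 100 patients per arm is an arbitrary assumption for illustration, not taken from any real trial.

```python
import random

random.seed(42)
ALPHA_Z = 1.96    # two-sided ~5% threshold for a standard normal
N_OUTCOMES = 20   # endpoints measured per hypothetical "study"
N_STUDIES = 1000  # number of simulated studies
N_PER_ARM = 100   # patients per arm

def null_comparison():
    """One pure-noise endpoint: both arms drawn from the same distribution,
    so any 'significant' difference is a false positive by construction."""
    a = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    b = [random.gauss(0, 1) for _ in range(N_PER_ARM)]
    diff = sum(a) / N_PER_ARM - sum(b) / N_PER_ARM
    # Two-sample z statistic; variance is known to be 1 by construction,
    # so the standard error of the difference in means is sqrt(2/N).
    z = diff / (2 / N_PER_ARM) ** 0.5
    return abs(z) > ALPHA_Z

# Fraction of pure-noise studies that report at least one "significant" endpoint.
hits = sum(
    any(null_comparison() for _ in range(N_OUTCOMES))
    for _ in range(N_STUDIES)
)
print(f"Studies with >= 1 'significant' finding: {hits / N_STUDIES:.0%}")
# With 20 independent null tests, expect roughly 1 - 0.95**20, i.e. about 64%.
```

Nearly two thirds of studies measuring nothing at all still yield a headline p < 0.05 result if the analyst is free to pick among endpoints, which is exactly the kind of thing a physician without statistical training may not spot.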
Given how much evidence-free stuff is being used in medicine, it's not having been pushed anywhere close to hard enough (and the parachute example is just silly)
Another problem I have stumbled over is that evidence based medicine makes it increasingly difficult to deviate from established routines and modalities. Methodologies that have been around a long time will, by that very fact, have a larger pile of evidence backing their efficacy than a new method that might perform better but has limited patient study data behind it. I've seen how this stalls uptake of otherwise evident (non-patient-trial-based) improvements. It even seems that some manufacturers are well aware of this, and exploit their fortunate position by only incrementally improving methods at very low R&D cost.
> Later, somebody did that study: https://www.bmj.com/content/363/bmj.k5094 and found that parachutes made no difference, but it is not applicable to any real world case where you would use a parachute.
As a physician you are likely aware, but for anyone reading who isn’t: this paper is from the Christmas issue of the BMJ, which publishes “joke” studies. It’s not really meant to be taken seriously in any way.
But there is a serious point to be made, of course. This study involved jumping from stationary airplanes on the ground, which negates the whole point of a parachute (and hence, the control group survived just fine). It therefore "proved" that you don't need a parachute when jumping from an airplane, on the assumption that the results extrapolate to higher altitudes.
Nonsense, of course. But then there's a lot of randomized, controlled trials out there that are just as flawed, only in ways that are non-obvious to non-physicians, or even physicians with different specialities. "Study X proved Y" is never as straightforward as it seems to the lay public.
Yes, like all good satire there is a serious point behind it, but I think the way the GP referenced the two articles doesn’t make it clear that it’s a satire as opposed to a real example of poor/flawed EBM.
The GP didn't talk about a certain nuance and used evidence based medicine to cover all forms of science.
A double-blind placebo trial is the gold standard for testing causality, but we don't always have to go that far. We can use a hybrid of intuition and correlational tests, and we may not always need a placebo. The point is that the full causal test comes with a lot of technical challenges, and we have options for other tests that are less challenging.
There is a spectrum of correctness and rigor in science, and we should know when to use something extremely rigorous versus something less rigorous. The barbaric practices you describe come only from a lack of awareness of what statistical rigor and science are.
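One concrete reading of "other tests that are less challenging": a permutation test on observational data needs no placebo arm and almost no distributional assumptions, yet still quantifies how surprising an association is under the null. A sketch in plain Python follows; the dose/symptom numbers are invented purely for illustration.

```python
import random

random.seed(0)

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def permutation_pvalue(xs, ys, n_perm=10_000):
    """Estimate P(|r| at least this large | no real association)
    by repeatedly shuffling the pairing between xs and ys."""
    observed = abs(pearson_r(xs, ys))
    ys = ys[:]  # copy so the caller's list is untouched
    extreme = 0
    for _ in range(n_perm):
        random.shuffle(ys)
        if abs(pearson_r(xs, ys)) >= observed:
            extreme += 1
    return extreme / n_perm

# Made-up observational data: drug dose vs. symptom score.
dose = [1, 2, 3, 4, 5, 6, 7, 8]
symptom = [5.1, 4.8, 4.4, 4.1, 3.6, 3.2, 3.1, 2.5]
print(f"p = {permutation_pvalue(dose, symptom):.4f}")
```

A small p-value here only licenses the claim that the association is unlikely under no relationship; unlike a randomized trial, it says nothing about the direction of causation, which is precisely the rigor trade-off being discussed.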