Great article. This part especially rings true to me: "If researchers have an ideological bent, a meta-analytic null may just be an expression of the typical sentiments of researchers".
When I was in academia, it was increasingly the case that my peers thought of research less as a way to determine the truth and more as a method to influence policy and public opinion. If we thought something was 80% likely to be true, there was pressure to "close ranks" and pretend it was 100% true, and to avoid publishing anything that contradicted it. It is also well known that papers that support certain "sides" tend to be easier to publish (and in higher-ranked journals), and can yield more media attention. See, for example, this fraud in sociology - https://en.wikipedia.org/wiki/When_contact_changes_minds.
This may be better in the natural sciences, but in social science you should not trust any paper unless you read through and fully understand the methodology. Any non-experimental result has so much wiggle room in the modeling methodology that it's easy to generate any result you want. The actual percentage of papers with credible results is very low, much lower than laypeople think.
Once, years ago, I got into a debate with somebody online who just kept dropping links to studies after making a statement as if it proved them correct.
One day, I got bored and decided to actually read every single study. The studies said nothing close to what the debater suggested and seriously opened my eyes to just how terrible some studies can be in terms of quality.
Now, I just assume that anybody who starts link-bombing in a conversation has no idea what they're talking about and can't engage in a logical discussion.
Yep. This is why it makes me nearly physically ill when I hear people use terms like "science denier" or say things like "<insert political party here> don't believe in science" or "our laws should be based on science".
It is invariably some of the most scientifically illiterate, ideologically entrenched, and intellectually lukewarm people who spout this garbage as a retort to any sort of argument with which they do not wish to engage.
The tricky bit is there really are people who think that science is "just an opinion, man". See e.g. "teach the controversy" regarding evolution vs. creationism. There's certainly a place for "<X> don't believe in science" and "our laws should be based on science".
But the argument is so often misapplied that it's become meaningless.
I have argued against 5G deployment, for example, not based on outlandish "zomg Bill Gates George Soros mindcontrol!!!11" or "brain tumours!!!11" nonsense; all of that is clearly nonsense. But the science is a lot less clear that there are zero effects than it is sometimes made out to be, and there are also the ethical considerations of informed consent. I was, of course, immediately lumped in with the crazies and called a science-denying conspiracy theorist, by someone with no expertise in the field who said I need to "listen to the science", in spite of my argument looking nothing like the anti-scientific nonsense from David Icke and the like.
Don't even get me started on COVID – any attempt to inject even the slightest bit of nuance was met with "you are literally murdering people with your unscientific nonsense!" and you were immediately lumped in with COVID-denying anti-maskers or whatnot. At some point this stopped being a debate about trade-offs involving science and medicine on one hand and basic liberties and freedoms on the other, and became some sort of moral crusade (and it seems it still is; there was a conference this month where masks at all times, full proof of vaccination, and a PCR test were still required, which seems a bit much for 2023).
A big issue is that any sort of nuance is often met with the most uncharitable interpretation because the genuinely crazy people have been getting so much attention.
The thing is, based on so many examples of people citing “the science” in ways that are clearly exaggerated or unsupported, “just an opinion” isn’t too far off.
It shouldn’t be this way, but there’s a lot of undermining of trust in science because of this stuff.
Take evolution as an example. How many layers of scientific expertise have to be understood to really claim that you understand how evolution works? Archeology, biology, history, geology, genetics…potentially more?
At some point there’s a trust factor involved in accepting evolution.
Now apply that same realization with climate change.
The more complex, the more moving parts, the easier it is to find a part to be skeptical about and people will do exactly that. Especially if they are given reason to believe that the science is just there to support a political objective.
In the end, unless science can be easily replicated and demonstrated (gravity, boiling water, killing bacteria, generating power, flywheels, etc) it will boil down to trust for the vast majority of people.
Your last point is critical. People at the end of the 20th century had come to “trust the science” because it conferred tangible power on those who wielded it. “Science” could send a man to the moon or bounce a phone call off a satellite to the other side of the world. Your average person doesn’t have to understand the rocket equation, or trust NASA. They can watch a launch in Florida and see with their own eyes the awesome power of “science.”
Then, folks started invoking the authority of “science” in connection with disciplines that don’t confer tangible power. For example, if “education science” worked, we would know it. It would convey power whose results people could see with their own eyes without needing to pore through studies or put any faith in “education experts.”
I've come to the conclusion that science needs to be just another form of religion (without the theistic element). I don't have the time or the energy to go through the research to determine if climate change is real. I put my faith in the scientific process, which is probably the most successful thing we've ever come up with as a species. I'm not sure why I should believe some random dude, whoever he might be or what credentials he might have, on the radio/TV over the scientific community. Sure, they've gotten things wrong, but their success rate and their usefulness to our species is infinitely better than some politician being a climate change denialist just to appeal to voters. What process does he have and why should I trust it more than the scientific method?
I disagree with that attitude. I think most scientific information that comes to us through the media is reliable. If people stop trusting science (or continue to distrust it), we are just left with superstition and religion. It was really hard for people to cope with the fact that during COVID we started with the best explanations available, then improved on them as we learned, and some ideas or expected safety practices changed. A good example is that many viruses spread through touch, while COVID spread through the air, and that was a surprise.
People take that in and say, "I just don't trust anything." That is a problem in America: people stop believing in objective facts, saying they go against their beliefs. This is a major reason America doesn't have enough engineers, scientists, and mathematicians; people haven't learned that intuition can be wrong, and that you can overcome a strong expectation by studying something, debugging a program, or whatever.
Yes. Even if the original information is correct it tends to be quite distorted by the media (the headlines are almost always hyperbolic), let alone when it gets picked up by the public and repeated.
It depends on how you define the “scientific community.” If your kid has a staph infection, there are people who make antibiotic creams that will make it disappear in days. They’re scientists. Their science gives them power you don’t have to “believe in,” because you can see the results.
Most people with a Ph.D. in a field with “science” in the name aren’t scientists. They work in fields that don’t have the same level of rigor as nuclear physics. (My degree is in aerospace engineering, and many real scientists would consider us bumpkins for how undisciplined our field is compared to theirs.) Those fields confer little to no power to produce tangible and undeniable results.
Like how North Korea is a ‘democratic republic.’ If you have to put ‘science’ or ‘evidence based’ after your discipline, it’s probably not science (or science with lots of problems).
See:
Social science
Political science
Evidence based naturopathy/chiro
You're right about this. And it makes discussion, much less "debate", so difficult. I'm pro-5G but willing to entertain the anti-5G evidence because what if there was some and it was true or at least unexplained?
There's another important part of this, though: oil companies pay researchers to publish views disputing the veracity of climate change research and predictions. That's why they say it's science denialism; those people are making shoddy arguments. Disagreeing about how much you care about global climate change is not science denialism, but making things up to claim there's no science is denialism.
Yes, certainly science can inform policy decisions. How do I put this...
Science has nothing to say about what we should set as the objectives of our policy. Science can, however, inform our approach to attaining that objective.
My favorite term like that was “scientific consensus”. I used to point out that up until the 17th century the scientific consensus was that the Sun revolved around the Earth. I would get nothing but blank stares in response. It turns out that people who talk about “scientific consensus” typically don’t know whether the Earth revolves around the Sun, or vice-versa.
Yeah, but it means they're far more likely to be correct. Iconoclast / Galileo gambit outliers are near nil.
We collectively know more about everything every year, science has generally given us a self-correcting living corpus.
The argument you're making is the same made by anti-intellectuals: don't trust institutional knowledge and consensus because it's been wrong in the past, look at mistake X. It's something like a systemic ad hominem.
They are not near nil. Every idea ever conceived has either been rejected or altered to match new observations.
That means that most ideas will end up being wrong in some way. The smoking-gun laws of nature are few and far between.
If you consider what % of ideas throughout history were plain wrong, it’s hard to believe that we have finally figured it all out and the consensus finally reflects reality most of the time.
We're talking about wrong as in Galileo, as in radically upending, not modifications or normal gradients of wrongness. Knowledge is generally accretive, but we pay a lot of emotional and categorical attention to perspectives that break dogma. The vast majority absolutely do not, and they fade innumerably into the background of the industry they support.
It's easy (and trite) to cite the extremely limited pool of successful dogma-contrarians over time. Try making a list of the failed contrarians, and then another of all the knowledge (and its contributors) that didn't need fundamental reconsideration at some point in modernity.
The critical point is that contrarians are always necessary for science. If we always ignored dissenters then we would never make any progress. Even if most are wrong, contrarians are essential.
Thus deferring to the ‘consensus’ and implicitly ignoring contrarian views stops science in its tracks.
I don’t understand what you mean in your first paragraph, and I don’t support dogmatic positions so I’m not sure why you refer to ‘dogma-contrarians.’ Contrarians are usually anti-dogmatic.
“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore, all progress depends on the unreasonable man.” Unreasonable contrarians are generally annoying but essential.
The fallacy is assuming that consensus = probably true.
This is only correct when we have good evidence and good methods. That is why consensus breaks (like heliocentrism) occur during technological changes (the development of the telescope).
The critical point the parent is missing is that many fields have terrible data and terrible methods so the ‘consensus’ isn’t likely to be true (eg smoking is good for the nerves, butter is bad for you, the globe is too big for humans to cause changes)
It takes a lot of intellectual integrity to admit that many disciplines are not generating anything resembling ‘truth’. It’s easier to ignore and say: but the consensus is the best we have! Sometimes the heterodox view is the best.
It is especially frustrating when scientific ideas are applied to moral dilemmas. I see this from the far left wing frequently. The irony is that this same mindset is what led to the proliferation of eugenics apologists in the late 1800s and early 1900s. The scientific consensus had absolutely no objections to eugenics on scientific grounds. In fact, "science" would seem to support it. Yet it is widely agreed nowadays that this practice is morally reprehensible, because humans were able to put aside their hubris and apply their moral reasoning.
Plenty of people in the scientific world then realized that eugenics was unacceptable, just as plenty of people understood that slavery was wrong. Which doesn't take away from the fact that plenty of terrible policies came about because leaders did hold those views. I don't know what left-wing ideas you've seen that are wrong; I'm sure I've seen some left-wing ideas that are wrong too. I'm more worried about what the leaders and effective speakers in groups promote. Saying that I'm making you more free by taking away your choice of library books, or your choice about medical care: that's not an extreme view that only 1% of people have. It's pretty much the consensus leadership view of Republicans.
This is one reason Wikipedia discourages using primary sources (i.e. individual studies) as references. Not that using secondary sources is perfect either, but you can "prove" almost anything by linking to studies as there are just so many of them. Do 1,000 studies on homeopathy and some of them will show a positive effect (even when done well, ignoring there are also many bad studies on these kind of topics).
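The arithmetic behind that is easy to check. A minimal sketch (hypothetical numbers: 1,000 well-run studies of a treatment with no real effect, tested at the conventional 5% significance threshold):

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

ALPHA = 0.05        # conventional significance threshold
N_STUDIES = 1_000   # hypothetical pool of studies

# With no true effect, each well-run study still reports a
# "significant" result with probability ALPHA (a false positive).
positives = sum(1 for _ in range(N_STUDIES) if random.random() < ALPHA)

print(f"{positives} of {N_STUDIES} null studies report a positive effect")
```

On average about 50 of the 1,000 studies come out "positive" purely by chance, which is exactly the cherry-picker's raw material.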
> One day, I got bored and decided to actually read every single study.
Nothing is more entertaining than the meeting after a business presentation where someone fact-checks the references in the PowerPoint. When we have vendor presentations, we started doing this because... so many claims, so little truth.
Maybe assess the argument by the first two studies. If either of them does not support the point (or, as has frequently happened for me, actually opposes the point), then you can reply for posterity and no one else needs to replicate your debunking effort.
“Your first two studies not only don’t support your argument but invalidate it, and I stopped reading after that.” ends most disagreements.
You will have probably done more work than the poster who often has just discovered keyword searches and related articles in google scholar or science direct, but everyone will learn something.
I'm not saying you're wrong — I've got my name on exactly one paper so what would I know — but that being true would suggest that peer review is fundamentally broken.
No, it would suggest that every paper has something in it that can be considered a fatal flaw by a person biased against it. Peer review is precisely supposed to admit this and minimize flaws in conclusions given inevitable flaws in methodology.
Sort of a research paper equivalent of:
"If you give me six lines written by the hand of the most honest of men, I will find something in them which will hang him." by Cardinal Richelieu
Peer review is not a certificate of truthfulness; peer review just says that a grad student reading the paper did not find anything too suspicious and that it looks like a typical paper in that field.
Peer review, at its core, is a social consensus process. It can work, but it is structurally inclined to propagate agreement.
I wonder if it serves to reinforce this problem, as those more motivated to reinforce certain literatures or perspectives will surface those repeatedly in reviews.
There's a popular work that comes to this conclusion in its beginning chapters [0]. The basic premise is that the act of science and knowledge acquisition cyclically devolves into ideology and is then disrupted. Typically those who initially disrupt a scientific dogma are not treated well. Eventually the old guard literally dies off and the new ideas can begin to take hold.
[0] The Structure of Scientific Revolutions, T. Kuhn
> The basic premise is that the act of science and knowledge acquisition cyclically devolves into ideology and is then disrupted.
This can happen of course, but it's not really what Kuhn is talking about. Kuhnian paradigm shifts are to science what massive refactorings are to a codebase: the problem isn't that the old code was wrong (though it might also be wrong), it's that it can't be extended to meet new requirements. And while some of us are better at writing extensible code than others, no one gets it right every single time.
Not even that. I was a grad student that reviewed a paper submitted to a well-known journal from a heavyweight in my field. The paper's own data showed that the authors were reporting 95% noise. My advisor rejected the paper only to see it published a few months later (in another famous journal) after the authors removed the data that allowed us to detect the noise throughout the data.
In 2nd grade we learned the scientific method. I don't remember peer review being a thing.
Replication was critical.
Also, I've seen peer-reviewed papers with atrocious stuff in them; unless I know the peer, it doesn't mean much to me. Give me 100 independent groups coming to the same conclusions. That has far more weight than a few of your friends/professors reading your paper.
I mean I think peer review kind of is fundamentally broken, but at the same time this phenomenon isn't necessarily a sign of it.
Even within technical papers, let alone pop writing or internet discussions, many of the claims citations are provided for are broad e.g. "X can increase people's anxiety". A very rigorous study might exist showing that when X happens a specific way it does increase some form of anxiety in some specific subset of the population, but using it as a citation for the broad statement without further context can be misleading at times.
It's in fact possible that by considering different subsets of the same statement you'd get an opposite directional effect. That can certainly be used to confuse people, especially in a situation where it is unlikely most readers will dive deeply into the cited works.
As far as nit picking - the same general principle applies. There will always be some tradeoff between scope, rigor, and available resources to do the study. If we waited for papers to be perfect in both correctness and interestingness/utility there would be almost nothing to ever publish.
So I think the systemic problem here is more an undervaluing of review articles and textbook-type resources (both reading and writing them) in favor of vomiting out random individual paper citations for whatever claim. Science needs more heterogeneity in the roles different PIs fill for the system.
Improving peer review process would be great (and might indirectly help), but I don't feel it's the root.
Not exactly. Control for that in a future experiment?
To be fair, as you start to leave chemistry, true knowledge (the goal of science) is impossible to find. The best we can get is little glimpses of the truth.
To be fair, I have seen studies that were pretty bulletproof. It just seems that most are lazy, or that it's simply impossible to control all the variables.
I'll often read through the studies in an argument but it is very unfeasible to keep up with these people because they can just keep throwing crap at you, and if you disagree with the study, they'll throw another at you until you give up. I remember reading through one 80 page paper on how a public option for healthcare would bankrupt us, but if you actually read through the thing, it never accounts for all the money saved by not being spent on private healthcare (which would fund the entire thing). My local power company had a similar situation where they posted a huge study explaining why solar was costing them a bunch of money, but if you actually read the study it explains that it considers lost income from solar as costing them money (in the same way that moving to a more efficient AC system would "cost" the power company money). It's all bullshit, and it's too easy to abuse studies to defend your position.
If someone replies within a few minutes with a link to a study, you can usually just assume it doesn't even support their point unless they're actually a researcher in the field.
It takes way longer even for experts to read and digest a paper thoroughly to evaluate the methodology and results.
Googling for 5 minutes and skimming the abstract for keywords that might be related to whatever you were arguing about is usually the norm online.
It takes years of training and feedback to learn to properly evaluate a paper in your own field, and that's after years of undergraduate education at least. Most people online skip even the basic textbook level background and think they understand what they're reading.
mRNA vaccines are completely safe - Obviously false just like it is false for any vaccine since there are always going to be rare side effects in some members of the population. They very well may be just as safe as any other vaccine, but the truth is there are some things we won't know until a lot more time goes by. You can't say what the long term side effects are of something until it has been around for a long time.
Wind power is better for the environment - Similar to above, we don't really have data on the full lifecycle environmental costs of large scale wind farms. This is especially true because a good portion of the impact depends on how long the turbines last, what type of repair/recycle technologies are developed, and what type of policies get applied to recycling them. We see some things that look promising but others that are concerning.
Saying "mRNA vaccines are completely safe" is obviously not pedantically true. The same could be said of anything, not just mRNA vaccines. Nothing is ever absolutely 100% safe.
It would, at least in theory, be better to realistically say what we knew and didn't know about the risks.
But public communication about those kinds of risks and tradeoffs is hard. If you say there might be any kind of a risk at all, some people are going to get hung up on that or grossly overestimate those risks compared to the benefits (or to the risks of not getting the vaccine -- also not something everyone would be affected by but clearly a non-zero risk).
Taking that into account, "completely safe" might be a better approximation than most other ones you could make.
While I understand what you are saying, I would disagree. Saying that something is completely safe when it isn't (as you say nothing is completely safe) makes it appear that people are being lied to when someone does have some type of reaction.
I think we are better off thinking of risk in terms of everyday risk management. Comparing risks to things like driving 10 miles, or getting hit by a meteor while sleeping, can help create a better understanding of the risks.
I'm not sure I agree with it either. I'd personally much rather take a realistic estimate (in cases we have one anyway) than a simplified half-truth. But I think I can see a rationale for why some people doing the communicating may opt for the latter.
Comparing to something else that people might have a more realistic intuition of sounds like a good idea.
It is sort of like how, when some people want to prove something is true, they quote the Bible. I find the behavior especially strange when they are quoting the Bible to prove the Bible is true.
Sociology lives in an uncomfortable place at the intersection of "important" and "difficult". Designing a rigorous experiment on human beings is vastly more difficult than on something simple like an atom or a mineral, and it gets so much worse with groups of human beings.
STEM people like to dismiss it as useless because it doesn't produce the same kind of rigorous results. But sociology is nonetheless important. Like it or not, we have to make decisions about how the world will run, and the fact that we don't have perfect information doesn't let us opt out of that.
Combine that with all of the usual human failings -- pettiness, meanness, closed-mindedness, greed, etc. -- and it sounds impossible. But the field does, slowly, gather data and formulate theories.
This isn't made better by the fact that most STEM people imagine they can read primary source material and understand it -- a mistake they wouldn't make for a "hard science" in a different discipline. Like every field, the frontiers are based on huge amounts of background material, which is even more vast for a field that's more complicated than the nice, neat laws of physics or chemistry.
None of that excuses those human failings of sociologists. They need to do better and hold each other to account (and not just substitute their own failings in place of the old ones). But we do need to recognize that this work is important. The world is complex and difficult, and we'll make better choices if we try to understand it rather than dismiss it as unknowable.
While I agree that there is some work in the field that is important, there is an absolute deluge of ideologically driven garbage. There is also a lot of garbage of the "yeah, duh, of-fucking-course" variety: studies like "negative interactions with community reduce feelings of belonging"... uh, yeah, no shit, Sherlock.
And then even the more important stuff cannot be investigated in the same rigorous manner as other scientific disciplines.
I think we just need to stop calling sociology papers "scientific". Fundamentally, they are not, and should almost always be taken with a massive grain of salt, and they damn sure shouldn't be influencing public policy decisions to the degree that they currently do.
> There is also a lot of garbage of the "yeah, duh, of-fucking-course" variety. studies like "negative interactions with community reduce feelings of belonging" ... uh yeah, no shit sherlock.
I think these studies are useful, first of all to test whether the claim is actually true, because sometimes "well, duh" turns out to be wrong, but also to quantify the exact effects. Is it a large effect or a small effect? How large exactly? Which factors contribute to it? What exactly is the breakdown of the effects? It might be that 20% of people are affected and 80% are not, or perhaps everyone is affected. There's often all sorts of non-obvious nuance that's possible, which can be very significant.
Yes, that is true. However, I would posit that the actual result of the most well researched, scientifically backed, rigorous results in all of sociology essentially amounts to empowering governments and private companies to produce more effective advertising, propaganda, etc.
That seems overly cynical. At the end of the day sociology is like any other science: "find out more about the world". Doing that is rarely a bad thing.
> There is also a lot of garbage of the "yeah, duh, of-fucking-course" variety. studies like "negative interactions with community reduce feelings of belonging" ... uh yeah, no shit sherlock.
The importance of these studies is often not the expected result, but the average magnitude of the effect and the parsing out of confounders.
Do chronic minor negative interactions have more of an effect on feelings of belonging than a single major negative interaction? This has policy implications.
What are the effects of negative interactions on highly social versus highly non-social individuals? What sorts of coping mechanisms do these two very disparate groups use to deal with negative community interactions?
I can think of a bunch more questions answerable by this type of research that don't have obvious answers. The importance of these questions depends on the magnitude and confounders of the original question.
> I think we just need to stop calling sociology papers "scientific". Fundamentally, they are not
They are, or are capable of being, as scientific as Darwin's crude observations of finch phenotypes.
I also do not regard Darwin as particularly scientific, though he broke open some flood gates for very scientific research.
And your comments on policy implications are precisely what I am saying we should avoid. Why would we set policy based on studies which 1) are not scientifically rigorous (based on self-reporting, surveys, small populations, low ability to control confounding variables, etc.), 2) do not actually suggest that policy would be effective in rectifying the problems identified in the study, and 3) do not necessarily identify problems (e.g., is it really the business of the government to set policy with the aim of optimizing some self-reported individual metric, such as feelings of belonging?)
We shouldn't. If self-reports are ever used to set policy (and they should be) this should be on an individualized, ad hoc basis.
If a squeaky wheel comes to you, oil it. Maybe ask around if there are other squeaky wheels that no one in power is paying attention to. Oil them too. But don't go around oiling every wheel as a matter of policy, as you will end up with a bunch of overly oiled wheels having problems from over oiling.
Plenty of sociological studies (such as education interventions) aren't based on self-reports, but tested results.
> is it really the business of the government to set policy with the aim of optimizing some self reported individual metric, such as feelings of belonging?
The general governmental purpose here wouldn't be to make everyone feel like they belong, but to decrease as much as possible mass shooters, abusers, and the like. And to make it easy for people to report problems they are having, or for outsiders to discover problems. The Turpin case could have been nipped in the bud if the adults and children who noticed how unkempt and smelly the oldest daughter was when she was briefly publicly schooled had intervened (https://en.wikipedia.org/wiki/Turpin_case ).
The problem I have with your take is that you mistake the voting population for a bunch of libertarians, a very commonly held viewpoint here on HN. They are not.
Voting blocs have what they see as problems they want to change. If you come at them with sufficient evidence for a plan that may work to fix the problem, there is a good bet they'll vote for it. If you decide science is too hard and that we shouldn't do that silly science stuff, they will line up right behind the next authoritarian who says 'make america simple again' and enacts devastating plans that they believe will solve the problem.
I mean, you can get in front of the voting bloc and tell them that the status quo is just fine if you like, but expect it to be hard work and don't expect much success.
I think I've learned the most about life as a citizen from sociology lessons and a book written in the late '70s or '80s. The conclusions they drew then still stand today, and I'm sure they will stand as long as there are humans. Such an eye-opening discipline, and I remember it with fondness. I'm always surprised when people start bashing sociology, for reasons unknown to me.
It is not better in natural sciences. Alzheimer’s disease treatment based on the amyloid plaque hypothesis being the prime example. Basically researchers closed ranks and doled out grant money disproportionately to the amyloid hypothesis researchers.
This is sort of like adding momentum to a gradient descent algorithm, though. It is rational to congeal support around a plausible hypothesis to see if it pans out rather than pursue all hypotheses (whatever that would even mean) in a desultory fashion. In the case of amyloid plaques there may have been some academic misconduct involved as well.
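To make the analogy concrete, here is a minimal, illustrative sketch of momentum in gradient descent; the function, learning rate, and momentum values are invented for illustration, not taken from anything above:

```python
# Gradient descent with and without momentum on f(x) = x^2 (gradient 2x).
# Momentum keeps effort flowing in a direction that has recently paid off,
# loosely analogous to a field rallying behind a promising hypothesis
# instead of re-evaluating every option from scratch each step.

def descend(steps, lr=0.1, momentum=0.0, x0=10.0):
    x, v = x0, 0.0
    for _ in range(steps):
        grad = 2 * x                   # gradient of f(x) = x^2
        v = momentum * v - lr * grad   # velocity accumulates past gradients
        x += v
    return x

plain = descend(30)                      # vanilla gradient descent
accelerated = descend(30, momentum=0.5)  # same step budget, with momentum
```

Both runs converge toward the minimum at zero; the cost of momentum is that it can overshoot and oscillate, much as a field can over-commit to a hypothesis (arguably the amyloid story).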
Sometimes I feel like non-scientists have this idea that scientists should be super-rational actors and never be bamboozled or be wrong, but frankly, that isn't realistic. Scientists are fallible, have finite time, and are subject to social trends and pressures like the rest of us. Expecting that they always get it right is unrealistic and not good for science.
Scientists should not shut out other ideas for two decades even though there were serious challenges to the hypothesis the whole time. Part of being a good scientist is admitting your chosen hypothesis isn’t correct and that it’s time to create a new theory based on the experimental evidence available. Otherwise, it’s just a research based cult.
This almost never happens, to my knowledge. Most fields have people pursuing alternative hypotheses even when one thing is in vogue. For example, in the case of Alzheimer's disease there were and are a lot of other threads of research, and I would bet that your typical AD scientist had a good grasp of many of them regardless of the research they were focusing on.
Yes. At least there is a plausible argument for it in many contexts. Imagine a suite of ten hypotheses, one of which is correct but each of which requires 6 effort units to prove or disprove or discover or whatever. If we only have 10 units of effort to devote to research, we won't get anywhere if we divide them equally, but working serially we can at least eliminate hypotheses one at a time.
In reality I would expect the effect to be even more pronounced because progress on a hypothesis is almost certainly non-linearly related to the number of people working on it (with some point of diminishing return, of course). It is totally reasonable for a community to more or less work on proving or eliminating the few most reasonable hypotheses at a time rather than to spread themselves over the (frankly enormous) space of hypotheses relevant to a particular area.
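The toy model above can be written out explicitly; the unit counts below are the commenter's hypothetical numbers, not real data:

```python
# Toy model: ten hypotheses, one true; each costs 6 effort units to
# conclusively confirm or eliminate; only 10 units of effort exist.

HYPOTHESES = 10
COST = 6      # units needed to resolve a single hypothesis
BUDGET = 10   # total units available

# Strategy 1: divide effort equally. Each hypothesis gets 1 unit,
# below the 6-unit threshold, so nothing is ever resolved.
equal_share = BUDGET / HYPOTHESES
resolved_if_divided = HYPOTHESES if equal_share >= COST else 0

# Strategy 2: work serially, fully resolving one hypothesis before
# starting the next. The same budget resolves at least one.
resolved_if_serial = BUDGET // COST   # 1 resolved, 4 units left over
```

So with this budget, serial work settles at least one question while equal division settles none, which is the asymmetry the comment points at (a full sweep of all ten hypotheses would of course need 60 units).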
I'd hardly argue that our system of allocating scientific research effort is perfect, but how could it be?
> Imagine a suite of ten hypotheses, one of which is correct but each of which requires 6 effort units to prove or disprove or discover or whatever. If we only have 10 units of effort to devote to research, we won't get anywhere if we divide them equally but we can eliminate each hypothesis if we work serially.
A model of research without evidence demonstrating its accuracy is not convincing.
You might claim that we could never have discovered the Higgs without such coordinated efforts, and I could counter that without such coordination, we might have sooner discovered and advanced alternative paths that require considerably less funding and coordination, like wakefield accelerators.
I don't think science that advances by stepwise consensus will outperform random, independent search in general, but only in very limited scenarios.
> It is totally reasonable for a community to more or less work on proving or eliminating the few most reasonable hypotheses
Only if "the community" consists of largely independent thinkers that reach their own conclusions, and are not influenced by fads or celebrity personalities. That's the only way to actually ascertain "the most reasonable hypotheses", and unfortunately, scientists are not immune to such influences.
I think current funding mechanisms for science select for non-independent thinkers. At the beginning of the scientific era, most researchers were funded by patrons, which supported much more independent research (at least, independent of other researchers' opinions, if not of the patron's).
If you are a physical anthropologist there are many questions you can not ask, and many findings that people do not want to hear. Tread carefully, because one wrong step can end your career.
“ thought of research less as a way to determine the truth, but just as a method to influence policy and public opinion. If we thought something was 80% likely to be true, there was pressure to "close ranks" and pretend as though it was 100% true, and to avoid publishing anything that contradicted it. ”
That seems to be the general spirit of this time. People feel the need to take sides and then make sure their side wins. The same happens in journalism: most journalism these days seems to be more about supporting a viewpoint and less about conveying neutral or complete information.
Sad that scientists also feel that they should be activists.
I'm not sure if that particular study is a good example of this as everything and its aunt was getting published about COVID back then. It is disappointing that other studies were shut down on the assumption that the mortality results from such a limited (and fatally flawed) analysis were correct, especially given previous knowledge of the side effects and dosing considerations of hydroxychloroquine.
The rational response to this point-counterpoint is to decentralise power and let people take their own risks as far as we are able, for the truth is a rare and delicate flower which we cannot reliably identify. The only fair option is to let people make their own decisions.
But for some reason the level of abstract thought required to justify that is just too high, and people can't overcome their instincts. It seems that nearly every time there is a crisis, the body politic centralises power and usually gets a much worse result than just letting people do what makes sense to them. Then people assume that because they did something and the world didn't literally end in fire, what they did must have helped. Then in the good times they do their level best to run down whatever reserves are available and prepare to go into the next crisis with nothing but elbow grease and optimism.
I don’t entirely disagree, but I think you lean too far in seeing the world as entirely populated by independent individuals whose actions don’t affect others.
I am 100% fine with seat belt laws because other peoples’ fatality collisions affect me financially and in traffic jams.
Society is a complex system and I don’t think “just let everyone do whatever they want” is a great answer when many of those things impact others, directly or indirectly.
I would venture to say he doesn’t want someone dying for the usual reasons, but that isn’t the point. The point is whether or not the state should prevent someone from engaging in risky behavior. That is what the pp thinks is justified due to traffic jams and insurance premiums (etc).
It’s a stretch to directly couple seat belt wearing to one’s own value of their life.
While (afaik) seatbelt research is pretty definitive a very closely related area, car seat research, is plagued with confirmation bias, FUD, and a lack of interest in exploring alternatives (car seat makers are the ones who fund the crash tests)[0]. It’s crazy to go to hyper modern cities that don’t put children in car seats (Seoul, for one) while in the US you’d hear shrieks of horror if your kindergartener got into a car without one. So who benefits from car seat laws? It seems like primarily car seat manufacturers.
Back to seatbelts: I don't know why someone would choose not to wear one, but if they have a reason that they value more than the safety a seatbelt provides, ultimately it affects them and their family far more than it would ever affect me. People need agency over their own lives. Wearing a seatbelt seems like a minor inconvenience to most people, but it may genuinely impede others.
New Hampshire doesn’t require adults to wear seatbelts and doesn’t have as many per capita motor vehicle fatalities as neighboring states. In fact, it has the second lowest rate of deaths per 100M miles traveled in the country (obviously very different from large urban areas). So an assumption about seat belt laws equaling fewer vehicle deaths may not even be a valid assumption.
I think the generous interpretation is that the risk of being late or paying more is a reason to weigh against someone's freedom to do something irrational. (I tend to disagree on this particular topic.)
They're making an enlightened-altruist argument within the framework of radical libertarian values, so they need to motivate it in terms of personal benefit.
If you're talking to someone who already thinks top-down regulation can be justified by saving sufficiently many lives, you don't need an argument like this.
We've currently got the West panicking about a war in eastern Europe and ratcheting up the escalations (speaking of which, I see we now have a couple of incursions into bona fide Russian territory - hope they weren't armed with NATO weapons). The odds of a global nuclear conflict are hopefully not in the double-digit percentages, but I can't see why they aren't at a historic high.
Before that, we had mass lockdowns, economies shut down, and people under something that wasn't called house arrest only because it was too industrialised - based mainly on evidence that turned out to be wrong, and with solutions that didn't work as advertised.
Prior to that, we were seeing the start of a general economic catastrophe, likely because the major economies refuse to let people secure long-term access to cheap energy. There has been a big push to lock economies into inflationary currencies that are being used to prop up what appear to be failed, wealth-destroying businesses. There is a pretence that companies making regular, sustained losses are normal and investable.
I don't want to talk about seatbelts. I don't think they are that important. I don't care. I want people to stop this hyper-collective power accumulation that is doing a remarkable amount of damage. All of the major problems are being caused by huge, centralised moves of organised and (in 2 of 3) frightened people not thinking clearly about where priorities ought to sit. We knew all the way through that people aren't good at predicting outcomes of policy.
Just a note that mandated mask-wearing, stay-at-home orders, and business closures happened in the 1918 Spanish flu in much the same way. The world didn't suddenly become more authoritarian in the last ten years.
That was also part of an iconic period that marked the lull between two of the bloodiest wars in human history and the partial collapse of civilisation in Europe, which was repaired only thanks to the benevolence of the US. It presaged one of the worst periods of economic collapse, too.
If authoritarianism is on par with the 1920s we are in very bad shape. That was part of the incubation period for the people who ended up shovelling Jews into ovens.
You have a very odd way of framing historical events. The US govt's own website contradicts the claim that assistance with rebuilding Europe was out of "benevolence":
> The necessities of commercial growth dictated continued government support for overseas private investment. That, in turn, drove the United States to further engage with both with Latin America and the rebuilding of Europe in the 1920s.
I'm just saying; we both seem to be pointing out that current policies look a lot like the sort of thing that might have been found in the 1920s. The 1920s turned out to be setting up the 1930s, where people thought it couldn't get worse. Then the 1940s where they were proven wrong.
I don't know why you expect me to relax when you compare the situation to 1918. 1918 was when the seeds were being planted for one of the greatest episodes of suffering that the world has seen to date. The policies set in motion around then and the philosophies in vogue around then were indefensible disasters. The upside of the 1950s->2020s was that people were so shaken by the failures of 1910->1950 that they changed tack. The ones that didn't, in China and the USSR, who doubled down on authoritarianism then turned into their own disasters but at least they focused inwards. The US can consider itself lucky that it only had to deal with the great depression and an acceptable rate of needless war casualties.
> You have a very odd way of framing historical events. The US govt's own website contradicts the claim that assistance with rebuilding Europe was out of "benevolence":
That is nitpicking. Investing huge amounts of money to rebuild a shattered continent is benevolent no matter what the motivations of the US government's website happens to say. People do things for reasons. Sometimes they do things that are benevolent because it benefits them.
The world did absolutely suddenly become more authoritarian in the past ten years (the US, China and Russia all have become more authoritarian and so have many other countries). We're not back to 1940s levels of authoritarianism yet, but if that happens it will be much worse this time around thanks to technical progress.
I can appreciate the argument. The US for example has moved to illegalise abortions and be hostile towards immigrants. And to some extent there was a global move away from authoritarianism in the late 20th C which seems to be stalling.
But you can't forget that the US even in mid/late 20th and early 21st centuries underwent a host of authoritarian changes:
* Prosecuting unjustified (and in some cases unpopular) wars in Korea, Vietnam, Afghanistan and Iraq
* Aggressive support of violent coups and regime changes in South America and the Middle East, usually in favour of right-wing authoritarian strongmen
* Illegalising psychoactive substances for medical or recreational use, simultaneous with testing those drugs on unwitting US citizens
* Spying on and surveillance of US journalists and activists
* Political suppression and even persecution of left-wing ideas
* Increasing militarisation of police and use of violent means such as tear gas or rubber bullets to control protests
I won't repeat what others have said. I'll just encourage you to question your own beliefs just a little bit. Setting aside hot button topics, currency cannot be perfectly neutral. It is either inflationary or deflationary. Inflation is much much much much much preferable to deflation.
These are big topics. You're taking a very reductive, binary approach, and it's hard to even engage with.
I'd argue that none of these things are hyper-collective, though. These things are authoritarian. I'm not going to argue with you about the Covid response. But in the case of economic management, the policies you describe aren't meant to benefit the collective. They are meant to benefit a small elite.
> All of the major problems are being caused by huge, centralised moves of organised and (in 2 of 3) frightened people not thinking clearly about where priorities ought to sit.
This is not true of all major problems, most are caused by greed (corporate and individual). Of the ones caused by fear-based centralized power, that's authoritarianism, not collectivism. I can tell because the line I quoted perfectly describes what's happening in my decidedly non-collectivist state government (and many others) to threaten my well-being and potentially my life.
>The odds of a global nuclear conflict are hopefully not in the double digit percentages, but I can't see why they aren't at a historic high in history.
It's not even close. There are many precedents for nuclear powers fighting each other directly without triggering a nuclear war. For instance:
- The Korean War. It wasn't a proxy war, Western troops were directly fighting Chinese soldiers and the Soviet air force.
- The Sino-Soviet border conflict in 1969. Two nuclear powers fighting each other directly on their own soil.
- Indo-Pakistani conflicts. Both have nuclear weapons and they fight each other on their own soil.
- Indo-Chinese conflicts.
I'm not advocating for it, but I'm fairly certain that even a direct NATO intervention in Ukraine wouldn't lead to a nuclear war. Hell, Putin might even welcome it, it would give him a good excuse for the poor performance of his troops.
That power accumulation is inevitable when modern technology is used by humans, or aliens, or AI, or anything with the instinct of survival and self-replication, argues anti-technologist Ted Kaczynski. See the book Technological Slavery, free on internet archive.
>I want people to stop this hyper-collective power accumulation that is doing a remarkable amount of damage.
So why don't you get a bunch of people together with like minds that believe in the same thing to form a voting/power block of your representative views...
Oh, oops.
I'm having an extremely hard time not being snarky to the complete unawareness you have in this post. It really represents the peak ability of libertarians to ride the coattails of an already established power system while being blissfully incognizant of it.
Yes, there is also a huge mismatch between what politicians want from "science" and what "science" can provide.
Any legitimate scientific field is in a constant faction war on various issues, and making serious political decisions based on that is insane. The consequence is various highly suspect policy decisions which were essentially based purely on some "scientific" subfaction gaining political power and enforcing its beliefs. The effects of these decisions are usually, likely by design, impossible to measure and subject to an enormous number of confounders. "Success" and "failure" thus become largely up to chance, and a "consensus" emerges which is in truth largely based on coin tosses and more or less meaningless.
Yes! In the US particle physics community, they came up with a solution to this. Every ten years, the field as a whole gets together and sets an agenda that self-prioritizes funding, before making a case to the national funding agencies and Congress. The idea is that if the high energy folks are competing with the neutrino folks, and things get nasty to the point where people on each side are saying stuff like "neutrinos suck, here's why you shouldn't fund them and fund us instead!"... all the suits are going to hear is "physics sucks, here's a bunch of reasons we shouldn't fund them".
The process/report is called P5 (particle physics projects prioritization panel), and it's actually happening right now to form the coming decadal report.
> making serious political decision based on that is insane
Levitt, of Freakonomics fame, sometimes talks about this on his podcast. In his experience, politicians make the decision first and seek supporting scientific evidence second.
I don't think that is really relevant. You can't decentralize the decision to (e.g.) increase or decrease criminal sentence length by ten per cent, which is why we need good economic studies to estimate its effects. In general, many policy choices cannot be decentralized away. Also, even if you decentralize them, people still want to know what their choices will do. Suppose I'm considering getting a divorce and I want to know how it will affect my children. Is everyone in this position supposed to do their own literature review? Or just "use their own good sense" and guess what the effect will be?
> You can't decentralize the decision to (e.g.) increase or decrease criminal sentence length by ten per cent.
Sure we could; different laws in different locations; let people see what works. That is how the world does things now - note that different countries have different legal codes, and by experiment everyone comes to a sort-of consensus on what works, with lots of variance between jurisdictions when people don't agree.
That diversity of law can be re-done at any practical scale. I quite like the idea of different cities having different laws (oh, for the world to have a few more Hong Kongs in it! And note how well single-city legal entities usually do).
I mean, we already have that. For state crimes we have state sentencing, for federal crimes we have federal sentencing.
>That diversity of law can be re-done at any practical scale.
Eh..... only at an extreme cost. I mean, in a few minutes of actually thinking about the problem, I hope you could see how this would totally fall apart. For example, you live in BumFkt, Nebraska and sell widget X. Totally legal in BumFkt. Your neighbors in Timbuktu, Nebraska order X on your website and you ship to them. Oops, you've both committed felonies in Timbuktu, and now if you ever go there you'll be arrested on sight and imprisoned.
Typically every state constitution in the US holds itself as the highest authority in that state. Cities can make their own regulations, but the state is the final arbiter in setting the limits to their power, to keep petty squabbling shit from damaging the citizens of said state. After that the federal government can step in, and quite often has, as some particular states love to violate human rights depending on the minority group you're in.
>You can't decentralize the decision to (e.g.) increase or decrease criminal sentence length by ten per cent
You absolutely can. It also exists right now in the form of federalism, where people can, to some limited extent, choose their own legislation and policies.
Both are poor at yielding useful results for social policy.
"Charter cities" and equivalents will give you all sorts of results that don't generalise (judging by the number of businesses that relocated their tax domicile to this underpopulated area, a 1% cut in corporation tax could lead to a 1000% rise in GDP per capita) and "polycentric law" won't really resolve any legal questions other than "in which circumstances can x persuade/force y to accept arbitrator z's decision"
Not sure why you ignore that a lot of "regulation" also came from the private sector, because they didn't want to deal with stuff going wrong all the time, so they self-regulated. People on their own do decide that there is stuff they don't want to leave to chance.
Also, your proposition means people need to be able to check out of society proper - good luck with that. Otherwise someone else will need to pick up the tab for those "own decisions".
People often decide they don't want to leave competition to chance. Regulatory capture is frequently used to reinforce entrenched power. Most recently, Sam Altman of OpenAI fearmongering for AI regulation, since OpenAI is currently in the lead. If he agreed to shut down OpenAI for the greater good, he'd be a bit more credible.
Or I should say, such rules can only exist where natural monopolies exist; any competitive market will descend into chaos as the number of entrants increases and they quickly defect from any private rule set.
If this were really the optimal solution to all problems we would already see it not just at the level of human society, but all over nature as well. And yet we don't. Group behavior is extremely powerful. It might be the case that atomized individuals will find "truth" faster, but a collective can leverage truth much more efficiently when it is found. Not only that, but nature is replete with organisms for which group behavior is clearly an advantage.
I think it's interesting that you say, ultimately, that the rational response can't work because of human nature. But if that is the case, then it can't be the rational response. The rational response would be the optimal response that takes human nature into account. I can't say I know what that is, but I doubt it's anarchy.
If you really think that letting people "take their own risks as far as we are able" is the answer, then I'd love to take my own risks by ignoring all traffic laws.
And if, as you claim, "the truth is a rare and delicate flower which we cannot reliably identify" then I'll even start a giant Simpsons-style tire fire in my backyard. Clearly it's unpossible to identify the hazards of such an enterprise.
The problem with trying this for scientific investigations is that funding is the fundamental bottleneck. Labor is the other bottleneck, and it's one that is fundamentally at odds with the entire concept (as scientifically trained laborers can't work on their own ideas, but must work on the ideas they are employed to work on).
I agree as long as it doesn't endanger others, but when it does you start to have a more complicated ethical problem.
E.g. I'm generally in favor of at least a pressure to vaccinate for vaccines that are effective at reducing spread, but less so in cases of vaccines that are not sterilizing. In that case you're only taking a risk with your own health but are having a negligible effect on the overall spread of the disease.
Things also get more complicated when dealing with minors, such as when parents refuse medical treatment for minors due to religious (or "new age" or political) reasons.
Sure, single high-quality studies can be better than a large number of low-quality studies. But a single study ultimately is just that, regardless of how well-done.
There's too much that can be hidden in a single study for me to trust them completely. Even when well intended, there's all sorts of unintentional biases or design problems that lead to potential issues.
I've seen too many "well-designed" studies fail on replication for me to trust them too much. For me the gold standard will always be adversarial replication.
Even small studies in aggregate can tell us something about what moves effects around.
Appeals to single gold standard studies tend to become appeals to or assertions of authority, which is a poor way to do rigorous science.
It's nice to have a meta-analysis. I would usually prefer the meta-analysis for an effect size estimate.
But I also generally live by the saying "you never see a clear picture of Bigfoot". If a phenomenon of serious interest is to be considered proven, there should be at least one conclusive high-quality study alongside any meta-analyses.
As with many supposed dilemmas, it's nice to have both.
Replication is always very important. But for me the "gold standard study" is mechanistic, not statistical. It explains how a process putatively works, not merely that an effect was seen.
Yeah, well in my field, oncology, meta-analyses are somewhat irrelevant. As you can imagine, the bar for completing a phase 3 randomised trial in oncology is pretty high. Meta-analyses are mostly there for trainees to notch up a paper.
Another fine example of the author's point is the ivermectin in covid meta-analytic nonsense (which I cannot even bring myself to link), where a bunch of small rubbish trials are meta-analysed into a 'flawless' body of evidence while double blind randomised trials are impugned.
Disagree. I change practice on Monday after a single quality trial. Pick up any society guideline, only a small amount of the recommendations rely on meta-analyses. Look at immunotherapy or antibody drug conjugates, revolutionary therapies that arrived one trial at a time.
For example, the majority of diagnostic testing guidelines are based on meta-analyses.
FYI since you asked to pick one, let's take a look at NCCN. Not sure where you're drawing your conclusion that most evidence is from phase III RCTs:
"[Q] Quality and quantity of evidence refers to the number and types of clinical trials relevant to a particular intervention. To determine a score, panel members may weigh the depth of the evidence, i.e., the numbers of trials that address this issue and their design. The scale used to measure quality of evidence is:
2 (Low quality): Case reports or extensive clinical experience
1 (Poor quality): Little or no evidence"
"The overall quality of the clinical data and evidence that exist within the field of cancer research is highly variable, both within and across cancer types. Large, well designed, randomized controlled trials (RCTs) may provide high-quality clinical evidence in some tumor types and clinical situations. However, much of the clinical evidence available to clinicians is primarily based on data from indirect comparisons among randomized trials, phase II or non-randomized trials, or in many cases, on limited data from multiple smaller trials, retrospective studies, or clinical observations. In some clinical situations, no meaningful clinical data exist and patient care must be based upon clinical experience alone. Thus, in the field of oncology, it becomes critical and necessary (where the evidence landscape remains sparse or suboptimal) to include input from the experience and expertise of cancer specialists and other clinical experts."
From an article:
"We identified 1124 potential systematic reviews from our survey of the 49 NCCN guidelines for the treatment of cancer by site. Five NCCN guidelines did not cite any systematic reviews."
Well you discount the most important thing in the first line. 'Cutting-edge' is a funny way of saying 'most effective', as if it were somehow irrelevant.
I didn't say most evidence is from phase III RCTs, particularly if you include everything that happens in oncology as the denominator, only that meta-analyses were not that relevant. Most of the critical patient facing interventions have the backing of good quality trials, at least where it is reasonable and possible to do a trial. Also one of your citations is seemingly casting doubt on the value of meta-analyses in oncology, so somewhat confused about your point.
That paragraph from NCCN is quite interesting. It is really describing medicine in general, and belies the fact that oncology probably has one of the strongest evidence bases across all medical fields. Take for example how many stents cardiologists inserted long after contradictory evidence was available, or how many pointless back operations have been done, or how many people have sat through fruitless psychoanalysis.
> Well you discount the most important thing in the first line. 'Cutting-edge' is a funny way of saying 'most effective', as if it were somehow irrelevant.
I'm discounting it for this discussion because your argument is:
"meta-analyses are somewhat irrelevant" and "Meta-analyses are mostly there for trainees to notch up a paper." which is completely false.
Note a single clinical trial is still only considered "good quality" while multiple trials or meta-analyses are considered "high quality".
To address this new point you raised: when something has very promising early results, we start using it in treatment (e.g. 3rd-gen TKIs in adjuvant NSCLC), but until this weekend we had no 5-year OS data for adjuvant use.
It's entirely possible something one thinks is "most effective" is later proven to not be (gen 1-2 TKIs, HIPEC, etc).
> That paragraph from NCCN is quite interesting. It is describing medicine in general really, and belies the fact that oncology has probably one of the strongest evidence base across all medical fields.
> Take for example how many stents cardiologists have inserted long after contradictory evidence was available, or how many pointless back operations have been done, or how many people have sat through fruitless psychoanalysis.
I'm not sure what point you are trying to make by addressing other specialties.
The National Comprehensive Cancer Network, comprised of multidisciplinary experts from 33 of the leading cancer centers in the country, is unequivocally the authority in oncology and is incredibly well respected. I'm going to defer to their opinion on the quality of evidence available and the hierarchy of evidence.
> Also one of your citations is seemingly casting doubt on the value of meta-analyses in oncology, so somewhat confused about your point.
The JAMA article states that the methodology in many studies does not meet NCCN/PRISMA criteria, which is well known; this says nothing about the relative value of good-quality meta-analyses (which are far more common now with the PRISMA update).
I'm really not sure why you think systematic reviews are irrelevant; this is a very radical viewpoint that I've seen no evidence for. A good meta-analysis > a good RCT. The reality is that good-quality studies of both types are uncommon in medicine, but the goal is still to use good SRs.
I don't think it is false. I can only speak of my experience as an oncology healthcare provider. I spend many hours each week digesting the literature, and <5% of that involves meta-analyses. In the multidisciplinary meetings I chair, we rarely discuss evidence from meta-analyses, but we are always talking about clinical trials. The NCCN guidelines were useful when I was a trainee, but otherwise they are too US-centric, and they are always out of date due to how infrequently they are updated. This is why ASCO keeps issuing rapid updates in breast cancer, for example (https://old-prod.asco.org/practice-patients/guidelines/breas...). There are 2 such updates this year already. If the primacy of meta-analyses were so great, why would they bother to issue rapid updates based on what you class as low-quality evidence?
But to give a concrete example, the problem with meta-analyses is well illustrated in the recent EBCTCG meta-analysis published in the Lancet, a top-tier journal. This involved over 100,000 patients, and explored concurrent chemotherapy regimens in breast cancer. The problem is that such regimens are not used anymore. The authors acknowledge in their own conclusion that this massive meta-analysis contradicts their own previous meta-analysis showing the superiority of sequential therapy. What exactly does one do with this? How does this help a patient get the right therapy? The treatment of various breast cancer subtypes has also evolved so much that the trials they meta-analyse are mostly obsolete. Hence my point, that meta-analyses are just not that useful in oncology, even truly massive, well-conducted ones published in prestigious journals. So it is not so simple as meta-analysis > RCT; that is merely lazy dogma. I find it hard to believe that anyone actually treating cancer patients would hold this view.
Of course most meta-analyses in oncology are not 100,000 patient behemoths conducted by consortia. They are much smaller studies, which usually don't bother to get patient level data, and just copy numbers from tables in the original papers while running through the Cochrane systematic review template.
And yet, here I am dubbed 'radical' at the bottom of a comment thread on Hacker News. Unfortunately the dogma around systematic reviews and EBM has exceeded its usefulness by quite some margin. The meta-analytic method was developed by psychologists trying to compile evidence about extrasensory perception, of all things - an inauspicious beginning if there ever was one for the supposed cornerstone of medicine.
Speaking of ASCO, their methodology for recommendations is conducting their own systematic review and they start with Cochrane.
>Upon approval of the Protocol, a systematic review of the medical literature is conducted. ASCO staff use the information entered into the Protocol, including the clinical questions, inclusion/exclusion criteria for qualified studies, search terms/phrases, and range of study dates, to perform the systematic review. Literature searches of selected databases, including The Cochrane Library and Medline (via PubMed) are performed.
>After the systematic review is completed, a GRADE evidence profile and summary of findings table is developed to provide the guideline panels with the information about the body of evidence, judgments about the quality of evidence, statistical results, and certainty of the evidence ratings for each pre-specified included outcome.
Rapid criteria:
The criteria for a rapid recommendation update are:
1. that the identified evidence is of high methodological quality,
2. there is high certainty among experts that results are clinically meaningful to practice,
3. the identified evidence represents a significant shift in clinical practice from a recommendation in an existing ASCO guideline (e.g., change from recommending against the use of a particular therapy to recommending the use of that therapy; or a reversal to a recommendation) such that it should not wait for a scheduled guideline update.
A systematic literature review focused on the updated recommendation will be conducted by ASCO staff. Specifically, the immediate past guideline literature search strategy will be updated and filtered by search criteria specific to evidence informing the recommendation under review. All identified evidence will be quality-appraised using the GRADE methodology as outlined in Section 10 of this ASCO Guideline Methods Manual. The procedures used to draft the rapid recommendation update and deliberations by the expert panel will follow routine methods for all guidance products as outlined in this ASCO Guideline Methods Manual.
ASCO position on meta-analyses:
All these reasons can be used to make the excuse that a systematic review and meta-analysis should not be done, especially if resources aren't available to hand search all journals of all languages, etc.
The solution is not to avoid doing a systematic review or meta-analysis, but to reveal to the reader what shortcuts were taken (e.g., we included only peer reviewed published studies, or restricted our eligibility to studies published in English). This shows transparency, and then the readers can decide how important this problem is in applying the results of your meta-analysis to their situation.
Myths about meta-analyses
• A literature-based meta-analysis is not worth doing
So systematic reviews and meta-analyses are no longer useful?
That you're arguing with anecdotes is proof itself of why EBM is important.
>There are 2 such updates this year already. If the primacy of meta-analyses were so great, why would they bother to issue rapid updates of what you class as low quality evidence?
"A targeted electronic literature search was conducted to identify any additional phase III randomized controlled trials in this patient population. No additional randomized controlled trials were identified. The original guideline Expert Panels reconvened to review evidence from EMERALD and to review and approve the revised recommendations."
Where is the meta-analysis? Where is the funnel plot? What are you even arguing about? They issued an update because of one trial.
Here is another one from June 2022, a major change to how one type of breast cancer is managed, in the methods:
"A targeted electronic literature search was conducted to identify phase III clinical trials pertaining to the recommendation on immune checkpoint inhibitors in this patient population. No additional randomized trials were identified. The original Expert Panel was reconvened to review the key evidence from KEYNOTE-522 and to review and approve the revision to the recommendation."
Where is the meta-analysis? Again, what are you trying to argue? They issued an update because of one trial.
There are two updates this year, one about HER2 testing, and one about ESR1.
Perhaps you’re unaware, but not every systematic review is or can be a meta-analysis. Meta-analyses can be and are also included in systematic reviews released by ASCO.
The criteria for a rapid update are listed in the comment you replied to; I have no misunderstanding of why they publish them, but you are trying to misrepresent this as deviating from EBM.
Your words were: “Unfortunately the dogma around systematic reviews and EBM has exceeded its usefulness by quite some margin.”
I’m not sure what point you’re trying to make, but seeing as you’re attempting to argue against ASCO’s own position and methodology with anecdotes and conjecture, based on one specific area of breast cancer treatment and extrapolated to the entire field of oncology, I’m not sure there’s a point in engaging further.
You can refer to the full ASCO statement I linked which discusses meta-analyses for their arguments.
The president of Médecins Sans Frontières, Rowan Gillies, gave a speech once. He had a drawing of a patient and the doctor inside a circle. His comment was: 'Other people keep trying to climb into this circle. They can all fuck off.' Excuse his profanity, but I offer this response to you, who know nothing about what I do and understand nothing about quality healthcare.
Of course if you ask a clinician whether they should be regulated more, they will say no. The fewer stakeholders involved in the relationship between patient and doctor, the more power the doctor has. The truth is there should be others involved, at the very least the government health agencies.
Actually doctors would be universally happy to get more help and support to deliver better healthcare. That you don't talk about that is quite telling, or is that what you mean by 'regulation'?
Casting the patient doctor relationship in terms of power dynamics is a bogus sociological construct divorced from reality. The true division of power is between those who fund healthcare and those who receive it, I would start there if you think things need to be improved.
Yes - regulation is not the same as better support for the healthcare system and I am surprised you do not get that distinction. Regulation is about putting rules into place to prevent fraud, malpractice and anything where the clinician may abuse the system to the detriment of patients or other healthcare professionals around them. In addition to regulation, the government should also support the healthcare system through better funding and (patient and clinical) education. It can be true that both more regulation and better support are needed.
To say there is no power dynamic between patient and doctor is wildly ignorant. There obviously is one, even if the patient is not paying the doctor directly for their services. Not sure when (if?) you went to medical school, but this is taught to all medical graduates. The field of medical ethics exists for a reason.
What if the patient wants other people in the circle? (I do, when I'm a patient. I've never really trusted doctors I've had due to my perception of competing motivations, and they've given me reason for that distrust more than once).
Sorry to hear you have suffered poor quality interactions with doctors. Being honest, if there is no trust in the relationship between patient and doctor, then nothing else matters much as the experience will be poor on both sides.
Patient can of course bring whoever they want into the circle. The problem is the intrusions that neither healthcare provider nor patient want.
From the point of view of the experienced patient it's not a single circle, it's a Venn diagram with the patient at the center of multiple overlapping circles.
But sure, uninvited third parties shouldn't butt their head in often, except for the occasional regulator.
I can't speak to parent's practice but the cancer centers I've worked at follow the NCCN guidelines (apart from patients enrolled in trials, although this is also a NCCN recommendation) and many cases are reviewed in multi-disciplinary conferences to homogenize practice within an institution.
Although guidelines are just that (i.e. not mandatory to adhere to) I really doubt an oncologist in US/Canada "practices as they please".
If you're going to try to construct a meta-analysis of many studies, the first step is going to be reading the materials and methods sections of all the included papers, the goal being to ensure that the data is actually comparable within some (small) margin of error.
Disassembling the materials and methods section is typically the most difficult part of reconstructing a paper's experimental approach, as authors are often less than complete, may refer to other (obscure) papers, or rely on concepts so well known in the field that they're taken for granted and not explicitly described.
This entails a lot of work, and some people are careful and some are sloppy. This is also true for the construction of large datasets from individual scientific reports that are then used to produce analytical papers (this is not a 'meta-analysis') based on that dataset, such as in genomic studies, climate studies, etc. The solution here is again to carefully curate the databases and understand their strengths and weaknesses (quality of genomic sequencing matters, for example, and satellite data is not the same as weather balloon data in terms of climate).
So if you try to do a meta-analysis of analytical studies based on databases constructed from the results of many individual scientific reports there are all kinds of pitfalls, so it's not surprising that the value of such meta-analysis can be very questionable, unless the authors are extremely careful and clear about their criteria for including or excluding papers from their study.
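The pooling step itself is mechanically simple; the hard part described above is deciding which studies are comparable enough to enter it. A minimal sketch of standard inverse-variance fixed-effect pooling (the study effect sizes and standard errors below are made-up numbers for illustration):

```python
import math

def pool_fixed_effect(effects, std_errors):
    """Inverse-variance fixed-effect pooling: each study's effect
    estimate is weighted by 1/SE^2, so precise studies dominate."""
    weights = [1.0 / se**2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log-odds-ratio estimates from three studies.
effects = [-0.30, -0.10, -0.50]
ses = [0.10, 0.20, 0.40]
pooled, se = pool_fixed_effect(effects, ses)
print(round(pooled, 3), round(se, 3))  # → -0.271 0.087
```

Note this naive pooling assumes every study estimates the same underlying effect; when the included studies are not actually comparable (the pitfall raised above), the pooled number is precise-looking but meaningless.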
Read enough of them, however, and it becomes easier to identify when the authors are trying to sell their predetermined conclusion via cherry-picking studies that support their position. Unclear methods sections are a good warning sign.
There is no simple way to do this. If it were easy to identify trustworthy research we wouldn't be having this discussion. From a slightly broader point of view, I'd argue that the idea that things can be easily or even meaningfully sorted into trustworthy and non-trustworthy is epistemologically facile, at the very least.
Unfortunately there's a wide range between "obviously trustworthy" and "obviously bullshit".
Many bad studies require a lot of domain knowledge to understand why they're bad. Some controversial studies were widely discredited by the rest of the field, but actually turned out to be correct. Many studies are worth doing, but shouldn't be used to draw any meaningful conclusions (such as pilot studies). And finally, there are some studies that seem legitimate, but are actually just lying about their data.
And that's before you get into ideological debates and political problems. We like to think scientists are bastions of rationality, but they're just as susceptible to petty infighting as any other group of people.
A database of trustworthy studies would be an incredibly valuable tool. Unfortunately, it's a tool that is nigh impossible to actually create.
> Many bad studies require a lot of domain knowledge to understand why they're bad.
I went to grad school after some years practicing in the discipline. During those earlier years I neglected to introduce a Kozak sequence the first time I constructed a protein expression cassette. These sequences, which sit in the 5' region immediately adjacent to the protein open reading frame, are almost always required for decent translation of mRNA to protein in eukaryotes. And minor modifications to the Kozak sequence can have varying impact on protein expression. Since then this has always been top of mind when doing such design.
So in grad school in a class we had various papers assigned to read and answer questions on. One peer-reviewed paper split a 5 prime untranslated region (an mRNA regulatory element) into two parts to determine which parts were most necessary for regulating expression of the protein. They placed these two parts directly adjacent to the protein open reading frame. And unfortunately, for the first half of the untranslated region, they neglected to keep the native Kozak sequence. Not unexpectedly, there was almost no protein expression from this. They concluded that the first half of the untranslated region was both necessary and sufficient for down regulation of expression.
I pointed out their fatal flaw and seriously impressed my professor who had only done a cursory reading of the paper before assigning it. I don't understand how this passed peer-review, but it did. I guess the reviewers didn't look at the mRNA sequence in detail. If they had, they would have noticed the altered Kozak site and would have required a further experiment with the native Kozak sequence.
The study was actually fine in basic design, the author just made an easy mistake, and no one caught it.
You can trust some claims from a study but not other claims. Maybe a database of trustworthy claims with the studies and commentary on why they support the claim. With error bars!
I'd love that, but determining trust is going to be very hard. For example, where would you put a study that is 100% factually accurate, but yields intentionally misleading statistics for political reasons?
1. You simply can’t do this in many fields, including wide swathes of health research. It’s a non-starter.
2a. Even if you could, see the OP’s point about research design. You need to understand the methods.
2b. Even if you could, the raw data doesn’t help you with publication bias. Your selection of raw data is going to be heavily biased in favor of the data for the papers which managed to be published. It’s not going to contain the data for the results which never saw the light of day due to failure to reach statistical significance.
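The mechanism in point 2b is easy to demonstrate numerically. A toy simulation (all parameters arbitrary) of a true null effect, where only nominally significant results get "published": the average published effect is biased well away from the true value of zero.

```python
import random
import statistics

random.seed(0)

def simulate(n_studies=2000, n=30, true_effect=0.0, publish_only_significant=True):
    """Run many small studies of a null effect; 'publish' a study only
    if its mean is nominally significant (|t| > 2). The mean absolute
    published effect is then inflated relative to the truth (zero)."""
    published = []
    for _ in range(n_studies):
        sample = [random.gauss(true_effect, 1.0) for _ in range(n)]
        mean = statistics.mean(sample)
        se = statistics.stdev(sample) / n**0.5
        if not publish_only_significant or abs(mean / se) > 2:
            published.append(abs(mean))
    return statistics.mean(published)

# Filtered literature shows a large "effect" that doesn't exist;
# the unfiltered literature correctly hovers near zero.
print(simulate())                                  # roughly 0.4
print(simulate(publish_only_significant=False))    # roughly 0.15
```

No amount of sharing raw data for the published studies fixes this: the bias lives in which studies exist in the record at all.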
What prevents it from becoming the norm for the "statistically insignificant" data to be published alongside the rest?
Even point 2a seems weak. That must be true of any research... someone who did not know or did not understand the methodology would fail to make sense of any data provided, and that's on them, not on the research itself.
Point 1 is really the one I don't understand. Is it medical/health privacy that prevents this? Surely no one expects unanonymized data to be provided. This is about teasing out bad research, not indicting anyone for the falsified sort.
>>What prevents it from becoming the norm for the "statistically insignificant" data to be published alongside the rest?
The incentives of the researcher. You don’t want to be the guy whose lab produces ten failures and one success, even if that’s the reality of every lab.
The guy who doesn’t share the null results will look better than the one who does every time. Deans and chairs only know how to count one thing in tenure decisions - lines on your CV.
To put this another way, nothing stops this kind of sharing right now, and yet we have a well known file drawer problem.
If the proposed solution involves “oh let’s just completely change the social norms, despite the incentives of the individuals involved,” then it’s a non starter. The kind of sharing being proposed is that kind of idea.
>>Point 1 is really the one I don't understand. Is it medical/health privacy that prevents this? Surely no one expects unanonymized data to be provided. This is about teasing out bad research, not indicting anyone for the falsified sort.
What you consider “anonymized” is generally not what regulatory agencies will consider anonymized and the best sources for health data at scale are always governments. (Frequently they are the only sources.) The agencies are too risk averse about the potential harms from data releases, but that’s the reality. You can’t share it. You can’t even have it on your own laptop, encrypted or not.
Interesting data that lets you answer complicated questions cannot be made public. The more interesting and the more complicated, the harder the underlying data is to access and if you say you want to share it publicly they’ll laugh you out of the room. You’ll never make it through the already excessively burdensome IRB process.
The point I am trying to make in 2(a) (maybe inexpertly) is that all the data sharing in the world isn’t going to solve any problems when the people looking at to do their own “replications” are uninformed about methods. Data sharing on its own will not solve this.
That's irrelevant though. What can happen is that others who understand the many methodologies can apply them and either retest the same study or come to different finding with other methods and compare. At least the base data is the same.
TFA is a caution against accepting the results of a meta-analysis in a field where the typical research quality is low, because you can't realistically correct for all the underlying problems. I personally think you should be skeptical of meta-analyses in any field, because it is very, very hard to do one "correctly".
> accepting the results of a meta-analysis in a field where the typical research quality is low.
I can only speak to medicine (which is filled with low quality research) but this is a non-issue in my field. Who ignores quality in a SR?
Low quality research leads to a low certainty SR. See Cochrane for an example of how to do this correctly.
A lot of their reviews use low-quality studies because that’s all that exists; it’s very much disclosed and transparent.
> I personally think you should be skeptical of meta-analyses in any field, because it is very, very hard to do one "correctly".
The point of PRISMA was to make it easy. In diagnostic accuracy we also have QUADAS-2 and STARD.
It’s really not that hard, the point of EQUATOR network and researchers like P Bossuyt & co was to make things easier. Any medical SR would include a PRISMA statement these days.
One should be skeptical of any research and critically appraise it, but systematic reviews aren’t necessarily “harder” to do than primary research studies. They also carry more weight so we have to be able to trust the methodology (hence PRISMA).
Many groups do this correctly…
If you mean the old school assumption that "pooling low-quality studies in a SR automatically makes it high-quality evidence" that's not really a thing anymore.
The problem is that a lot of primary research is at high risk of bias. So coming up with a low-risk SR is hard; making the assessments is not.
I'm not going to defend the content, but I am going to defend the format.
Is the blog post overly long? Or are we just accustomed to short-and-juicy articles aimed at the declining attention spans of the twitter era?
It's about 3700 words, which is about the length of a high school term paper. Substack lists it as a "17 minute read". Firefox says 21-24.
Reality has a lot of detail and for something that actually aims to get technical rather than just skimming the surface, it often takes several paragraphs to analyze different facets of a complex question.
In a different era, this article would have been considered an appropriate length for a magazine.
I think intelligent people refusing to acknowledge that science is not a field devoid of people with imperfect ethics, bias, financial incentives, or a need to justify their position has immunized me from blindly trusting science. Many years ago a research professor at a highly regarded university basically described to me how funding worked at their university. Essentially the most important criterion was whether their research had some promise of results aligning with whoever was giving the university funding. The second was how well you could navigate the politics of the university and the research funds director. This person expressed deep dissatisfaction that they had promising research in the cancer field that they could not get the lab director to fund because of the first two points.
I mean, I've seen up close talk of how to "adjust the dataset" to fit a certain pre-ordained result for a medical research paper on children. I wouldn't discount that it happens in other fields.
But there is freedom of religion in most countries; one can choose what to believe in.
I’ve been reading into autism lately; and a brief thought about “intelligent people” has popped into my head.
Intelligence is just a tool, that’s used to achieve the underlying organism’s goals. For example, a crow’s intelligence is used to keep out of danger and to find food. Most neurotypical people use intelligence to gain resources like money, status, and power to fulfill biological imperatives like being part of and respected by their “tribe.” Whereas some autistics will not care about this sort of thing, and be biased more towards the truth, rather than what’s socially most advantageous. For example: Stallman. He has little concern for social appearances and such, so he is free to express something close to “truth,” rather than socially-rewarding “truth.”
At that point, it becomes fairly depressing how little research and “knowledge” attainment is actually geared towards “truth,” and not personal enrichment.
—
And a tangent. I think what separates the outcast autist and the socially well-regarded autist is confidence and self-esteem. If you act and present yourself like a low-status person, you will be treated like it, as people regard your “offness” and associate it with low-status. Whereas if you’re self-assured and act like what you’re doing is innately correct, people will pick up the body language and signs, to think that you’re actually “right” and should accept it.
Sort of like those methed-up cult leaders who are utterly insane, but project unwavering self-assuredness, which people naturally gravitate to.