
Based on your comment, the effect could be larger as well as smaller.

All research is met on HN by people who know better and will tell you why it's flawed. There isn't a greater collection of expertise in the history of the world than on HN.

Edit: I meant to add: What value can we find in this research? It wasn't published as scripture, the perfect answer to all our problems. It's one study of some interesting events and data; what can we get out of it?



> Based on your comment, the effect could be larger as well as smaller.

The reality of any underpowered study could always be “larger as well as smaller”. This statement doesn’t add anything to the conversation.

The mistake is pivoting around poorly structured and underpowered research.

> All research is met on HN by people who know better and will tell you why it's flawed.

This is a misunderstanding. People who know how to read studies will always be aware of the limitations.

There’s a difference between saying “everything is flawed” and pointing out the limitations. Most early research comes with significant limitations like small sample sizes or large confounders. You have to understand these in conjunction with the results to know how to interpret them.

There’s a cynical approach where people see discussion of limitations, don’t understand it, and instead go into a mode where they think it’s smarter to ignore all criticisms equally because every paper attracts criticisms.

This is just lazy cynicism, though. There are different degrees of criticisms and you have to be able to see the difference between something like a slightly underpowered study, and something like this paper where the authors threw a lot of regressions at a lot of numbers and kind of sort of claimed to have found a trend.


In this case, it only takes a few seconds to find multiple studies confirming the effect.

For example

https://onlinelibrary.wiley.com/doi/10.1111/ina.12042

https://onlinelibrary.wiley.com/doi/10.1111/j.1600-0668.2010...




99% of the time it's smaller. Saying there is an equal chance of it being smaller or bigger is also false. It could be bigger, but the probability strongly favours it being smaller.


Conditional on “the study being published and getting attention” the real effect is likely smaller and not larger.

E.g., if you assume there is a real effect plus a lot of noise, then given that the study has been published, the noise will more likely have acted in the favourable direction.

IMHO, given the relatively large size of the effect, it seems quite likely that the noise part is in fact potentially large (this is much more subjective), which makes it less clear that there is measurable signal here at all. I’d have to see a lot of replication or a very strong explanation of the underlying mechanism to believe the magnitude of the effect, but I will very easily believe the sign (with a small magnitude).
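This selection effect (sometimes called the winner's curse) is easy to demonstrate with a toy simulation. Every number below is a made-up assumption for illustration: a small true effect, noisy per-study estimates, and a filter that only "publishes" estimates large enough to get attention.

```python
import random
import statistics

random.seed(0)
TRUE_EFFECT = 0.2   # assumed small real effect (hypothetical)
NOISE_SD = 0.5      # assumed per-study measurement noise
N_STUDIES = 100_000

# Keep only the studies whose estimate clears a "notable result" bar.
published = []
for _ in range(N_STUDIES):
    estimate = TRUE_EFFECT + random.gauss(0, NOISE_SD)
    if estimate > 0.8:  # only large estimates get written up and shared
        published.append(estimate)

# The average published estimate sits far above the true effect of 0.2,
# because conditioning on publication selects for favourable noise.
print(statistics.mean(published))
```

The point is not the specific numbers but the direction: the harsher the attention filter relative to the true effect, the larger the overestimate among the results you actually see.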


> All research is met on HN by people who know better and will tell you why it's flawed

It is almost certainly flawed, and it is probably wrong: https://journals.plos.org/plosmedicine/article?id=10.1371/jo...

If you are discussing research at all it is important to discuss the flaws too. The alternative I can see would be to take every published paper as proven true even though we know this is not the case.


Science succeeds when people lean towards the side of cynicism instead of optimism. Scientific research should be read critically.


Critical thinking and skepticism are good, but much of what happens on HN is not that.

Thinking critically includes, most of all, finding value - you need to think critically (and skeptically) to avoid assigning value to things that don't have it, but you must find value. The goal is to build knowledge - just like the study author needs to find knowledge among flawed data, you must find knowledge among flawed studies - and they are all flawed, of course.

Focusing on the flaws and trying to shoot down everything is just craven recreation.


Science is a long game; it's not sales, where you need to close right now. Extreme results will attract replication attempts, which in turn cost a lot of funding. That is money and time, sometimes a whole person's career.

This money and time is taken directly away from funding other, potentially more worthy or more likely to be correct studies.

There is no point of looking at every (flawed) study in the most positive way, unless you have unlimited time and money to pursue every avenue of research.

Often (not always), the studies most heavily promoted in the news and in business or politics are really not the best research, and other, less visible but more solid research gets ignored in favor of what's popular or what has had good marketing.

This is very frustrating for people doing solid good research, because every so often someone else will come along with wild, exaggerated claims and very little data to back it up, and then gets funding for it.

It takes literal years away from good science just because someone markets and speaks well.

Which is fine in business, but in science this is not something "the market" can or will correct for well, simply because the timespans are so long.


> There is no point of looking at every (flawed) study in the most positive way

This line epitomizes the nonsense in the discussion. I didn't say every study, you can't know it's flawed without seriously examining it, and I didn't say in the most positive way at all.

By using these exaggerations, you damage any serious discussion - you give people nothing to respond to except your emotional state.

What I said was, the point is to build knowledge, and so the way to examine research is to find the valuable knowledge - which includes evaluating the accuracy, etc. of that knowledge. There's no other point to it - we're not awarding tenure here, so there's no point in keeping some overall score. We just want to learn what we can.


I did not say this study is flawed or that every study is flawed. And I have made no exaggerations or said that you personally look at it the most positive way.

Reading comprehension is important, and especially important in a discussion like this.

I do however really mean that some studies are not worth looking at all in more detail: if the methodology is flawed, the results are meaningless. At most the premise of such a hypothetical (not saying this one necessarily!) study could be used as an idea for further research, but not to build knowledge on or derive knowledge from the results.


Are there some examples of "non-flawed" research that is getting ignored? Because (as a non-academic) I feel like I'm seeing the same HN attitude that OP describes. No study is good enough for HN. There are always nitpickers who come out of the woodwork. For every science article about some study or finding, the top comment is always a variation on: "This study is flawed because..." Almost without exception. Also, the standard is so high: a single flaw found is grounds for dismissing the whole study.

My guess is if you raise examples of "good science" the HN peanut gallery will jump in to point out the flaws in that science, too.


> you need to think critically (and skeptically) to avoid assigning value to things that don't have it, but you must find value.

This isn’t critical thinking.

This is toxic positivity.

It’s okay to admit that some studies don’t have value to add. If you don’t accept this, you’re going to be tricked by a lot of people trying to get your attention with bad data.

Being able (and willing!) to filter out bad sources, even when they say something you want to hear, is a critically important skill. If you force yourself and others to find something positive about everything then you’re a dream come true to purveyors of low quality or even deliberate misinfo.


lol

> some studies

It's almost every study on HN, not some studies, which you'd understand if you read my comment.


Yes, because most studies that end up on HN are there because they were reported on somewhere as news.

This usually happens in one of these cases:

1. when a paper is extremely good and its results are groundbreaking, or

2. when a study itself claims it has groundbreaking results, or

3. when it's a regular study that's gotten some great marketing/promotion e.g. by their university.

The case of 1. is extremely rare, and even when everyone believed the results and they were peer reviewed by a reputable journal like Science, some of them turned out to be academic fraud that was later retracted.

Most studies that pop up on HN are of types 2. and 3. That's just because otherwise they would not get news attention.

But most studies in general are in category 4: the ones an academic or professional would read going about their daily business / research. These range from terrible, to OK, to really great, but 99% never make the news.

As a (former) academic, I've read lots of papers and like in real life it's usually the people (papers) that get attention who scream the loudest. There are some gems too of course, and it's right to not ignore anything.

But in my personal experience and over time, I've been very right to be very sceptical once a result turns up in the news because of the 3 ways it can get there.

This is amplified even more with papers that base their results purely on statistics, as most experimental studies do. These derive their results from the statistics (sample size, experiment design, etc.), so their power and the probability of their result being correct (what the authors say) are directly coupled.


Focusing on the flaws is vital context for helping ordinary people understand the world. Any given study with a surprising result is likely wrong. Yeah, some of them are going to be right, but you're going to get many false positives and only a handful of studies that replicate to a convincing degree.
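The "likely wrong" intuition can be made concrete with a simple base-rate calculation. The inputs here are purely illustrative assumptions, not figures from this thread: suppose 10% of tested hypotheses are true, studies have 80% power, and the false-positive rate is 5%.

```python
# Positive predictive value of a "significant" finding,
# under assumed (illustrative) numbers.
prior = 0.10   # fraction of tested hypotheses that are actually true
power = 0.80   # chance a true effect is detected
alpha = 0.05   # chance a null effect still yields a "significant" result

true_positives = prior * power            # 0.08
false_positives = (1 - prior) * alpha     # 0.045
ppv = true_positives / (true_positives + false_positives)
print(round(ppv, 2))  # 0.64: roughly 1 in 3 "findings" is a false positive
```

Drop the prior to 2% (surprising results are surprising precisely because the hypothesis seemed unlikely) and the PPV falls below 0.25, i.e. most such "findings" are false positives.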

I've made this mistake time and time again, most recently with vitamin D association studies, and I'm grateful to all the people who urged everyone else to take a wait-and-see approach.


> Focusing on the flaws and trying to shoot down everything is just craven recreation.

No, it's a valuable job to find flaws, because it's much easier to fix and work on known flaws than to stumble in the dark.

Removing flaws and problems is one of the easiest ways to add value.


It's not valuable. People who do this at work are people who have no value to offer, so they try to sound smart (and valuable) by finding flaws in someone else's work. All work is limited and flawed - flaws are easy to find. Add the common hyperbolic statements on HN dismissing the entire study or whole fields of research, and it's misinformation.

The real significance is that things like sample size, to pick a common example here, are easy to understand in a theoretical way, so people apply them to the actual (not theoretical) practice of real research, whose practicalities they don't understand - and they overemphasize them because that's pretty much all they understand.

The first thing they look at in a paper is sample size - and hey, now sometimes they have something to 'contribute'! It's just reinforcing the same misunderstandings in others.

It sucks, a little, to have nothing to contribute, but it's a great opportunity to learn from people who do know.


> All research is met on HN by people who know better and will tell you why it's flawed. There isn't a greater collection of expertise in the history of the world than on HN.

It seemed like a pretty valid criticism. These studies should be taken with a massive pinch of salt because they're fairly uncontrolled.


Installing air filters is an intervention that has a cost and thus needs to be verified and proven. You don't roll such a thing out on a broad scale based on "the effect could be large".


I find it bonkers we have better regulation around growing battery chickens than growing kids.

You don’t need a massive study to find out that kids don’t like suffocating in classrooms.

It’s a bit like mandating reversing cameras on cars. Studies say they don’t make sense economically, but not squishing your kids trumps that.


Suffocating would be CO2 rather than particles?


Things which reach the level of getting on here are basically always outliers. And is the outlier real or a false positive? There's a huge selection bias towards the false positives.


> Based on your comment, the effect could be larger as well as smaller.

Yes, but since we know that there's a huge bias to publish and publicise larger results, you know what way to bet.



Yes, most responses on HN appear to fall into two broad categories:

1. Why this blog post study is flawed and dumb

2. America is bad and shameful.


That's an interesting comparison, how people analyze and evaluate research and how they do the same with public affairs.

I would have said that the point of research is to find the value and build knowledge, while the point of discussing public affairs is to identify problems to fix.

Thinking about it, I'm not sure the latter can't find things that are constructive. But in either field, the exaggerated, dismissive comments/rants are not just a waste but damaging to progress.




