
Peer review is a joke. Peer reviewers don’t look at your data and the programs you used to analyze it. They don’t look at your experimental apparatus, they don’t repeat your experiment, they don’t talk to the subjects you interviewed, at best they can spot obvious “red flags”.

(Of course, in 2023 you should be able to publish your data and all your software with the paper.)



Some of the things that peer reviewers do, in my experience, in biology:

- question whether or not the conclusions you are making are supported by the data you are presenting

- ask for additional experiments

- evaluate whether or not your research is sufficiently novel and properly contextualized

- spot obvious red flags - you seem to discount this, but it's quite valuable

In my experience, the process of peer review has been onerous, sometimes taking years of work and many experiments, and has by and large led to a better end product. There are not-so-great aspects of peer review, but it's definitely not a joke as you characterize it.

I'll add that in biology and adjacent fields, it makes no sense to discount peer review because the reviewers do not repeat your experiment - doing so is simply not practical, and you don't have to stretch your imagination very far to understand why.


I also work in biological sciences research, but I'm more skeptical of peer review than you appear to be. My main criticism is that peer review is an n=2 process. Why not publish an unreviewed pre-print in bioRxiv and explicitly solicit constructive, public feedback directly on the pre-print on bioRxiv? I envision something similar to GitHub where users can open issues and have nuanced discussions about the work. The authors can address these issues by replying to users and updating the data and/or manuscript while bioRxiv logs the change history. Journals can then select sufficiently mature manuscripts on bioRxiv and invite the authors to publish.

This would massively increase the number of people that review a manuscript while also shortening the feedback cycle. The papers I've published have typically been in the peer review process for months to years with just a handful of feedback cycles of sometimes dubious utility. This can be improved!

Edit: I forgot to mention the issue of politics in peer review! If you're in a relatively small field, most of the big researchers all know each other, so peer review isn't truly blinded in practice. Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate (speaking from experience).
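
To make that concrete, here is a rough sketch of the issue-style feedback model I'm imagining, in Python. Everything below is hypothetical: bioRxiv exposes no such API today, and the class and field names are invented purely for illustration.

    from dataclasses import dataclass, field


    @dataclass
    class Comment:
        author: str
        body: str


    @dataclass
    class Issue:
        """A public, GitHub-issue-style thread opened against a preprint."""
        title: str
        opened_by: str
        comments: list[Comment] = field(default_factory=list)
        resolved: bool = False


    @dataclass
    class Version:
        """One revision of the manuscript; the full history stays visible."""
        number: int
        manuscript_uri: str          # e.g. a DOI or file hash
        data_uri: str | None = None  # linked data/code, if the authors share it


    @dataclass
    class Preprint:
        title: str
        authors: list[str]
        versions: list[Version] = field(default_factory=list)
        issues: list[Issue] = field(default_factory=list)

        def open_issue(self, title: str, opened_by: str) -> Issue:
            issue = Issue(title, opened_by)
            self.issues.append(issue)
            return issue

        def post_revision(self, manuscript_uri: str, data_uri: str | None = None) -> Version:
            """Authors respond to issues by replying and posting a new version."""
            version = Version(len(self.versions) + 1, manuscript_uri, data_uri)
            self.versions.append(version)
            return version

The point is the same as a GitHub PR: the issues, the replies, and every revision stay publicly attached to the preprint, so the feedback history becomes part of the record rather than two anonymous reports that disappear into an editor's inbox.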


As it happens, I'm building "Github for Journals".

I pivoted away from attempting a crowd source review approach with a reputation system to trying to support journals in going Diamond Open Access.

But the platform I've built supports co-author collaboration, preprints and preprint review, journal publishing flows, and post publication review - all in a continuous flow that utilizes an interface drawing from Github PRs and Google Docs.

You can submit a paper, collect feedback from co-authors, then submit it as a preprint and collect preprint feedback, then submit to a journal and run the journal review process, then collect feedback on the final published paper. And you can manage multiple versions of the paper, collecting review rounds on each version, through that whole process.
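
Here's a rough sketch of that flow as a simple state machine, in Python. This is purely illustrative (invented stage names, not the platform's actual schema), but it shows the shape: one paper moves through the whole pipeline and collects review rounds on each version at every stage.

    from enum import Enum, auto


    class Stage(Enum):
        DRAFT = auto()           # co-author collaboration
        PREPRINT = auto()        # public preprint review
        JOURNAL_REVIEW = auto()  # editor-managed review rounds
        PUBLISHED = auto()       # post-publication review


    # Papers only move forward through the pipeline.
    TRANSITIONS = {
        Stage.DRAFT: {Stage.PREPRINT, Stage.JOURNAL_REVIEW},
        Stage.PREPRINT: {Stage.JOURNAL_REVIEW},
        Stage.JOURNAL_REVIEW: {Stage.PUBLISHED},
        Stage.PUBLISHED: set(),
    }


    class Paper:
        def __init__(self, title: str):
            self.title = title
            self.stage = Stage.DRAFT
            self.versions: list[dict] = []  # each version carries its own review rounds
            self.new_version()

        def new_version(self) -> None:
            self.versions.append({"number": len(self.versions) + 1, "review_rounds": []})

        def add_review_round(self, reviews: list[str]) -> None:
            self.versions[-1]["review_rounds"].append(reviews)

        def advance(self, to: Stage) -> None:
            if to not in TRANSITIONS[self.stage]:
                raise ValueError(f"can't move from {self.stage.name} to {to.name}")
            self.stage = to

So a paper might go draft, then advance to preprint, collect a round of preprint feedback, post a new version, advance to journal review for more rounds, and finally advance to published and collect post-publication comments, without any of the earlier feedback being thrown away.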

It's in alpha. I'm pushing really hard with a short runway to get the journal flows to a usable beta while trying to raise seed funding... the catch being I feel very strongly that it needs to be non-profit, so seed funding here is grants and donations.

I'm looking for journal editors who want to participate in UX research. I'm also interested in talking to folks who run preprint servers to see if they'd have any interest in using the platform. If you (being any reader) know any, or have leads for funding, reach out: [email protected]


When you say "submit to a journal" does that mean you are not a journal? Why operate as a preprint server, but not offer to publish with peer-review? (Perhaps I'm misinterpreting your comment).


It doesn't sound like that poster operates as a journal, and that makes sense. Academic researchers need to publish papers in long-standing and highly respected journals in order to be promoted and eventually gain tenure. Journals do not add value by simply providing space for researchers to publish their work—they add value by existing as a reputable brand that can endow select researchers with academic and social credit.


As mentioned in my other comment, crappy peer-review is a big problem for most journals, so a solution to that needs to be found.


Yeah, before I pivoted to trying to flip journals, I spent a year exploring crowd sourcing with an eye on improving peer review. After building a beta and collecting a bunch of user feedback, my conclusion is that academics on the whole aren't ready to crowd source. Journal editors are still necessary facilitators and community organizers. So that led to exploring flips.

However, I think there's a lot that software can do to nudge towards better peer review. And once we have journals using a platform we can build lots of experimental features and make them easy to use and adopt to work towards improving it.

I've kept crowd sourcing preprint review in the platform - though I removed the reputation system since UX research suggested it was an active deterrent to people using the platform - to enable continued experimentation with it. And the platform makes it easy for preprint review to flow naturally into journal review and for the two to live comfortably alongside each other. The idea being that this should help enable people to experiment with preprint review without having to take a risk by giving up journal publishing.

And the platform has crowdsourced post-publication review as well.

My thought is that if we can get the journals using the platform, that will get authors and reviewers onto the platform, and since preprint and post-publication review are really easy to do in the platform, that will drastically increase the usage of both forms of review. Then folks can do metascience on all of the above and compare the three forms to see which is most effective. Hopefully that can then spur movement to better review.

I also want to do work to ensure all the artifacts (data, supplementary material, etc.) of the paper live alongside it and are easily accessed during review. And work to better encourage, reward, and recognize replications. I think there's a lot we can explore once we have a large portion of the scholarly community using a single platform.

The trick is getting there.


The platform is intended to host many journals in the same way Github hosts many open source projects. And to facilitate interactions, conversation, and collaboration among authors, editors, and reviewers across them.


I think the key is that peer review is a promise of an n=2 process.

There's no promise that an unreviewed pre-print is going to get two constructive readers. It's also wildly subject to bias - being on a pre-print with a junior, female researcher was eye opening as to the merits of double blind review.


You could blind the pre-print process, too?


I've not seen a major attempt to blind pre-prints, and given you have to remove some identifying information for blinding, I think that would be a tall order.


Why would that be a tall order? Seems fairly simple and straightforward, doesn't it?

You'd set up a server where people have accounts, but publishing pre-prints is anonymous by default, and identities can be revealed later.

In the current peer review system, people already have to produce papers with those identifiers removed. They can do exactly the same in the pre-print world, can't they?
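
As a toy sketch (in Python, with purely invented names; as far as I know no existing preprint server works exactly this way), "anonymous by default, reveal later" could just mean the server keeps the real byline private and publishes an opaque handle until the authors opt in:

    import secrets
    from dataclasses import dataclass, field


    @dataclass
    class BlindPreprint:
        title: str
        # The real byline is stored server-side but never shown publicly by default.
        private_authors: list[str] = field(default_factory=list, repr=False)
        # The public record carries only an opaque handle.
        public_handle: str = field(default_factory=lambda: "anon-" + secrets.token_hex(4))
        revealed: bool = False

        def public_byline(self) -> str:
            return ", ".join(self.private_authors) if self.revealed else self.public_handle

        def reveal(self) -> None:
            """Authors opt in to de-anonymize, e.g. once a journal version exists."""
            self.revealed = True

    # Note: this only blinds the byline, not anything identifying inside the manuscript itself.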


A great many papers in my field contain contextual details about the settings the studies were conducted in that would effectively deblind them.

That sort of betrays the idea of a pre-print, in my opinion, because they should not depend on "Someday we'll come back and fix this".


How does conventional peer review work for those papers?


> Junior researchers are also pressured into acquiescing to the peer reviewers rather than having an actual scientific debate

Yes. When I was teaching at the graduate school level, doctoral students sometimes came to me for advice about how they should respond to peer reviewer comments. Those comments were usually constructive and worthwhile, but sometimes they seemed to indicate either a misunderstanding or an ideological bias on the part of the reviewer. (This was in the social sciences, where ideology comes with the territory.) But even in those latter cases, the junior researchers just wanted to know how they could best placate the reviewer and get their paper published. None had the nerve, time, or desire for an actual scholarly debate.


As both a grad student and a postdoc I wrote appeals to peer-review rejections that succeeded.


Yes, you can certainly do that, but I wonder how long the appeal and approval process took? I'd bet it's measured in months.


It was considerably faster than a wholesale resubmission to a new journal, and landed the paper in a better home than it would otherwise have found.


Exactly. The quality of peer review is generally pretty poor. There are a lot of really terrible studies and reviews from places like the Mayo Clinic being published in high-quality journals, and you have to wonder how they passed peer review.

And then on the other hand, if you ever actually have to submit a paper to peer review, you'll see how clueless a lot of the reviewers actually are. About half do give useful critiques and comments, but the other half seem to have weird beliefs about the subject in question, and they pan your paper due to you not sharing said weird beliefs.


I agree with your suggestion and would 100% welcome that process - though I don't think they're necessarily mutually exclusive. As I see it, the main difference between the status quo and the more open process you suggest is that in theory reviewers that are hand-picked by the editor are more likely to have directly relevant experience, ideally translating to a better, and potentially more efficient review. Of course, that also comes with the drawbacks that you mentioned - that the reviewers are easily de-anonymized, and that they may be biased against your research since they're essentially competitors -- I've had the good fortune of not being negatively affected by this, but I have many colleagues who have not been so lucky.

Edit: Also, to comment more on my own experience, I was lucky to be working in a well-established lab with a PI whose name carried a lot of weight and who had a lot of experience getting papers through the review process. We also had the resources to address requests that might've been too much for a less well-funded lab. I'm aware that this colours my views and didn't mean to suggest that peer review, or the publication process, are perfect. The main reason I wanted to provide my perspective is that I feel that on HN there's often an undercurrent of criticism that is levied against the state of scientific research that isn't entirely fair in ways that may not be obvious to readers that haven't experienced it first-hand.


It still is a quite useful filter, as without it most fields would be even more overwhelmed. As a reviewer, have you seen what garbage gets submitted sometimes? There are incentives to attempt to get garbage published, so throwing out a significant part of submissions does add quite a lot of value to readers, so that they get at a somewhat curated list of papers from that journal or conference.

And while all you say is true, it's probably the most we can get for free in a reasonable amount of time. Requiring an independent lab to repeat an experiment would generally mean far more delay and cost than we'd accept; other researchers generally want to see the outcome as soon as the first experiment is documented, and there are people doing great research who won't bother to submit if they'd have to pay for replication - it's generally the bad research that has the motivation to spend more money for a publication. The general public might want to wait for extra confirmation, but they're not the target audience of research papers, which are intended as communication by researchers for researchers. And quite a few media outlets would disagree anyway and probably prefer grabbing up hot rumors even earlier, even if they turn out to be false afterwards.


All of what you wrote is true too, but it's also the hollowed-out support beam at the bottom of "evidence-based everything" culture, which has taken over almost everything.

The truth is that good science is slow and that most “evidence-based” practices are referring to a huge, nebulous cloud of bad results and weak suggestions rather than the evidence that supposedly gives them authority over traditional or intuitive practices.

Scientists participate in "Little Science" and the responsible ones often maintain the perspective that you're describing here.

But modern society has built itself around the institution of “Big Science” which is structurally forced to assert truths before they can responsibly be defended.

It’s way bigger than the general public being curious or the media wanting to get eyeballs — it’s everything going on in government, economics, medicine, psychology, agriculture, etc etc etc

It’s a house of cards and you’ve just summarized what the core problem is.


> Peer review is a joke. Peer reviewers don’t look at your data and the programs you used to analyze it. They don’t look at your experimental apparatus, they don’t repeat your experiment, they don’t talk to the subjects you interviewed, at best they can spot obvious “red flags”.

If those were the worst problems with peer review, we'd be in a much better place. Your peer reviewers are frequently higher-status scientists working (competing) in the same research area you are trying to publish in. Generally, they do not want their own work outshined or overthrown.


Reminds me of code reviews, where sometimes a reviewer will go on a deep dive but usually they just scan it for obvious issues and typos. The thing is, even if my code is only getting a cursory review, I still prefer to have multiple people review it to increase the chances that obvious issues are caught. At least if it's important code.


I partially agree, and I can enumerate other issues with peer review that you have not listed, but it is worthwhile to point out some of the positive features of the peer review concept:

- Peer review in reputable non-profit journals actually provides constructive suggestions that make papers and research itself better. APS's PRX and PRL, as well as Quantum, are journals where I have seen these idealistic positive effects;

- Filtering out the obvious red flags is pretty valuable even if boring;

- Thanks to people who care about the "ideal" of peer review we now have the infrastructure necessary to make reproducibility much easier: mandatory data and code sharing on archival services, open (even crowdsourced) peer review, immediate feedback, etc.


I wouldn’t say it’s a joke, rather it’s not perfect.

When papers are reviewed, there are going to be a finite number of spots in the journal or conference to be assigned competitively. In good places, the reviewers catch issues in the papers and it won’t be easy to pass them.

Without peer review, a PhD student requesting graduation or a candidate applying for a faculty position would claim they have done major work, and there is no way to filter out the noise.


Fraud is considered rare, and trust is fundamental. In which case, you choose to believe what they said they did and interrogate whether what they said they did is reasonable. Nobody has the budget, time, and sometimes magical fingers required to reproduce every submission.

You can disagree with this approach, but then there needs to be huge budgets set aside for reproduction.


> Fraud is considered rare, and trust is fundamental.

This is a nice sentiment but demonstrably false.

Fraud is common in academia and everyone knows it. A large part of academic science is a grift for funding. It's not trust that is fundamental, it's tit-for-tat.


Fraud is considered rare, but maybe not actually that rare; hence the replication crisis.


A lot of the time it is not deliberate fraud, just incompetence. There is the strange fact that the answer to precision QED calculations always seemed to change when experimental results changed. One enduring lesson from a physics PhD is that a 50 page long calculation without unit tests is… wrong.
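
By "unit tests" I mean checks against cases with a known answer (limiting cases, closed forms, conserved quantities) before trusting the full 50-page machinery. A trivial illustration in Python, nothing to do with the actual QED calculations:

    import math


    def zeta2_partial(n_terms: int) -> float:
        """Partial sum of 1/1^2 + 1/2^2 + ..., which converges to pi^2 / 6."""
        return sum(1.0 / k**2 for k in range(1, n_terms + 1))


    def test_partial_sum_approaches_known_limit() -> None:
        exact = math.pi**2 / 6
        approx = zeta2_partial(100_000)
        # The tail of the series is about 1/n, so this tolerance is safe.
        assert abs(exact - approx) < 1e-4


    if __name__ == "__main__":
        test_partial_sum_approaches_known_limit()
        print("sanity check passed")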


Misrepresentation of data and selective reporting to fit particular agendas of the last author are quite common. I have been involved in a couple of projects where I was asked to misrepresent or misreport findings.

Sadly, integrity offices will rarely conduct serious investigations, and won't conclude misconduct happened unless what was done was incredibly harmful. Professors are often too big to fail, they attract tons of grants and are politically entrenched.


Can journals adopt a pull-request-style review process on some central server? I am imagining Github PR review capability on arxiv, where anyone (or authorized people) can review the submission and submitters can respond to comments, all publicly.

I don't know if this is how it's done already. I have seen people complaining about peer review here and was wondering why there isn't a solution to that while software already enjoys a well-established peer review system.


Idk about you guys but the only reason I do peer review is to reject competitors and enemies.

If I really hate them, I give them a "Major Revision" aka a laundry list of expensive follow-up experiments and then reject them on the 2nd or 3rd round after a few months.

There's actually zero benefit to earnestly engaging in peer review.


It sounds like you would do well in many other businesses. Don't let academia hinder your potential. Have you considered selling timeshares to elderly people?


You are an exemplar of all that is wrong in academia, but I upvoted you because there are so many like you.

(I know it from personal experience).

Personally, I decided to leave and make a more honest living. It seems you chose not to.


If you make your code available, I'm going to make sure it runs and does what you say.



