
In order of [already happening] -> [inevitable] -> [might happen]:

- Many PIs are writing grant proposals with the help of AI

- Most grants are written by AI

- Grants are reviewed by AI

- Adversarial attacks on grant review AI

- Arms race between writing and reviewing AI

- Realization that none of this is science

Where it goes from there is anyone's guess: it could be the collapse of publicly funded science, an evolution toward increasingly elitist requirements (which could lead to the former), or maybe some creative streamlining of the whole grant process. But without intervention it seems like we're liable to end up in a worse situation than the one we started in.



The current (I mean, pre-AI) grant writing process is already not science, and it's mostly a huge waste of time. I find it difficult to imagine a scenario where it's replaced with something worse. In fact, just giving everyone base funding and then allocating more by CV, without evaluating any project at all, would be immensely better. And I say this as a scientist who has been quite successful with grant requests, and who also evaluates plenty of them, so it's not at all the case that I have been disadvantaged by the current system.


This. Instead of using our expertise doing science, we are spending huge amounts of time begging for money and writing grants that try to hide the real complexities from reviewers who are mostly not experts in the precise area and are not equipped to understand a plainly truthful presentation... and so we write grants that don't exactly lie, but surely do omit complexities that might lead non-expert reviewers down a false path, and trust that the one or two people on the review panel who know enough to recognize the omission will also understand the reason for it (not a true weakness scientifically, just in terms of grantsmanship).

See how much of a waste of time it is?


Exactly. That, plus we have to pretend that we have a clear plan for several years and make a Gantt chart of what we will be researching each year. Which I guess in some areas with very structured processes or long studies (medicine with its clinical studies where you follow patients for years, I guess) may make sense, but in mine (CS) is impossible because each research step is rather short and subsequent steps depend on the results of previous steps. So we write pipe dreams about discovering an algorithm using technique X to solve problem A in year 1, then finding a faster version in year 2, extending it to a larger coverage in year 3, etc.; when in reality technique X fails (which is legitimate; if it were obvious that it would succeed it wouldn't be cutting-edge research) and you end up using technique Y, and maybe solving problem A' instead. Which of course also has to be justified in reports about how everything went differently from planned but the results are still awesome, again taking a huge amount of time.

LLMs are a godsend for writing these formulaic things and since the starting point is a situation with useless processes where everyone wastes inordinate amounts of time, I can't imagine them being harmful overall for the grant process. The bar is just very low.


Requiring a formal research plan can still serve as a filter for researchers who are not serious or disciplined, even if they don't stick to it exactly.


I personally don't agree. Anyone can make a Gantt chart; it's basically busywork. If it's required, every applicant will include it, serious or not. In fact it's the kind of thing you could even outsource to a consultant (if you had access to one) and now, of course, to an LLM. Where seriousness matters is in actual execution. Which takes me back to what I previously said: we would probably be better off evaluating just the CV (which has signal about past execution) than evaluating project proposals.


This is perhaps a good argument for AI-enhanced reviews.

In my experience since 1982, reviewers will pick you to death. Peers know the real complexity. Sure, you will occasionally get reviewers who completely miss the point, but it is usually more common that your negative reviewers know way more than you do or have a different ax to grind than the one you want to grind.


I'm a fan of thinking about using some kind of block grant approach.


Seriously, in the EU only something like 10% of the money (citation needed) actually makes it to the researchers. A lottery or even a giant pinata would be more efficient. And that's not even accounting for the wasted researcher hours.


That's incredibly low - surely anyone involved at the higher end of that pipeline would come down hard on such trash statistics.

Is this actually citable?


Why do you say that writing grant applications is a waste of time? For me it is a time to dive deeply into the state of the art and figure out better ways to address important questions. I do not see writing grants as a trivial chore, but as a chance to reformulate my science for the better.


For me it's 20% the deep dive you mention (which I would do anyway if I didn't have to write grants) and 80% figuring out how to pitch it, optimizing the text, cutting/expanding sections, and writing useless formulaic sections (Gantt chart, data management plan, gender dimension section, and so on). So mostly a waste of time, but I guess YMMV.

Fortunately the interesting part is where LLMs don't help (at least for now) and the pointless parts are where they help immensely.


Looking at one of the big players, the NIH: They already placed a new limit of six grant proposals per PI per year, but that's pretty high. Certainly high enough for reviewers to be totally swamped if even 5% of labs who would have otherwise submitted a single proposal use AI to play the numbers game and submit the max of six.

If the NIH responds by globally lowering the limit to two or three proposals per year, they hurt 1%er mega labs that expect to have several active grants and now need to bat well above .500 to stay afloat. So I think it's likely that we see elitist criteria as you said, maybe a sliding scale for the proposal limit where labs that currently draw large amounts of funding are allowed to submit more proposals than smaller labs.

One place this may end up is with grant proposals requiring a live presentation component. You can use AI to crank out six proposals in a day, but rehearsing and practicing six presentations will still take quite some time and effort.


If it makes you feel better, I've noticed more skepticism about AI from scientists, when compared to engineers or business people.

Also, for a very high stakes proposal, I doubt people are just going to ask ChatGPT to do it, which would basically guarantee that their proposal is indistinguishable from some equally lazy competitors in their field.


I think there is plenty of AI skepticism among real engineers

What AI is revealing imo is that "Software Engineers" really, really do not deserve that title


This seems to be a general problem of all open submissions in the age of AI.

Job applications, story pitches, now grant applications, everyone is overwhelmed.


Thinking about Hollywood here since they faced the massive imbalance between relatively few people making movies and a near infinite number of people submitting pitches and screenplays early on in their history: The solution is gatekeepers, personal networks and flat out rejecting anything submitted by an outsider.


An interesting angle in the report above is that the organization pro-actively approached researchers identified by their AI.

This is quite different from screening the numerous candidates who present themselves. Perhaps more similar to "talent scouts"?


A modern take on it for sure. Also a bit like a popular author being approached to adapt their works as well.


Right, friction is required even if it’s artificial. Which was not the future we were promised but it’s the only way that seems viable.

The Hollywood system has serious flaws but at least it’s manageable.

Bringing back in-person pitches, applications and presentations would go a long way though.


> the collapse of publicly funded science

This is preposterous. You are completely forgetting that there are actual groups of humans, people who know and respect each other, coming together and discussing the proposals.

Being involved in ranking grant applications is already a thankless job, and many scientists still do it, for the greater good if you want. They will therefore also critically look at any AI reviews, and eliminate them as soon as they are not convincing.

(I am, of course, not saying that the grant review process is without flaws - that is a different discussion.)


> actual groups of humans, people who know and respect each other, coming together and discussing the proposals.

Yes, and in this case, they've complained that they have more papers than they can possibly pay attention to, and thus are wholesale filtering out large swaths of papers using a computer program that cannot be debugged or tested in any reliable fashion.

> They will therefore also critically look at any AI reviews, and eliminate them as soon as they are not convincing.

The article implies the opposite is happening. Which means that to review the AI's filtering decisions they have to go through the removed proposals and not the remaining ones. Which puts them back at square one as far as the (work : human) ratio is concerned.


Exactly. I didn't say that the humans would disappear, just that AI might get much more involved in the writing and review process.

And no one said that the "actual groups ... who know and respect each other..." are going to ask for this. They aren't going anywhere (I hope), but to many outsiders they smell like another elitist clique. For better or worse playing the outsider in the scientific establishment has some political appeal, so I can see how an additional "LLM review" might gain some traction.


Given that humans are subjective and also make mistakes (see https://en.wikipedia.org/wiki/Grievance_studies_affair for one example), what makes the status quo "science" any more than this? It feels like we are criticizing the flaws of the AI-based systems but not recognizing the flaws of the older system.


Humans do not make binary decisions. They're capable of realizing that their total confidence in a decision is low and thus use alternate strategies in that case.

"AI" on the other hand makes binary decisions and completely hides it's internal confidence rating.


Not my experience with AI at all. In what way do you mean AI makes binary decisions?

I often ask AI systems for pros and cons and they do as good a job as I would in many situations in which I am knowledgeable.


If you have a list of 1 million grant applications then simply asking the AI for a list of "pros and cons" on all of them is just going to drastically multiply the amount of work you're going to have to do.

So I think it's clear from that and the context of the article that is absolutely _not_ how it's being used here.


You are right given the context of the Science target article, which provides an example of proactive uses of AI to suggest innovative work.


AIs don't actually use any of the generated "reasoning" they can spit out.

If you find the word soup has spelled out something helpful, fine, but it has nothing to do with anything.


I can't imagine that a grant application written only by AI would pass even the first glance of a reviewer.

Even where AI is widely lauded (such as in programming), it needs a lot of "hand holding".

The biggest risk is that an even greater amount of time would be wasted by those who would have to screen grant applications.


- Adversarial attacks on grant review AI

- Arms race between writing and reviewing AI

As if there weren't numerous grant requests for dead end research before LLMs. Not saying this to discredit past research but when AI is used on both sides this changes none of the fundamental issues or incentives.


Lowering costs without lowering payments changes incentives.

Using AI on both sides likely results in lower-risk lower-reward science which provides society fewer benefits per dollar spent.


It is my view that "realization that none of this is science" is very unlikely to happen. Corrupted systems tend to continue far beyond the point of absurdity. Academia is too big to fail so the dysfunction will continue ad infinitum.


I think the question is probably: would something closer to chaos be more effective than the current system? If so, then this is probably promising.


The GP progression doesn't exactly lead to chaos, nor to randomness.

It can even more easily lead to a situation where only bad actors get ahead, without any chance or uncertainty.


This is the same stupid argument people used to justify voting for Trump when there was demonstrably no substantive reason to support doing so. Is it recursive? Are we supposed to cheer for chaos all the way downstream with his appointees and then the problems they cause and so on?


You're drawing an inflammatory connection that I'm going to try my best to dodge. It's true in general that systems can become ossified in such a way that they aren't working but can't be changed without breaking things and causing chaos. It's also true that sometimes the system is perfectly fine and doesn't need to be broken - but I don't think many researchers had that opinion of the grant process before ChatGPT.


Too pessimistic to the point of being apocalyptic.

AI can be used as an editorial assistant, research assistant, and copy editor. This is a huge benefit already.

AI systems are also almost brilliant in helping to tighten text. Try compressing a 1500 word discussion down to 750 words for an article in Nature.

I agree that using AI to write text from scratch is lazy ghost writing—-but not the end of the world. It may be a huge problem in undergraduate teaching, but there are also some hidden up-sides.

I definitely think AI should be used to pre-review NIH grant applications to help the applicant, the NIH, and reviewers alike. My institution's "pre-award" team checks the budget, checks compliance with admin policies, and a few other details. They have no capacity to help with the science.

AI systems can be highly effective as surrogate pre-review reviewers. They perform at least as well as the hard-pressed unpaid human reviewers who spend 6 hours with your application if you are very lucky, and who will definitely not be perusing your latest 10 papers. AI can do both in minutes.

I give my applications to Claude with a prompt of this general format:

"You are a grumpy and highly stressed scientist who should be working on your own research but you have nobly decided to support NIH by reviewing 6 applications, each of which will take you at least 4 hours to review and another 2 hours to write up half-fairly. Your job as reviewer today is to enumerate what you see as theoretical and experimental deficiencies of each of the three aims and sub-aims. List at least three problems per aim. Having generated this list of deficiencies, then go through the application carefully and see if the applicants actually already addressed your concerns. Did you miss their pitfalls section or 'alternative strategies'?"
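For what it's worth, here is a minimal Python sketch of how that kind of pre-review could be scripted with the Anthropic SDK. The model ID, the draft file path, and the condensed version of the prompt are my own placeholders, not details from the comment above:

    # Minimal sketch: feed a draft application to Claude with a reviewer persona.
    # Assumes the `anthropic` package is installed and ANTHROPIC_API_KEY is set.
    import anthropic

    REVIEWER_PROMPT = (
        "You are a grumpy and highly stressed scientist reviewing NIH applications. "
        "Enumerate the theoretical and experimental deficiencies of each aim and "
        "sub-aim, listing at least three problems per aim. Then re-read the "
        "application and check whether the applicants already addressed each "
        "concern in their pitfalls or alternative-strategies sections."
    )

    def pre_review(application_text: str) -> str:
        client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
        message = client.messages.create(
            model="claude-sonnet-4-20250514",  # placeholder; use whatever Sonnet model you have access to
            max_tokens=4000,
            system=REVIEWER_PROMPT,
            messages=[{"role": "user", "content": application_text}],
        )
        return message.content[0].text

    if __name__ == "__main__":
        with open("draft_application.txt") as f:  # hypothetical path to your draft
            print(pre_review(f.read()))

In practice you would probably chunk a long application or attach the PDF directly, but the reviewer persona and the "now check whether they already addressed your concerns" step are the parts doing the work.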

Writing a grant application is just another form of REAL science. I learn a ton every time I write (or help write) a grant application. It is more thoughtful in some important ways than DOING science and even than WRITING UP science.


>They perform at least as well as the hard-pressed unpaid human reviewers who spend 6 hours with your application if you are very lucky, and who will definitely not be perusing your latest 10 papers.

Isn't this the fundamental problem with the grant review process? That is, that the reviews are often rather cursory, despite the lauded credentials of the reviewers and panels? How could current LLMs conceivably improve this?


All I can say is that Claude Sonnet has been an amazing help for me in reviewing published papers and in “pre-reviewing” my own grant applications. And I trust it to be much more impartial than my polarized peers.



