> actual groups of humans, people who know and respect each other, coming together and discussing the proposals.
Yes, and in this case, they've complained that they have more papers than they can possibly pay attention to, and thus are wholesale filtering out large swaths of papers using a computer program that cannot be debugged or tested in any reliable fashion.
> They will therefore also critically look at any AI reviews, and eliminate them as soon as they are not convincing.
The article implies the opposite is happening. To review the AI's filtering decisions, they would have to go through the removed proposals, not the remaining ones. That puts them back at square one as far as the (work : human) ratio is concerned.
Exactly. I didn't say that the humans would disappear, just that AI might get much more involved in the writing and review process.
And no one said that the "actual groups ... who know and respect each other..." are going to ask for this. They aren't going anywhere (I hope), but to many outsiders they smell like another elitist clique. For better or worse, playing the outsider to the scientific establishment has some political appeal, so I can see how an additional "LLM review" might gain some traction.