Go for the jugular. Impact the career of people putting out substandard papers.
Come up with a score for "citation strength" or something.
Any given bad actor with too many substandard papers to their credit begins to negatively impact the "citation strength" of any paper on which they are a co-author. Maybe it even drags down the "citation strength" of papers that cite work authored or co-authored by the bad actor in question?
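Very roughly, something like the sketch below (all names, weights, and thresholds are made up for illustration, not an existing metric): each author carries a reliability factor based on their share of substandard papers, and a paper's score is discounted by its weakest co-author and, partly, by the strength of what it cites.

    def author_reliability(substandard_papers, total_papers):
        # 1.0 for a clean record, approaching 0.0 as the share of
        # substandard papers grows
        if total_papers == 0:
            return 1.0
        return max(0.0, 1.0 - substandard_papers / total_papers)

    def citation_strength(raw_citations, coauthor_reliabilities,
                          cited_paper_strengths, cite_weight=0.3):
        # Discount raw citations by the weakest co-author, and partly by
        # the average strength of the papers this one cites.
        author_factor = min(coauthor_reliabilities) if coauthor_reliabilities else 1.0
        if cited_paper_strengths:
            cite_factor = sum(cited_paper_strengths) / len(cited_paper_strengths)
        else:
            cite_factor = 1.0
        return raw_citations * author_factor * ((1 - cite_weight) + cite_weight * cite_factor)

    # 100 raw citations, one co-author with 3 substandard papers out of 10:
    print(citation_strength(100, [1.0, author_reliability(3, 10)], [0.9, 0.5]))  # ~63.7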
If, say, the major journals had a system like this in place, you'd see everyone perk up and get a whole lot less careless.
It doesn't address the core issue, which is credential inflation.
Not sure that enough people understand that the vast, vast majority of research papers are written to fulfil the criteria for graduating with a PhD. It's PhD students getting through their programs. That's the bulk of the literature.
There was a time when nobody went to school. Then everyone did 4 years of elementary school to learn reading, writing, and basic arithmetic. Then everyone did 8 years, which included more general knowledge. Then it became the default to do 12 years and get a high school diploma. Then it became the default to do a bachelor's to get even simple office jobs. Then it was a master's. Now, to stand out the way a BSc or MSc once made you stand out, you need a PhD. PhD programmes are ballooning, just as the undergrad model had to change quite a bit when CS went from 30 highly motivated nerds starting per year to 1,000.

These are massive systems: the tens or hundreds of thousands of PhD students must somehow be pushed through like clockwork. For a single conference you get tens of thousands of authors submitting a similar number of papers, and tens of thousands of reviewers.
You can't simply halt such a huge machine with a few little clever ideas.
The number of new PhDs in the US ballooned in the 1960s. It went from 4.9 new PhDs / 100k people in 1960 to 14.5 in 1970 and 17.1 in 2024.
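Spelled out, those figures imply a roughly threefold per-capita jump during the 1960s but much slower growth in the half-century since:

    rate_1960, rate_1970, rate_2024 = 4.9, 14.5, 17.1   # new PhDs per 100k people

    print(rate_1970 / rate_1960)  # ~2.96x during the 1960s
    print(rate_2024 / rate_1970)  # ~1.18x from 1970 to 2024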
While the growth in the number of new PhDs has been modest since then, the number of published papers has grown much faster. I would attribute that to changes in administrative culture. Both the government and the universities have become driven by metrics, which means everyone must produce something the administrators can measure.
I would imagine there has been more recent ballooning in AI topics, which were the focus of the article. Based on the growth of the flagship conferences, it's quite evident that it's not just exponentially more paper submissions but also exponentially more PhD students submitting (I mean the ML/AI/NLP/vision conferences over the last ~15 years).
Actually, the problem is pricing. If we could identify and correctly value new concepts, then we could dispense with citations and just use the correct sum of concept valuations. Perhaps a correctly designed futures market would not only match the right PhD students to the right jobs, but also bring a lot of speculative capital into fundamental research?
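In the crudest possible form, the "sum of concept valuations" would look something like this (the concept names and prices below are purely hypothetical, and the market that would set them is stubbed out):

    # made-up prices from a hypothetical concept market
    concept_prices = {
        "sparse-attention-variant": 1200.0,
        "new-convergence-bound": 800.0,
        "benchmark-cleanup-method": 150.0,
    }

    def paper_value(new_concepts):
        # a paper is worth the sum of the current prices of its novel concepts;
        # concepts the market hasn't priced yet count as zero
        return sum(concept_prices.get(c, 0.0) for c in new_concepts)

    print(paper_value(["sparse-attention-variant", "benchmark-cleanup-method"]))  # 1350.0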
That's a very economics-minded approach. Also, I'm not quite sure what the futures would be about. That a paper will... get N citations? Get the first author a job? Achieve N stars on GitHub? Get N likes on social media? Be patented and put in a product? Turn X USD in profit? Bet on retraction? Bet on acceptance? On awards? Or replicability?
The first question is what scientific research is actually for. Is it merely for profitable technological applications? The Greek, humanistic, or enlightenment ideal wasn't just that. Fundamental research can be its own endeavor, simply to understand something more clearly. We don't do astronomy, for example, only to build some better contraption, and understanding evolution wasn't only about producing better medicine. But it's much harder to quantify the elegance or aesthetics of an idea and its impact.
And if you say that this should only be a small segment, and most of it should be tech optimization, I can accept that; but currently science also runs on this kind of aesthetic, idealist prestige. In the overall epistemic economy of society, science fills a certain role. It's distinct from "mere" engineering. The Ph in PhD stands for philosophy.
The question of what science is actually for can be sidestepped. Everyone can agree that it has value, even though we disagree on the actual value... this is why you need a market. As for how things get priced in such a market, that's a subject for further research... To start, it just needs to tie to something measurable. Heck, we've created memecoins with far less backing. Also, we've already carved up the conceptual space at a very coarse-grained level with patents; we just need a more immediate and granular system for doing so...
It would be a disproportionate blow to researchers in countries with fewer resources and/or more bureaucratized systems (which, e.g., demand to see a "result" if you have paid a fee); they just wouldn't submit.
Or if the problem is bad papers, a fee that is returned unless it’s a universal strong reject.
Or if you don’t want to miss the best papers, a fee only for resubmitted papers?
Or a fee that is returned if your paper is strong accept?
Or a fee that is returned if your paper is accepted.
There has to be some model that is fair (not a financial burden on those writing good papers) and will limit the rate of submissions.
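To make the alternatives above concrete, here's a toy comparison of the refund rules (the outcomes and the fee amount are invented purely for illustration):

    FEE = 100  # hypothetical submission fee

    def refund(policy, outcome, is_resubmission):
        # how much of the fee comes back under each proposed rule
        if policy == "refund_unless_universal_strong_reject":
            return 0 if outcome == "universal_strong_reject" else FEE
        if policy == "fee_only_for_resubmissions":
            return 0 if is_resubmission else FEE  # first submission effectively free
        if policy == "refund_if_strong_accept":
            return FEE if outcome == "strong_accept" else 0
        if policy == "refund_if_accepted":
            return FEE if outcome in ("accept", "strong_accept") else 0
        raise ValueError("unknown policy: " + policy)

    # a borderline paper (plain accept, first submission) under each rule:
    for p in ["refund_unless_universal_strong_reject", "fee_only_for_resubmissions",
              "refund_if_strong_accept", "refund_if_accepted"]:
        print(p, refund(p, "accept", is_resubmission=False))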
Thoughts?