I really think the conference model is super detrimental to science. It's not like journals are perfect either, but revise-and-resubmit and desk rejects are a much better filter than continually resubmitting to the same few conferences over and over again. Not to mention that peer review at conferences is probably much lower quality than what you get at most journals (this is my impression anyhow; I don't know how one could quantify such a thing).
I'm in CS and I submit to both conferences and journals (the former because it's what people actually read, the latter because of evaluation requirements in my country). And I can tell you that (IMO, of course) the conference model is immensely better, and the idealization of journals in the CS community is a clear case of "the grass is always greener".
Revise and resubmit is evil. It gives reviewers a lot of power over papers, and that power ends up being used for coercion, sometimes subtle, sometimes quite overt. With most papers I have submitted to journals (and I'm talking prestigious journals, not MDPI or the like), I have been pressured to cite specific papers that didn't make sense to cite, very likely authored by the reviewers themselves. And one ends up doing it, because not doing it can result in rejection and losing many months (the journal process is also slower), with the paper maybe even becoming obsolete along the way. Of course, the revise-and-resubmit process can also be used to pressure authors into changing papers in subtler ways (to not question a given theory, etc.)
The slowness of the process also means that if you're unlucky with the reviewers, you lose much more time. There is a fact we should all accept: the reviewing process always carries a huge random factor due to subjectivity. And being able to "reroll" reviewers is actually a good thing: it means that a paper a good proportion of the community values highly will eventually get in, as opposed to being doomed because the initial, very small sample (n=3) happened to come from a rejecting minority.
Finally, in my experience review quality is the other way around... there is a small minority of journals with good review quality, but at the majority (including prestigious ones) it's a crapshoot, not to mention when the editor desk-rejects for highly subjective reasons. In the conferences I typically submit to (*ACL), review quality is more consistent than in journals, and the process is more serious, with rejections always accompanied by stated reasons.
I agree there are tons of problems with journals as well; I think an entirely different system could probably be better. Even preprints with some sort of public-facing moderated comments could be more effective.
However, I think this notion of a paper becoming "obsolete" if it isn't published fast enough speaks to the deeper problems in ML publishing; it's fundamentally about publicizing and explaining a cool technique rather than necessarily reaching some kind of scientific understanding.
>In the conferences I typically submit to (*ACL) the review quality is more consistent than in journals
I've got to say, my experience is very different. I come from linguistics and submit both to *ACL and to linguistics/cognition journals, and I think journals are generally better. One of my reviews for ACL was essentially "Looks great, learnt a lot!" (I'm paraphrasing, but it was about three sentences long; I'm happy to get a positive review, but it was hardly high quality).
Even within *ACL, I find TACL better than what I've gotten from the ACL conferences. I just find that with a slower review process, a reviewer can actually evaluate claims more closely rather than reviewing in a pretty impressionistic way.
That being said, there are plenty of journals with awful reviewing and editorial boards (cough, cough Nature).
I also used to submit to *ACL conferences exclusively, and even served as a SAC (senior area chair), but I got out of the academic game altogether. I'm still conducting research, but as an independent researcher for my own startup. The system seems to get more and more bananas over time, and no one is willing to just force a change. It's really a tragedy of the commons.
That said, why don't conferences work like journals: if you're rejected, you cannot resubmit to the same venue. Find a new conference. That gets rid of the queuing problem. Yes, some amazing papers will not be accepted by a top conference. So what; it happens to everyone. Plenty of influential papers in ML/AI/NLP were not published in a top conference.
The post suggests why review quality suffers. Because of the system, there is too much reviewing going on. People get tired and produce worse reviews. Those receiving these low-quality reviews become less motivated, and in turn put less effort into reviewing as well. Bad reviewing makes the system less predictable, so you have to spray and pray with as many papers as possible if you want to keep up with publication expectations. This adds even more papers into the system, making it worse.
There are just too many negative eventualities reinforcing each other in different ways.
Not sure how many conferences you have been to, but (1) abstracts are filtered, since conferences probably get hundreds of abstracts for every slot they have available, some of which end up being converted into workshops or breakout panel discussions to accommodate interesting topics, and (2) I've seen presentations where "lively" discussions broke out between the presenter(s) and the audience.