Over my years in academia, I noticed that the linguistics departments were always the most fiercely ideological. Almost every point made in a talk would get contested by somebody in the audience.
It was annoying, but as a psych guy I was also jealous of them for having such clearly articulated theoretical frameworks. It really helped them develop cohesive lines of research that delineated the workings of each theory.
> Don't try and say anything pro-linguistics here, (...)
Shit-talking LLMs without providing any basis or substance is not what I would call "pro-linguistics". It just sounds like petty, spiteful behavior, lashing out in frustration because LLMs have rendered the old models obsolete.
From the perspective of scientific explanation, the old models are not obsolete: they are explanatory, whereas LLMs do not explain anything about human linguistic behaviour.
What do you mean, "work"? Their goal is not to be some general-purpose AI. (To be clear, I'm talking narrowly about computational linguistics, not old-fashioned NLP more broadly.)
The interesting question is whether just a gigantic set of probabilities somehow captures things about language and cognition that we would not expect ...