I don't dismiss or advocate for AI/LLMs; I just go by what I actually see happening, which doesn't look revolutionary to me. I've spent some time trying to integrate them into my workflow, and I see some use cases here and there, but overall they haven't made a huge impact for me personally. Maybe it's a skill issue, but I've always been a pretty effective dev, and what LLMs solve has never been the difficult or time-consuming part of creating software. Of course I could be wrong and they will change everything, but I want to see some evidence of that before declaring this the most impactful technology of the last 100 years. My feeling is that LLMs make the easy stuff easier, the medium stuff slightly harder, and the hard stuff impossible. Then again, I feel that way about a lot of the technology that comes along, so I could just be missing the mark.
I think this is the important observation right now. If you're an expert who isn't doing a lot of boilerplate, LLMs don't offer you much value yet. But they can acceptably automate a sizeable number of entry-level jobs, and if those get wiped out, that's a problem, because not everyone is going to be a high-level expert.
Long-term, the issue is that we don't know where the ceiling is. Just because OpenAI is faltering doesn't mean we've hit it. People talk about the scaling laws as if they were a theoretical boundary, but they're the opposite: they show that the performance curve can keep going up through brute force alone, which is nearly unprecedented in statistical modeling. We're in uncharted territory, so there's good reason to keep an eye on it.
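For context on what those scaling laws actually say: they're empirical power-law fits, not a hard limit. Here's a minimal sketch, assuming the parameter-count form and the rough fitted constants reported in Kaplan et al. (2020); the numbers are illustrative, not authoritative:

```python
# A minimal sketch of the parameter-count scaling law, assuming the
# power-law form and approximate fitted constants from Kaplan et al. (2020):
#     L(N) = (N_c / N) ** alpha_N
# The formula has no floor above zero: predicted loss keeps falling (with
# diminishing returns) as N grows, which is the "no visible ceiling" point.

ALPHA_N = 0.076   # fitted exponent for model size (approximate)
N_C = 8.8e13      # fitted constant, in parameters (approximate)

def predicted_loss(n_params: float) -> float:
    """Cross-entropy loss (nats/token) the power law predicts for n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x in parameters buys a smaller, but still nonzero, improvement.
for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:9.0e} params -> predicted loss {predicted_loss(n):.3f}")
```

The curve flattens but never plateaus in the fit itself; the open questions are whether the fit keeps holding at larger scales, and whether each marginal gain is worth its growing compute bill.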