Many people who claim that others don't understand how AI works have a very simplified view of the shortcomings of LLMs themselves, e.g. "it's just predicting the next token", "it's just statistics", "stochastic parrot", a view that seems grounded in what AI was 2-3 years ago. Rarely have they actually read the recent interpretability research. It's clear LLMs are doing more than simple pattern matching. They may not think like humans, or as well, but it's not k-NN with interpolation.
I guess I prefer to look at empirical evidence over feelings and arbitrary statements. AI CEOs are notoriously full of crap and make statements driven by perverse financial incentives.