I don't think argument by assertion is appropriate when there are a lot of people who "clearly" believe that it's a good approximation of human intelligence. Given that we don't understand how human intelligence works, asserting that one plausible model (a continuous trajectory through an embedding space encoded in neurons) that works in machines isn't how humans do it seems too strong.
It is demonstrably true that artificial neurons have nothing to do with cortical neurons in mammals[1], so even if this model of human intelligence is useful, transformers etc. aren't anywhere close to implementing it faithfully. Perhaps, by Turing completeness, o3 or whatever has stumbled into a good implementation of the model anyway, but that is implausible: o3 still confabulates more wildly than any dementia patient, still lacks the robust folk physics we see in infants, and so on. (This is even more apparent in video generators; Veo2 is SOTA and it still doesn't understand object permanence or gravity.) It does not make sense to call something a model of human intelligence when it can do PhD-level written tasks but is outsmarted by literal babies (also chimps, dogs, pigeons...).
AI people toss around the term "neuron" way too freely.
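For anyone who hasn't looked under the hood, here is roughly everything an "artificial neuron" does; a minimal sketch (Python, with a ReLU nonlinearity; the function name and example values are mine, purely illustrative):

    # An "artificial neuron" in its entirety: a weighted sum of inputs
    # plus a bias, passed through a fixed nonlinearity. No spikes, no
    # dendritic computation, no neuromodulators, no temporal dynamics.
    def artificial_neuron(inputs, weights, bias):
        pre_activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return max(0.0, pre_activation)  # ReLU

    print(artificial_neuron([0.5, -1.2, 3.0], [0.1, 0.4, -0.2], 0.05))

Compare that with what a single cortical neuron actually does and the terminology looks even looser.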