It's just another instance of the same broken thinking one sees in other ML fields. For whatever reason, people 1) hold ML systems to a standard of success far in excess of that demonstrated by humans, and 2) endlessly quibble about whether the ML system internally has "true understanding", despite that having no bearing on the system's ability to affect the external world.
Thermodynamically, general intelligence runs on the order of 10 watts, as evidenced by the human brain. This leads me to believe that we likely already have the computational capability for AGI, and simply have not figured out the correct architecture and weights. As we've seen with the flurry of increasingly SOTA image generation models this year, innovations in the ML space tend to arrive with little warning, and have rapid and real effects on the world. In the context of AGI, this pattern causes me a lot of existential dread.
The transformer is not a model of how the human brain works. Until the mechanics converge, AGI built this way will be limited in efficiency. Still, I am eager to see what a 100 trillion or 1 quadrillion parameter GPT-5 with Adaptive Computation Time will do.
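For anyone unfamiliar with Adaptive Computation Time: it's the idea (from Graves, 2016) of letting the network decide how many internal steps to spend per input via a learned halting unit, rather than a fixed depth. Below is a minimal, illustrative sketch in PyTorch of that halting loop, not anything specific to GPT-style models; the module names, sizes, and the simple remainder clamp are my own assumptions.

    # Minimal sketch of an ACT-style halting loop (Graves, 2016).
    # Illustrative only: ACTCell, its sizes, and hyperparameters are made up.
    import torch
    import torch.nn as nn

    class ACTCell(nn.Module):
        def __init__(self, hidden_size, max_steps=10, eps=0.01):
            super().__init__()
            self.cell = nn.GRUCell(hidden_size, hidden_size)
            self.halt = nn.Linear(hidden_size, 1)  # learned halting probability
            self.max_steps = max_steps
            self.eps = eps

        def forward(self, x, h):
            total_halt = torch.zeros(x.size(0), 1)   # accumulated halting mass
            weighted_h = torch.zeros_like(h)         # halting-weighted state
            for _ in range(self.max_steps):
                h = self.cell(x, h)
                p = torch.sigmoid(self.halt(h))
                # Clamp the final step so halting weights sum to 1 (the "remainder").
                p = torch.min(p, 1.0 - total_halt)
                weighted_h = weighted_h + p * h
                total_halt = total_halt + p
                # Stop once every example has spent (almost) all of its halting budget.
                if (total_halt > 1.0 - self.eps).all():
                    break
            return weighted_h

The point is that compute per input becomes adaptive: easy inputs halt after one or two steps, hard inputs keep "pondering" up to max_steps.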