
My problem with takes like this is that they presume a level of understanding of intelligence in general that we simply do not have. We do not understand consciousness at all, much less consciousness that exhibits human intelligence. How are we to know the exact conditions that result in human-like intelligence? You're assuming that there isn't some emergent phenomenon that LLMs could very well achieve, but have not yet.




I'm not making a philosophical argument about what human-like intelligence is. I'm saying LLMs have many weaknesses that make them incapable of performing basic functions that humans take for granted, like counting and recall.

I go into much more detail here: https://news.ycombinator.com/item?id=45422808

Ostensibly, AGI might use LLMs in parts of its subsystems. But the technology behind LLMs doesn't adapt to all of the problems that AGI would need to solve.

It's a little like how the human brain isn't just one homogeneous grey lump. There are different parts of the brain that specialize in different parts of cognitive processing.

LLMs might work for language processing, but that doesn't mean they would work for mathematical reasoning -- and in fact we already know they don't.

This is why we need tools / MCPs: ways of turning problems LLMs cannot solve into standalone programs that the LLM can "cheat" with, asking them for the answers instead of trying to compute them itself.
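
To make that concrete, here's a minimal sketch of the tool pattern. It's framework-agnostic -- the tool names and the shape of the tool-call dict are made up for illustration, not any specific MCP API. The model only decides which tool to call and with what arguments; the exact counting and arithmetic happen in ordinary code:

    # Minimal sketch of the "tool" pattern: the LLM picks a tool and its
    # arguments; exact computation happens in plain code, not in the model.
    # Tool names and the call format are illustrative, not a real MCP schema.

    def count_letters(word: str, letter: str) -> int:
        """Exact counting -- the kind of task LLMs often get wrong unaided."""
        return word.count(letter)

    def evaluate_sum(numbers: list[float]) -> float:
        """Exact arithmetic delegated to ordinary code."""
        return sum(numbers)

    TOOLS = {
        "count_letters": count_letters,
        "evaluate_sum": evaluate_sum,
    }

    def run_tool_call(tool_call: dict):
        """Dispatch a tool call of the form the model would emit, e.g.
        {"name": "count_letters", "args": {"word": "strawberry", "letter": "r"}}."""
        fn = TOOLS[tool_call["name"]]
        return fn(**tool_call["args"])

    if __name__ == "__main__":
        # Simulated model output: the model asks the tool instead of guessing.
        print(run_tool_call({"name": "count_letters",
                             "args": {"word": "strawberry", "letter": "r"}}))  # -> 3

The point is that correctness comes from the dispatched function, not from the model's token predictions -- the LLM's job shrinks to routing the question to the right program.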



