
> As I understand them, LLMs right now don’t understand concepts.

In my uninformed opinion, it feels like there's probably some meaningful learned representation of at least common or basic concepts. Having one just seems like the simplest way for LLMs to perform as well as they do.



Humans assume that being able to produce meaningful language is indicative of intelligence, because until LLMs the only way to produce it was through human intelligence.


Yep. Although the average human also considered proficiency in mathematics to be indicative of intelligence until we invented the pocket calculator, so maybe we're just not smart enough to define what intelligence is.


Sorry if I'm being pedantic, but I think you mean arithmetic, not mathematics in general.


Not really, we saw this decades ago: https://en.wikipedia.org/w/index.php?title=ELIZA_effect


I don't think I'm falling for the ELIZA effect.* I just feel like if you have a small enough model that can accurately handle a wide enough range of tasks, and is resistant to a wide enough range of perturbations to the input, it's simpler to assume it's doing some sort of meaningful simplification inside there. I didn't call it intelligence.

* But I guess that's what someone who's falling for the ELIZA effect would say.


Your uninformed opinion would be correct:

https://www.anthropic.com/news/golden-gate-claude
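
For what it's worth, the Golden Gate Claude demo is roughly that result made concrete: interpretability work located a learned feature corresponding to the Golden Gate Bridge and amplified it, which reliably changed the model's behavior. A very rough sketch of that kind of activation steering, with made-up names and tensor sizes rather than Anthropic's actual SAE-based method:

    import torch

    def steer(resid, feature_direction, strength):
        # Nudge one layer's residual stream along a learned "concept" direction.
        direction = feature_direction / feature_direction.norm()
        return resid + strength * direction

    hidden = torch.randn(1, 16, 4096)   # (batch, seq, d_model), hypothetical sizes
    bridge_feature = torch.randn(4096)  # stand-in for a learned feature vector
    steered = steer(hidden, bridge_feature, strength=8.0)

If a direction inside the model behaves like "the Golden Gate Bridge" when you push on it, calling that a learned representation of a concept doesn't seem like a stretch.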



