
How do you define "LLMs don't understand concepts"?

How do you define "understanding a concept" - what do you get if a system can "understand" concept vs not "understanding" a concept?



Didn't Apple have a paper proving this very thing, or at least addressing it?


That's a good question. I think I might classify that as solving a novel problem. I have no idea whether LLMs can currently do that consistently. Maybe they can.

The idea that "understanding" may be able to be modeled with general purpose transformers and the connections between words doesn't sound absolutely insane to me.

But I have no clue. I'm a passenger on this ride.



