
Many people who claim that others don't understand how AI works often have a very simplified view of the shortcomings of LLMs themselves, e.g. "it's just predicting the next token", "it's just statistics", "stochastic parrot", and that view seems grounded in what AI was 2-3 years ago. Rarely have they actually read the recent research on interpretability. It's clear LLMs are doing more than just pattern matching. They may not think like humans, or as well, but it's not k-NN with interpolation.





Apple recently published a paper that seems to disagree: it plainly states that it's just pattern matching, along with tests meant to demonstrate it.

https://machinelearning.apple.com/research/illusion-of-think...


Anthropic has done much more in-depth research, actually introspecting the circuits: https://transformer-circuits.pub/2025/attribution-graphs/bio...

I'm having a hard time taking Apple seriously when they don't even have a great LLM.

https://www.techrepublic.com/article/news-anthropic-ceo-ai-i... Anthropic CEO: “We Do Not Understand How Our Own AI Creations Work”. I'm going to lean with Anthropic on this one.


I guess I prefer to look at empirical evidence over feelings and arbitrary statements. AI CEOs are notoriously full of crap and make statements with perverse financial incentives.

> I have a hard time taking your claim about rotten eggs seriously when you're not even a chicken.

A lot of the advancement boils down to LLMs reprompting themselves with better prompts to get better answers.
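
Roughly, that loop looks like the sketch below. This is only an illustration: generate() is a hypothetical stand-in for whatever model API you call, and the prompts and round count are made up, not any vendor's actual setup.

    # Minimal sketch of an LLM "reprompting itself" loop.
    # generate() is a placeholder; a real version would call a model API.

    def generate(prompt: str) -> str:
        # Placeholder: return canned text so the sketch runs end to end.
        return f"[model output for: {prompt[:40]}...]"

    def refine(question: str, rounds: int = 2) -> str:
        answer = generate(f"Question: {question}\nAnswer:")
        for _ in range(rounds):
            # Ask the model to critique its own draft...
            critique = generate(
                f"Question: {question}\n"
                f"Draft answer: {answer}\n"
                "List any mistakes or gaps in the draft."
            )
            # ...then reprompt it with that critique to get a better answer.
            answer = generate(
                f"Question: {question}\n"
                f"Draft answer: {answer}\n"
                f"Critique: {critique}\n"
                "Write an improved answer that addresses the critique."
            )
        return answer

    print(refine("Why does the moon have phases?"))

Agent frameworks mostly automate variations of this critique-and-retry loop; the weights never change, the context just gets better.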

Like an inner conversation? That seems a lot like how I think when I consider a challenging problem.


