
The commenter is merely saying that LLMs can indeed approximate arbitrary functions, with sorting as the example.

This is nothing new; it has been well established in the literature since the 90s.

The shared article really isn't worth the read and mostly reveals an author who doesn't know what he's writing about.
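A minimal sketch of that claim (assuming PyTorch; the layer sizes and training budget are arbitrary choices, not from the article): a small feed-forward network fit by regression to output its 3-element input in sorted order.

    import torch
    import torch.nn as nn

    # Plain MLP: 3 numbers in, 3 numbers out.
    model = nn.Sequential(
        nn.Linear(3, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 3),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for step in range(5000):
        x = torch.rand(256, 3)                      # random inputs in [0, 1)
        y = torch.sort(x, dim=1).values             # target: the input, sorted
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, the output should be close to the sorted input:
    print(model(torch.tensor([[0.9, 0.1, 0.5]])))   # roughly [0.1, 0.5, 0.9]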



You’re talking specifically about perceptrons and feed-forward neural networks.

LLMs didn’t exist back then. Attention only came out in 2017…


Yes? Are you saying that attention is less expressive?


I’m saying that LLMs (models trained specifically on language) are not automatically capable of the same generic function approximation.

The network architecture itself can be trained to approximate most functions (or all — the universal approximation theorem covers continuous functions on compact domains, not literally every function).

But the language model is not necessarily capable of approximating all functions, because it was already trained on language.
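A minimal sketch of that distinction (again assuming PyTorch; the toy network and the sorting target are placeholders, not anything from the thread): the architecture below could be trained toward many different targets, but any one fixed set of weights computes only the function it was fit to.

    import torch
    import torch.nn as nn

    # An expressive *family* of functions (one function per weight setting).
    net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 3))

    x = torch.rand(4, 3)
    print(net(x))                       # whatever the current weights happen to compute
    print(torch.sort(x, dim=1).values)  # a function we might want; these weights won't
                                        # produce it without training for that target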



