
There's a plausible argument for it, so it's not a crazy claim. You as a human being can also predict likely completions of partial sentences, or likely lines of code given the surrounding code, or handle similar tasks. You do this by having some understanding of what the words mean and what the purpose of the sentence/code is likely to be. Your understanding is encoded in connections between neurons.

So the argument goes: LLMs were trained to predict the next token, and the most general way to do that successfully is to encode a real understanding of the semantics.
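
For concreteness, here's a minimal sketch (not anyone's actual training code) of what "trained to predict the next token" means in a PyTorch-style setup: the model sees tokens up to position t and is scored with cross-entropy on predicting the token at t+1. The tiny embedding+linear "model" here is just a stand-in for a transformer, and all the names and sizes are illustrative.

    import torch
    import torch.nn as nn

    vocab_size, d_model = 1000, 64
    model = nn.Sequential(                      # stand-in for a real transformer
        nn.Embedding(vocab_size, d_model),
        nn.Linear(d_model, vocab_size),
    )

    tokens = torch.randint(0, vocab_size, (1, 16))   # one example token sequence
    inputs, targets = tokens[:, :-1], tokens[:, 1:]  # targets are inputs shifted by one

    logits = model(inputs)                           # shape (1, 15, vocab_size)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, vocab_size), targets.reshape(-1)
    )
    loss.backward()   # minimizing this loss is "predicting the next token"

The claim in the argument is that, at scale, the cheapest way to drive this loss down across all of human text is to represent what the text is actually about, not just surface statistics.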


