Hacker News

Well, I don't see why we need to mangle the jargon. "Language model" has an old meaning from NLP (which still applies): a computer model of language itself, most commonly a joint probability distribution over words or sequences of words, which is what LLMs are too. Prompted replies are literally conditional probability distributions conditioned on the context you give it. "Foundation model" is a more general term I see a lot.
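To make "joint probability distribution over sequences of words" concrete, here is a minimal sketch using a toy bigram model; the corpus, function names, and factorization shown are illustrative assumptions, not anything from the original comment:

```python
from collections import Counter

# Toy corpus (hypothetical). A bigram model factors the joint
# probability of a sequence into conditionals P(w_i | w_{i-1}).
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams and the contexts (all tokens that precede something).
bigrams = Counter(zip(corpus, corpus[1:]))
contexts = Counter(corpus[:-1])

def cond_prob(word, prev):
    """P(word | prev), estimated by maximum likelihood from counts."""
    return bigrams[(prev, word)] / contexts[prev]

def sequence_prob(words):
    """Joint probability of a sequence under the bigram factorization:
    P(w_1..w_n) ~ product of P(w_i | w_{i-1}), ignoring the start term."""
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= cond_prob(word, prev)
    return p
```

Prompting is the same idea at scale: the "reply" is drawn from the model's distribution over continuations, conditioned on the prompt as context.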

To say a model is "just an LLM" is presumably to complain that it has no added bells and whistles that someone thinks are required beyond the statistical model above. And maybe I missed the point, but the author seems to be saying "yes, it's just an LLM, but LLMs are all you need".



