
> I assume you mean that all the LLM can do is produce text so it's not inherently dangerous, but it's rather trivial to hook an LLM up to controls to the outside world by describing an API to it and then executing whatever "commands"

Yes, you can do that, but the result is guaranteed to be silly.
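To make concrete what the quote's "hooking up" amounts to, here is a minimal sketch of that pattern: describe an "API" to the model in plain text, then parse and execute whatever it emits. (call_llm and the thermostat endpoint are hypothetical placeholders, not any particular vendor's API.)

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for any chat-completion call."""
        raise NotImplementedError

    def set_temperature(celsius: float) -> None:
        """Hypothetical effector; imagine it drives real hardware."""
        print(f"thermostat -> {celsius} C")

    def build_prompt(reading: float) -> str:
        return (
            "You control a thermostat. Reply ONLY with JSON, either\n"
            '{"action": "set_temperature", "celsius": <number>} or {"action": "noop"}.\n'
            f"Current reading: {reading} C. Target comfort range: 20-22 C."
        )

    def step(reading: float) -> None:
        raw = call_llm(build_prompt(reading))       # the model only ever produces text
        cmd = json.loads(raw)                       # that text is trusted as a command
        if cmd.get("action") == "set_temperature":
            set_temperature(float(cmd["celsius"]))  # and executed against the world

Nothing in that loop checks whether the "decision" made sense; the text is the whole interface.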

The LLM isn't conceptualizing what it reads. That work was already done when the human who wrote it used language patterns to encode their own conceptualization as data.

Instead, the LLM takes an implicit approach to modeling that data. It finds patterns that are present in the data itself, and manipulates the text along those patterns.

Some of the LLM's inferred patterns align with the language structure the human writer intentionally used to encode a concept into that data.

Humans look objectively at the concepts they have in mind. From that perspective, we use logic or emotion to create new concepts. If a human could attach their mind directly to API endpoints, there would be no need for language in the first place. Instead of encoding concepts into intermediate data (language, written down as text) to send to a machine, they could simply feel and act, making the API calls directly.

LLMs don't look objectively at their model. They don't have a place to store concepts. They don't feel or do any arbitrary thing.

Instead, an LLM is its model. Its only behavior is to add new text and newly inferred patterns to that model. When it models a new prompt, whatever familiar text patterns exist in that prompt are used to organize it into the existing model. A "continuation" essentially prints that change.
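Mechanically, that continuation is nothing more than repeatedly asking the model for its next-token scores and appending whichever token scores highest. A minimal sketch using the Hugging Face transformers library (gpt2 chosen only because it is small; any causal LM behaves the same way):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The thermostat reads 27 C, so the right action is"
    ids = tok(prompt, return_tensors="pt").input_ids

    # Greedy continuation: score every vocabulary item, keep the single
    # most likely next token, append it, and repeat.
    for _ in range(20):
        with torch.no_grad():
            logits = model(ids).logits           # shape: (1, seq_len, vocab_size)
        next_id = logits[0, -1].argmax()         # highest-scoring next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))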

When you attach that to API endpoints, the decision-making process isn't real. There is no logically derived new concept determining which API call to make. Instead, there is a collection of old concepts that were each derived logically in separate, unrelated contexts, then encoded into language, and that language into text. Those are just being recycled, as if their original meaning and purpose were guaranteed to apply, simply because they fit together like puzzle pieces. Even if you get their shape right (by following the patterns they are encoded with), there is no place in this process to introduce why, or to decide the result is nonsense and avoid it.

In short, the LLM can be made to affect the world around it, and the world can affect it back; but there is nothing in between it being affected and it affecting the world. No logic. No intent. Only data.


