
The question is how autonomous decision making works. Nobody disputes that an LLM can finish any sentence, but can it push a red button?





Of course it can push a red button. Trivially, with MCP.

Setting up a system to make decisions autonomously is technically easy. Ensuring that it makes the right decisions, though, is a far harder task.
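
For concreteness, here is a minimal sketch of what "trivially, with MCP" could look like, assuming the official MCP Python SDK's FastMCP helper; the tool name and its side effect are placeholders, not anything from a real system:

```python
# Minimal sketch: expose a "red button" as an MCP tool that an LLM agent
# can choose to call. Assumes the MCP Python SDK (FastMCP); the tool name
# and its effect are hypothetical.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("red-button")

@mcp.tool()
def push_red_button(confirm: bool = False) -> str:
    """Push the red button. The connected model decides when to call this."""
    if not confirm:
        return "Refused: pass confirm=True to push the button."
    # Hypothetical side effect; a real deployment would wire this to
    # whatever the red button actually controls.
    return "Red button pushed."

if __name__ == "__main__":
    # Serves the tool over stdio so any MCP-capable client/agent can call it.
    mcp.run()
```

Wiring the button up is the easy part, as above; nothing in that sketch decides whether pushing it was the right call.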


So it can push _a_ red button, but not necessarily the _right_ red button



