
The difference is that when AI exhibits behavior like that, you can refine the AI or add more AI layers to correct it. For example, you might create a supervisor AI that evaluates when more requirements are needed before continuing to build, and a code review AI that triggers refinements automatically.
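A minimal sketch of that layered setup, with stub functions standing in for the actual model calls (all names and the heuristics here are hypothetical, just to show the control flow): a supervisor gates the build until requirements look sufficient, and a reviewer loop triggers refinement automatically.

```python
# Hypothetical sketch of layered AI oversight. The three "AI" callables
# are stubs; in practice each would be a call to a real model.

def supervisor_ai(requirements: list[str]) -> bool:
    """Stub: decide whether requirements are complete enough to build."""
    return len(requirements) >= 2  # placeholder heuristic

def builder_ai(requirements: list[str]) -> str:
    """Stub: produce code from requirements."""
    return "code implementing: " + ", ".join(requirements)

def reviewer_ai(code: str) -> list[str]:
    """Stub: return review findings; an empty list means approved."""
    return [] if "error handling" in code else ["add error handling"]

def build_with_oversight(requirements: list[str]) -> str:
    # Supervisor layer: refuse to build until requirements suffice.
    if not supervisor_ai(requirements):
        raise ValueError("supervisor: gather more requirements first")
    code = builder_ai(requirements)
    # Review layer: feed findings back in until the reviewer approves.
    while findings := reviewer_ai(code):
        requirements = requirements + findings
        code = builder_ai(requirements)
    return code

print(build_with_oversight(["parse input", "write output"]))
```

The point is structural: each correction mechanism is just another model call wrapped around the first one.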


The question is how autonomous decision-making works. Nobody argues that an LLM can finish a sentence, but can it push a red button?


Of course it can push a red button. Trivially, with MCP.

Setting up a system to make decisions autonomously is technically easy. Ensuring that it makes the right decisions, though, is a far harder task.
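To illustrate why the wiring is the easy part: in a tool-calling setup (which is all MCP really standardizes), the model emits a tool name and the harness executes it. This is a hypothetical stub, not the real MCP SDK; a real server would register tools over the protocol, but the shape of the problem is the same.

```python
# Minimal tool-dispatch sketch in the spirit of MCP. All names here are
# hypothetical stand-ins, not actual MCP APIs.

def push_red_button() -> str:
    return "red button pushed"

def do_nothing() -> str:
    return "no action taken"

TOOLS = {"push_red_button": push_red_button, "do_nothing": do_nothing}

def fake_model_decision(prompt: str) -> str:
    """Stub for an LLM call that returns the name of a tool to invoke."""
    return "push_red_button"  # trivially, the model can choose this

def run_agent(prompt: str) -> str:
    choice = fake_model_decision(prompt)
    # Executing the choice is trivial; deciding whether this was the
    # *right* choice is the hard, unsolved part.
    return TOOLS[choice]()

print(run_agent("should we launch?"))
```

Hooking the button up takes a dozen lines; everything hard lives inside `fake_model_decision`.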


So it can push _a_ red button, but not necessarily the _right_ red button




