
For LLMs we have: "for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments".



For either option you can trace the intent behind the definitions back to "was it a human coding the decision or not?" Did a human decide the branches of the literal or figurative "if"?
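
To make that concrete, here's a minimal sketch (a hypothetical loan-approval example; the function names, numbers, and labels are invented for illustration). In the first function a human wrote the branch and chose the threshold; in the second, the branch is learned from data and no human ever authored it:

  # Human-coded decision: a person wrote this branch and picked the threshold.
  def approve_loan_rule(income, debt):
      return income > 3 * debt

  # Model-inferred decision: the branching is learned from past data;
  # no human wrote the resulting rule, or may even be able to state it.
  from sklearn.tree import DecisionTreeClassifier

  X = [[50_000, 10_000], [20_000, 15_000], [80_000, 5_000], [30_000, 25_000]]
  y = [1, 0, 1, 0]  # past approval decisions used as training labels
  model = DecisionTreeClassifier().fit(X, y)

  def approve_loan_model(income, debt):
      return bool(model.predict([[income, debt]])[0])

Both functions take the same inputs and emit the same kind of decision; the definitions above are trying to separate them by who authored the branching logic.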

The distinction is accountability: determining whether a human decided the outcome, or whether it was decided by an opaque black box in which data is algebraically twisted and turned in ways no human can fully predict today.

Legally that accountability makes all the difference. It's why companies scurry to use AI for all the crap they want to wash their hands of. "Unacceptable risk AI" will probably simply mean "AI where no human accepted the risk", and, with it, the legal repercussions for the AI's output.


This would be an excellent outcome (and probably the one intended).



