> Indeed. In fact, I think AI alignment efforts often have the unintended consequence of increasing the likelihood of misalignment.

Particularly since, in this case, it's the alignment-focused company (Anthropic) that's claiming it's creating AI agents that will go after humans.
