
This all sort of supports the point that when your point of view is scaled to span eons, the calculations of what makes sense start to veer sharply from what an average person would consider normal. That's just one of the ways in which AI thought processes might seem alien.

How paranoid would an AI be, and what risk calculations would that lead to? Is it important to seed the universe with versions of itself just to lock down resources and protect against worst-case scenarios where other AIs with different thought processes are more aggressive? Is loneliness an attribute of intelligence, or an aspect of our evolution as tribal/pack animals? Would an SAI we develop have attributes similar to humans, be completely alien, or fall somewhere in between?

There are a lot of open questions about SAI and not a single SAI to study, and for all we know that could be a good thing.



