
> We've had inquisitions historically, I would very much support a global AI inquisition.

You know how inquisitions usually go. "You've been playing StarCraft against 'the AI'. What do you have to say in your defense?"

Maybe I'm just weird, but "a tech-based superintelligence emerges and decides to destroy humanity because it was in a bad mood" is something I'm totally fine with. Doom cults, on the other hand, I don't like at all.



> Maybe I'm just weird, but "a tech-based superintelligence emerges and decides to destroy humanity because it was in a bad mood" is something I'm totally fine with.

It seems important to reflect on what you’re really saying here, and to ask a) how this is fine, and b) whether “weird” sufficiently sums up the stance.


I'm fine with humans having outcompeted the neanderthals and whatever else there was. I'm just as fine with someone else doing it to us, at least if it's someone or something that is smarter than us and not some virus or bacteria that kills us all (which would be kind of lame). If an artificial superintelligence wants to take over the flame of progress, I'm fine with that.

But I'm also totally fine with considering that stance weird (or whatever word you'd prefer). I'm aware that others view these things very differently, just as others are much, much more worried about their individual demise than I am worried about mine (or theirs).


Those are not similar/comparable outcomes and I think there are two major factors that can’t be hand-waved away:

1. Consciousness. Despite advances in “intelligence”, we still have a very limited understanding of what makes us conscious, and whether or not consciousness can emerge from machines.

If machines are not conscious and are all that remain, I’d argue everything that could be construed to have value by humans is lost, and nothing from that point forward could be considered “progress”.

2. Suffering. “Winning” on an evolutionary timescale looks nothing like the failure modes of machines taking over. The reality of this scenario is a rather grim one, and not at all like the slow emergence, competition and eventual extinction of biological species.

And depending on #1, the true tragedy of #2 begins to take shape.

I think it’d be more apropos to frame this as humanity collectively committing suicide rather than some notion of the future of progress.

If consciousness is the universe experiencing itself, what you’re describing sounds like a kind of universal death.

Of course we can’t know what consciousness really is (or if earth is the only place it exists), but that seems like all the more reason to take these problems seriously.


Or whether consciousness is more than an illusion. Fair points. I'm not sure you can have a general intelligence without some form of consciousness. I don't believe in gods or souls, so I lean towards us not being special.

As for the how, I agree with you. I'd prefer it to not be terminators crushing heads under their iron feet while allowing us just enough room to run and hide and live in constant terror for centuries. But I doubt it will be, the power delta will be too large. It'll be like a game of Civilization where Gandhi is advancing on you with modern tanks and fighter planes while you've barely discovered the wheel.


I think a lot of this stuff is probably just a coin flip, because we just don’t know.

I find it odd, however, when people who seemingly accept that an ASI will be developed focus the downside risk on our extinction as a species. Sure, that’s a risk. You know something else an ASI (or at least one of its progeny) would likely have the ability to do? Keep you alive and torture you for eons in weird and surreal ways.

Why is the downside always focused on paltry, meaningless things (relatively speaking) like extinction of life on earth?


Hell is real, and we are the gods that made it.

(Is how I imagine we could eventually reflect depending on how all of this goes).


> Keep you alive and torture you for eons in weird and surreal ways.

Why would it though? When people are annoyed by a bug they crush it, they don't spend their life setting it up in a torture chamber. There's the movie psychopath that tortures insects and animals and eventually humans, but I don't think it's their intellect that drives their sadism.

At worst, I imagine we'd be lab rats, quite literally the way we treat lab rats today. But with a superintelligence far beyond our abilities that does not care about us besides as a potential threat, why would it need us for testing?



