
My attitude about AI safety is this: if AI is an existential risk dangerous enough to justify draconian measures, we're ultimately fucked, since those measures would have to be perfect for all future times and places humans exist. Not a single lapse could be allowed.

And it's just not plausible humanity could be that thorough. So, we might as well assume AI is not going to be that dangerous and move ahead.




Your first paragraph is exactly how I feel about nuclear weapons, to put it into context. I don’t think the logical conclusion from that viewpoint is that nuclear weapons aren’t that dangerous so we should just move ahead.


The difference is that it is somewhat feasible to control access to the materials necessary to produce nuclear weapons.

It is not remotely feasible to control access to computing devices.


I don't think nuclear weapons are the kind of existential risk that AI doomsters imagine for AI.


Other than those who have called for nuking AI datacenters.


I feel obligated to point out that nobody has argued for nuking datacenters; the most radical thing AI existential-safety advocates have argued for is "have a ban on advanced training programs, enforced with escalating measures from economic sanctions and embargoes to, yes, war and bombing datacenters". Not that anybody is optimistic about that idea working.


That presumably demonstrates they think nuclear war is less dangerous than AI.


I think it has been empirically demonstrated that lapses in regards to the control and use of nuclear weapons can occur without the destruction of humanity.

(I am not an AI doomer, nor do I feel that nuclear weapons are not dangerous/should be less controlled)


I think that's the same kind of attitude that makes a lot of people not take global warming seriously.

It's a way to process ideas you don't want to be true, sure, but it's not a sensible or cost-effective way to deal with potential threats.

(And yeah, you can argue AI x-risk isn't a potential threat because it's not real or whatever. That's entirely orthogonal to the "if it's true we're fucked so don't bother" line of argument.)


Except AI risk doesn't even have plausible models of the danger. It has speculative ones: "AI would be able to hack anything" is about as far as the thinking seems to go. It's not grounded in analysis of processing power or capabilities, rates of exploit discovery in software, or psychological testing of users to determine their susceptibility to manipulation.

Global warming, on the other hand, is heavily grounded in model building - we're building models all the time, taking measurements, hypothesizing, and then testing our models against the data as it comes in. We have simulations of the effect of increased energy in specific climate systems, and we have estimates of model accuracy over different length scales.

Where is this sort of analysis for AI safety? (if it exists I'm genuinely interested, but I just don't see it).


Have you ever tried to make a prediction model with more than 1 parameter?


I guess, but it's similar to how diplomacy and international agreements need to be perfect forever to prevent nuclear war - yet so far that has worked, and it's worth it to keep trying.


Good AIs may outcompete bad AIs, so it's only a matter of making sure the good one arrives first.


That's the solve. We'll never succeed at stopping progress, so the best we can do is make sure the ones who get it first are the "good guys". It's the same as nuclear weapons.


At that point, there's nothing left. Will you fight, or will you perish like a dog?

However, you don't have to work on the nastiest parts of the probability space: since no one has a perfect world model, improving the other scenarios should still reduce the overall extinction probability estimate.
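To put that in simplistic terms (my own notation, with S_i standing for the different scenarios): by the law of total probability,

P(\text{extinction}) = \sum_i P(S_i)\, P(\text{extinction} \mid S_i)

so lowering the conditional risk in any scenario lowers the sum, even if the nastiest term stays untouched.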



