Hacker News

I think that's the same kind of attitude that makes a lot of people not take global warming seriously.

It's a way to process ideas you don't want to be true, sure, but it's not a sensible or cost-effective way to deal with potential threats.

(And yeah, you can argue AI x-risk isn't a potential threat because it's not real or whatever. That's entirely orthogonal to the "if it's true we're fucked so don't bother" line of argument.)




Except AI risk doesn't even have plausible models of the danger, only speculative ones: "an AI would be able to hack anything" is about as far as the thinking seems to go. It's not grounded in analysis of processing power or capabilities, rates of exploit discovery in software, or psychological testing of users to determine their susceptibility to manipulation.

Global warming, on the other hand, is heavily grounded in model building: we're building models all the time, taking measurements, hypothesizing, and then testing our models against future observations. We have simulations of the effect of increased energy input on specific climate systems, and we have estimates of model accuracy over different length scales.
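The fit-then-test loop described above can be sketched in a few lines. This is an illustrative toy, not a climate model: the "temperature anomaly" data is synthetic, and the trend model is a simple linear fit, but the workflow (fit on past data, then score against held-out later observations) is the one the comment describes.

```python
# Toy version of "build a model, then test it against future data".
# All data here is synthetic; the point is the validation workflow.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1950, 2020)
# Synthetic anomaly series: a modest linear trend plus noise.
anomaly = 0.015 * (years - 1950) + rng.normal(0.0, 0.1, size=years.size)

# Fit on the first 50 years; hold out the rest as "future" observations.
train, test = slice(0, 50), slice(50, None)
slope, intercept = np.polyfit(years[train], anomaly[train], deg=1)

# Score the model on data it never saw.
predicted = slope * years[test] + intercept
rmse = np.sqrt(np.mean((predicted - anomaly[test]) ** 2))
print(f"held-out RMSE: {rmse:.3f} degrees")
```

The held-out error is the honest number: a model that only fits the past can look arbitrarily good in-sample, which is exactly the discipline the commenter says is missing from AI-risk arguments.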

Where is this sort of analysis for AI safety? (If it exists, I'm genuinely interested, but I just don't see it.)


Have you ever tried to make a prediction model with more than 1 parameter?



