
They're behind Anthropic and were behind OpenAI being a nonprofit. They're behind the friendly AI movement and effective altruism.

They're responsible for funneling huge amounts of funding away from domain experts (effective altruism in practice means "Oxford math PhD writes a book report about a social sciences problem they've only read about and then defunds all the NGOs").

They're responsible for moving all the AI safety funding away from disparate-impact measures to "save us from Skynet" fantasies.



I don't see how this is a response to what I wrote. Can you explain?


I think GP is saying that their epistemic humility is a pretense, a pose. They do a lot of throat-clearing about quantifying their certainty and error-checking themselves, and then proceed to bring about very consequential outcomes anyway, for absurd reasons, with predictable side effects that they should have considered but didn't.


Yeah. It's not that they never express uncertainty; it's that they like to express it as arbitrarily precise, convenient-looking expected-value calculations, which often look far more like a rhetorical tool to justify their preferences ("I've accounted for the uncertainty, and even given a credence as low as 14.2% I'm still right!") than a decision-making heuristic...


What NGOs are the rationalists or effective altruists responsible for killing or defunding?



