
> people who stand to profit from "AI"

Some of them, sure, but that doesn't seem to be true of plenty of the signatories to the open letter.

> the contents of this press release focus mostly on actual problems

Given that this is a set of entirely voluntary commitments made by the companies who you had thought were hyping x-risk in order to distract from actual problems... maybe it's worth updating your assessment of what the x-risk thing is about?



> maybe it's worth updating your assessment

Nope. Because I think the White House's press release is expressing the White House's view of what's important. That the White House didn't get distracted by the chaff says nothing about the chaff or those firing it off.


There's nothing in this release that the companies didn't agree to as well. That doesn't prove they're fundamentally motivated by what's best for humanity, but it does suggest it wouldn't be hard for them to just stick to PR about shorter-term safety concerns if no one working there (or externally) were genuinely concerned about longer-term "x-risk" etc.


I see no reason to think this isn't them sticking to short-term PR concerns. For a while now, the tech playbook with regulators has been to say vague, positive things, grudgingly agree to some bare minimum, and then mostly ignore or sandbag on their commitments while lobbying aggressively behind the scenes.


Pretending to play along with "requests" to "keep AI safe" while ignoring the harms it will cause in other ways, as the companies that own it try to gobble up literally all the money, is 100% in line with the Sam Altman style of "I deserve to be the richest motherfucker on the planet" thinking.


Do you think AI is capable of gobbling up "literally all the money" for its owners? If so, doesn't that suggest it's kinda dangerous?

FWIW I don't think these voluntary commitments are sufficient or address all of the important harms. But that's different from saying any government action is just "regulatory capture" and an attempt to keep open source models down. I'm only attempting to argue against the latter here, which I'm not sure is where you're coming from.


> Do you think AI is capable of gobbling up "literally all the money" for its owners?

Not particularly, unless barriers to competition in the space are artificially erected to enable it, which is the whole point of the industry-government game of footsie.


You're not the commenter I was addressing, but it sounds like your position is: AI will eat the entire economy but that's going to go fine as long as it's entirely unregulated so that the magic of the free market can fix any problems. Is that right? Or maybe you're not endorsing "literally all" in the literally-literally sense, in which case it would be helpful to spell out what value you think AI will capture (for monopolists or otherwise).


>AI is capable of gobbling up "literally all the money" for its owners?

No. I think wealthy, capital-rich companies can leverage AI to extract value from places that already extract value, simply by underpaying people, and can use stupid rhetoric and other methods they are very familiar with to continue buying up basically everything while most normal people struggle to survive.

I explicitly do not think "AI" or even AGI has any inherent danger in itself. It is only dangerous in the same way any automation is in a capitalist system that also gives richer people more power in the justice system and the political system: the rich will get richer by squeezing the non-rich even harder, while politicians won't care, both because most of them genuinely believe in capitalism as an unabashed and incorruptible good, and because the misinformation and discourse control enabled by passable bullshit generated with a single keystroke grants them far more power.



