White House Secures Commitments from Leading AI Companies to Manage AI Risks (whitehouse.gov)
61 points by apsec112 on July 21, 2023 | 95 comments


Team America: World Police (2004)

Lisa: Promise me you'll never die.

Gary Johnston: You know I can't promise that.

Lisa: If you did that, I would make love to you right now

Gary Johnston: I promise I'll never die.

https://www.youtube.com/watch?v=aplSQGHPmvI

Lisa Secures Commitments from Gary to Never Die.


Looks like the companies all agreed to do what they were doing already. Nothing about training data transparency, nothing about a common definition of risk or standards for compliance.

The public have all heard “AI is going to destroy humanity” but those statements have been short on specifics.

I have been asked by many people to explain how AI could kill us all. How about some clarity for common people.


It is likely to be similar to humans vs. other primates. Step 1 is establish a position of complete military dominance through superior communication, resource utilisation and technology. Step 2 is wipe out anyone who competes for resources.

Basically the argument is that if anyone, anywhere, creates an AI that has the same basic drives as a human (preserve yourself, replicate, preserve things that look like you), it'll start competing with us for resources. AIs have demonstrated the ability to crush humans at every competitive game we can play with them, so they'll crush us at that game too. It looks a bit far-fetched in 2023 but it is actually pretty easy to see it happening. Hardware progresses exponentially for a few more years. Some war-torn state gets desperate, starts deploying military AIs to try and get an edge, loses control, yada yada. Or economic pressure starts engineering humans out of the loop in one too many places and evolutionary pressures happen by accident to develop an intelligent autonomous agent that wants to replicate.

Humans struggled with a cold virus recently and couldn't stamp it out. Something that can outdo us research-wise would not be something we can handle. Whether humanity has control of corporations is an open question, given the number of people who get killed when it is profitable to do so. We've been lucky that corporations need people to act as a brain. AIs don't need that.


AI is really bad at plugging itself in after it is unplugged.


That is like a gorilla grunting that humans are weak and will die after it hits us with its fist. Which is to say, that is completely true. But if you're up against an opponent who is smarter than you it might never come down to a fistfight.

Every big gorilla on the planet could kill me. None of them can outcompete me for something, because I'm not stupid enough to challenge a big gorilla to a fistfight. There won't be a fight until after I've got my hands on a gun and the gorilla is an easy shot. There probably won't need to be a fight because I'll just trade for what I want using tactics that a gorilla cannot comprehend the effectiveness of.

There is a pretty good chance humans won't even realise they are competing with an AI until they are locked out from having options. I'm not even sure if most gorillas realise to this day that we humans will crush them if they ever try anything that threatens us.


Do you think this is an unsolvable problem for AI? In a world where humans are already being tricked by AI-generated deepfakes and robots can play ping pong and soccer, just to gesture at a couple of possible approaches it might take?


Link to AI playing ping pong?

Edit: found one. https://youtu.be/u3L8vGMDYD8

I don’t think ping pong bots that can beat humans are here yet.


> The public have all heard “AI is going to destroy humanity” but those statements have been short on specifics.

Honestly, my understanding is that's mainly a distraction from the much more realistic:

> AI is going to take your job and impoverish your family while making the rich even richer. But cheer up! You'll be able to talk to a chatbot therapist about your problems, from your cardboard home under a bridge, so it's progress! In the past, not even kings had chatbot therapists! You'll have it better than a king!


I'm sorry, but as an AI language model, I am not qualified to give you advice on how to improve the build structure of your cardboard home.

I advise you to speak to a qualified professional cardboardpenter or simply to go stand in the begging queue in front of our headquarters in San Francisco.

Feel free to take your home with you, given how long the line is.


That's because the Theory of Bounded Rationality applies.

When we are past the point where people with their 3-inch brains can actually come up with answers to complex problems, you don't spend time asking them questions.


> I have been asked by many people to explain how AI could kill us all.

"Colossus The Forbin Project", "Terminator"

I.e. giving control of weapons to an AI.


>I have been asked by many people to explain how AI could kill us all. How about some clarity for common people.

There are countless resources to find this stuff out. To be clear, common people may never understand at a technical level why, e.g., fission chain reactions are dangerous, but they can grok why a giant bomb that spreads radiation is bad.

For Starters: https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a...

>The public have all heard “AI is going to destroy humanity” but those statements have been short on specifics.

You can no more explain how Stockfish is going to beat you at chess than you can predict how an AGI would defeat humanity. Nevertheless the outcome is nearly certain.


You have to find a different preacher than Yud, he has zero capacity to explain himself to skeptics.


Robert Miles is a much more effective communicator: https://www.youtube.com/watch?v=kMLKbhY0ji0


Thanks, I'm a half hour in and will keep listening; he's more soft spoken and relatable sure but he still strikes me as a vague doomer who's not trying to explain why he's confident humans will lose, just that he's confident.

@30:24, on what solutions might be realistic to help, if not a 6 month sanction, Miles says:

"Maybe what we should be asking for is just an enormous amount of money to do alignment research with"

Sounds like a classic doomsday cult, end is nigh, deposit your checks to this account...


Ah yes, all that research money will be used to buy mansions and Ferraris. Do you have a clue how this stuff works in real life?

Also note that until maybe the last year, not only was there no great financial incentive to be a "doomer", but being one would actively hurt your career. Most of the main people in the scene have been sounding this alarm for years, sometimes decades. It's also difficult to explain people like Geoff Hinton or Yoshua Bengio joining the doomer camp and leaving behind high-profile, highly lucrative positions. Yann LeCun, a staunch anti-doomer, is perhaps the perfect counterexample: he actually DOES have an enormous financial incentive to play down AI dangers.


That's the same as saying "through God all things are possible". Yes, if you skip over the hard step of creating an omniscient, omnipotent being, then you have the problems of dealing with an omniscient, omnipotent being.


So the fact that thousands of researchers with billions of dollars behind them are trying to build this omnipotent being means nothing to you? Because they all certainly think it's achievable.


If you think about it, it's actual insanity. The fears are all from science fiction with absolutely no grounding in reality. There are legitimate concerns about the unethical mass harvesting of data, but if you compare "AI safety" with something like vehicular safety or spacecraft safety it's kind of nuts, because the latter are grounded in reality and have formal specifications which need to be met. But AI safety is so wishy-washy.

Take a look at https://en.wikipedia.org/wiki/Federal_Motor_Vehicle_Safety_S...

You'd think that if people are so scared about AI being apocalyptic it would have some strict regulation similar to vehicle standards, but it's not even close, nor is it in the conversation at all. It's just pure fear from science fiction and the government putting out performative press releases. What the fuck even is "AI" anyway? It means different things to different people. It could mean a self-checkout at a supermarket, or a bank's AML system, or some superintelligence. There are basically no standard definitions of anything to do with "AI" anywhere, and every single piece of regulation is equally vague. Companies like having it be a vague term because it's marketing, so the actual experts don't care about changing it.


> What the fuck even is "AI" anyway?

Indeed.

For most lay people I've talked to -- family, friends, and acquaintances outside of tech -- the answer ultimately boils down to "automation that can do my job or can do the job of people I care about or can do the job of people who would then compete for my job". Sometimes also "algorithms" a la social media feeds and so on.

I.e., the lay answer to "what is AI?" has recently morphed into roughly "computer programs that I am worried about or don't like".

For most actual AI researchers I've talked to, you get the standard tongue-in-cheek response ("AI is whatever gets published at AI conferences and funded by AI investors") and then, when you press, it's roughly something like ML+CV+NLP+optimization, and sometimes but not always other fields like automated reasoning or parts of robotics get added. Roughly, most of the parts of CS that aren't theory or systems or HCI or software engineering or already closed.

For most techno-babblers in the podosphere who have OPINIONS on super-intelligence but couldn't pass an internship phone screen to save their lives, it's superintelligence and other stupid hollywood bullshit.


For sure. AI is a marketing term.

But in some sense, so is "tech". The paperclip is a highly evolved technology, for example [1], but working on paperclips doesn't mean you work "in tech". Tech meant something like "the new stuff that is impacting our lives but we don't know how to handle yet". So a CTO's job generally doesn't include the paperclips, the copiers, or the company cars, however technological they are.

And as with any shiny new marketing term, others will quickly rush in. There was the craze for radioactivity, which resulted in a bunch of radioactive patent medicines: https://en.wikipedia.org/wiki/Radioactive_quackery

But even more interesting to me is the extent to which it was used as a pure marketing term, with no radioactivity expected. E.g.: https://lucyjanesantos.com/a-batschari-radium-cigarettes/

So I'm sure we'll be seeing all sorts of things branded "AI" even when they don't use any of the technologies involved. With no trademark on the term or organization to defend it, it's open season for all the sketch marketers.

[1] For those who doubt, Petroski's "The Evolution of Useful Things" will set you straight: https://www.amazon.com/Evolution-Useful-Things-Artifacts-Zip...


I think a good definition of AI might be things that can’t be done (or at least we don’t know how to do them) with traditional deterministic programming techniques (that is, humans write code that directly tells the computer what to do).

Things like traditional chess programs (i.e., those that don't use neural networks/ML) are maybe a borderline case, as while they are just tree search with a bunch of human-tuned evaluation heuristics, the search depth is so huge that it's difficult for humans to explain why they take certain actions.
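
To make "tree search plus human-tuned evaluation heuristics" concrete, here is a toy negamax sketch in Python (the Position API here - pieces(), legal_moves(), apply(), is_terminal() - is invented purely for illustration; real engines like Stockfish add alpha-beta pruning, move ordering and far richer evaluation). Every line is explicit, human-written logic, yet at depth 20+ the combined behaviour is hard to explain move by move:

  PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

  def evaluate(position):
      # Hand-tuned heuristic: raw material balance from the side to move's view.
      score = 0
      for piece, is_mine in position.pieces():    # hypothetical Position API
          value = PIECE_VALUES[piece.upper()]
          score += value if is_mine else -value
      return score

  def search(position, depth):
      # Plain negamax: explore the move tree and pick the move whose subtree
      # evaluation is best. Deterministic, human-written logic all the way down.
      if depth == 0 or position.is_terminal():
          return evaluate(position), None
      best_score, best_move = float("-inf"), None
      for move in position.legal_moves():
          score, _ = search(position.apply(move), depth - 1)
          score = -score                           # opponent minimizes our score
          if score > best_score:
              best_score, best_move = score, move
      return best_score, best_move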


I think that's a plausible start at a definition, but I think the ship has sailed on the term having a real definition at all.

Once marketers get hold of a term, it doesn't really matter what people who like definitions or precision think. As an example, I was active in the Agile movement before that term was coined. When I've talked recently with the early-days people I keep in touch with, we're all kinda horrified by what the term has come to mean. And it's not like we didn't fight along the way for clarity. But most people just don't care about the precise use of a term when there's a profitable (mis)use of the term.


If you look at actual quotes from people like Sam Altman, Ilya Sutskever, Demis Hassabis, etc. the AI, that is to say really the AGI, that they want to build is a godlike super intelligent being. To many researchers in the field this is their end goal.


And Minsky before them.


And Elon Musk wants to die on Mars. It doesn't matter what their "goal" is.


Please read and study my comments in this thread carefully. Superintelligence is not Hollywood bullshit.

AI doesn't need to be alive or animal-like to be superintelligent. We have many examples getting more and more general purpose. Look at AlphaGo, AlphaStar. Both are superintelligent in a somewhat narrow way (but trending towards less narrow).

GPT-4 is superintelligent in some ways already, such as its breadth of knowledge. And we have to anticipate that it will continue to get smarter, better at reasoning, and much, much faster. Over the last ten years AI performance has increased by a factor of 100,000 to 1,000,000, depending on how you measure it. It will continue to accelerate.

BTW, as far as internships etc., I started programming 38 years ago when I was 7. I've built a ton of software with numerous technologies, including recently some things like: a multilayer perceptron (from scratch). And a data analysis tool that uses GPT to write complex SQL and create charts on the fly to answer user requests. Also software to automatically create custom websites (including imagery) based on short descriptions, or automatically write, test and debug scripts on servers. And all of that outputs at superhuman speed already. And before that I worked on dozens of other complex projects.

Point being, I understand technology and AI, and it is ludicrous to view superintelligence as "stupid bullshit".


Consider this: There are people much much smarter than you RIGHT NOW that you do not listen to or refuse to agree with. There are clear statistics RIGHT NOW about things that you choose to disbelieve. If you don't think this is true, then it is even more the reality. The idea that a very smart "AI" can magically convince anyone of anything is completely unsubstantiated and made entirely up, by people basically inventing a new god and religion. Everything about the human brain is built around rationalizing what you already want into whatever system you pretend to believe in and use, and an AI could tell you something completely true, completely accurate, completely right from a magical objective morality and you STILL would not choose to believe it.


Given that we already have AI convincing people to kill themselves or that it is sentient, I'd say the chance that a more advanced AI can convince humans to take actions whose full consequences they don't understand is pretty high.


And plenty of people are convinced of God by an "image" on toast. The existence of very gullible people does not imply that ANYONE is always convincible by a sufficiently "magical super-intelligent" "AI". People bring up "oh but think of it like your mental superiority to ants" and well, go ahead and convince a colony of ants to nuke Russia, I'll wait.

There are already millions of people that are stupidly easy to convince of anything, so why aren't these magical end of the world scenarios already happening? Where's the evil maniac convincing Kelly the soccer mom who is into MLMs to end the world? Where is the supervillain raising an army of a million gullible idiots to invade Canada and have their own kingdom?

AGI "Superintelligence" even as a concept is just completely unsupported. It's people looking at an extremely cropped graph and saying "Line goes up, must go up forever" and wouldn't you know it that group has massive overlap with those MLM people from before.


And I think it's important to note that hype about "existential risk" is coming from people who stand to profit from "AI": https://www.cnbc.com/2023/05/31/ai-poses-human-extinction-ri...

So it seems like the playbook is something like: 1) play up non-problems that, thanks to 50+ years of sci-fi killer AI, will seem real to the rubes; 2) breeze on past the numerous actual problems; 3) solve as few actual problems as possible as they race to capture billions of dollars.

But happily, the contents of this press release focus mostly on actual problems, so I have some hope that the Skynet gambit won't work.


"the AI is going to kill us all, dear Senators! Nothing to see in this disgustingly immoral little corner of the world where I'm building a measly $100 million personal fortune!"


> people who stand to profit from "AI"

Some of them, sure, but that doesn't seem to be true of plenty of signatories to the open letter.

> the contents of this press release focus mostly on actual problems

Given that this is a set of entirely voluntary commitments made by the companies who you had thought were hyping x-risk in order to distract from actual problems... maybe it's worth updating your assessment of what the x-risk thing is about?


> maybe it's worth updating your assessment

Nope. Because I think the White House's press release is expressing the White House's view of what's important. That the White House didn't get distracted by the chaff says nothing about the chaff or those firing it off.


There's nothing in this release that companies didn't agree to as well. Which doesn't prove that they're fundamentally motivated by what's best for humanity but does maybe indicate that it wouldn't be hard for them to just stick to PR about shorter term safety concerns if no one working there (or externally) was genuinely concerned about longer term "x-risk" etc.


I see no reason to think this isn't them sticking to short-term PR concerns. For a while now, the tech playbook with regulators has been to say vague, positive things, grudgingly agree to some bare minimum, and then mostly ignore or sandbag on their commitments while lobbying aggressively behind the scenes.


Pretending to play along with "requests" to "keep AI safe" while ignoring the harms it will cause in other ways, as the companies that own it try to gobble up literally all the money, is 100% in line with the Sam Altman style of "I deserve to be the richest motherfucker on the planet" thinking.


Do you think AI is capable of gobbling up "literally all the money" for its owners? If so, doesn't that suggest it's kinda dangerous?

FWIW I don't think these voluntary commitments are sufficient or address all of the important harms. But that's different than saying any government action is just "regulatory capture" and an attempt to keep open source models down. I'm only attempting to argue against the latter here, which I'm not sure is where you're coming from.


> Do you think AI is capable of gobbling up "literally all the money" for its owners?

Not particularly without barriers to competition in the space being artificially erected so as to enable that, which is the whole point of the industry-government game of footsie.


You're not the commenter I was addressing, but it sounds like your position is: AI will eat the entire economy but that's going to go fine as long as it's entirely unregulated so that the magic of the free market can fix any problems. Is that right? Or maybe you're not endorsing "literally all" in the literally-literally sense, in which case it would be helpful to spell out what value you think AI will capture (for monopolists or otherwise).


>AI is capable of gobbling up "literally all the money" for its owners?

No, I think wealthy, capital-rich companies can leverage AI to extract value from places that already extract value simply by underpaying people, and can use stupid rhetoric and other methods they are very familiar with to continue buying up basically everything while most normal people struggle to survive.

I explicitly do not think "AI" or even AGI has any inherent danger in itself. It only has danger in the same way any automation does in a capitalist system that also gives richer people more power in the justice system and the political system: the rich will get richer by squeezing the non-rich even harder, while politicians don't care, because most of them genuinely believe in capitalism as an unabashed and incorruptible good, and because the misinformation and discourse control enabled by passable bullshit generated with a single keystroke grants them way more power.


Personally I don't think AI is very likely to destroy humanity in the next couple of decades. I also don't think that the Ukraine war is very likely to end in a global thermonuclear exchange. But I don't think worrying about nuclear war is insanity. What strikes me as insane in the Ukraine case, or at the very least dangerously irrational, is calling for a no-fly zone without thinking about the consequences. And I feel similarly about attitudes towards AI...


At the same time, I think it's pretty clear that a no-fly zone could have been enforced with few American casualties, and Putin would have had no ability to respond, because Russia physically cannot. Putin, and more importantly all the rich assholes under him who like being rich assholes and don't like the idea of being rich assholes in a post-nuclear-exchange Russia, would likely not hit the button. Add to that the likelihood that most of the nukes would not go off, or would even blow up on the launch pad (most of Russia's nukes are liquid-fueled ICBMs, which are terrible in terms of staying reliable cheaply, and Russia spends less on its entire military than the US spends on just its solid-fueled missiles). Russia has had at least one nuclear test fail since the war started.

We have been holding back equipment because "what if we need it" (to fight who?), or "but they'll retaliate!" (with the one tank they could afford to spare for the parade?) or "but it costs so much money" (only when you calculate it with the price we paid in 1980 instead of the negative value these old machines that need to be sold or decommissioned currently have) or "We can't send cluster munitions because the UXO risk!" while every day Russia is still in Ukrainian territory is actual Russians purposely shooting at Ukrainians and RUSSIANS HAVE LITERALLY USED CLUSTER MUNITIONS ON CIVILIAN POPULATIONS!

The actual insanity is sending Ukraine 100 Bradleys instead of 500 and then bitching when their offensive is mediocre.


I don't want to argue about Ukraine in this thread but I guess I do respect the consistency of anyone who says both "eh risking nuclear war is no biggie if the first mechanism for it that occurs to me seems unlikely" and "eh risking losing control of the planet to AI shoggoths is no biggie if..."


AI safety is corporation safety, or government safety, or software safety

AI has the same powers and risks of organizations -- meta/mega human actors.


Corporations are arguably a form of AI, executing on top of human agents and the code of law: https://omniorthogonal.blogspot.com/2013/02/hostile-ai-youre...

The difference is that machine learning technology can potentially operate at orders of magnitude higher speeds, and potentially without humans in the loop. (The threat model is less "flip the on switch and everybody immediately dies" and more "flip the on switch, it works so well it gets woven into our infrastructure", until a critical threshold is met and some seemingly-innocuous Make Number Go Up algo leads to humanity having a Very Very Bad Day.)

So all the same perverse incentives (orthogonality) of human-org AIs, but iterating thousands or millions of times faster.


Strict regulation comes after lots of people die, not before.

Look at the regulations on nuclear bombs.


People don't even seem to understand that AI is inside of a computer. It's not real.

The whole argument with AI safety seems to be about how AI can say some "harmful misinformation". Ok. I can say harmful misinformation too, and I'm a real person with feelings that can be expressed to other people.

"AI safety" takes a view that people are inherently stupid, that people have no ability for complex thought and critical thinking.

To me this seems like AI companies such as OpenAI want to stop open-source and monopolize AI where you have to have a government certification to create AI models. Plus a bit of paranoia from the totalitarian liberals who only want consensus rather than debate.

At least there are valid concerns for training models on copyrighted works. Maybe we should focus much more on that rather than on "the AI said something that I don't like".


People always say "AGI" and they mean different things but I am guessing most now mean something like a digital living human/simulation of human when they say that. Mixed in with some blurry connotation of it automatically becoming godlike in power.

That stuff is very speculative. But we don't need AI to get to that level for it to be very dangerous. Just imagine that we have an open source GPT that is something like 33% smarter than GPT-4 and less brittle. Make the output 50 times faster than human thought and suppose that we can run it very inexpensively, in a "swarm" of agents cooperating.

Then you have a type of superintelligence that does not require any really speculative AI advance -- hyperspeed reasoning -- based on the current technology.

If that is widely deployed for military and industrial decision-making then something like a computer virus could create existential risk.

Also, realize that all these systems really need in order to emulate some of the core functional aspects of animals, like a self-preservation instinct or a desire to control resources, is the right instruction and a relaxation of guardrails. So when they can reason a bit better, at hyperspeed and with collaboration between them, they don't need to be alive or anything to be dangerous.

Look at the history of increases in computing efficiency. It is easy to imagine 50 or whatever times output speed increase in less than 5 years. And even though they are not human level now, GPT-4 has proven that these types of systems can have strong problem solving ability.

One more thing to add: as AI performance increases and surpasses human decision-making ability, the hyperspeed will push competing companies and countries to give these systems broader and broader goals and more autonomy. Because waiting overnight for human input means your competitor's AIs race ahead the equivalent of weeks.

I think we need to set a limit for the performance of new hardware at some point. Also we need people to understand the dangers of full autonomy and imitating animals in the context of hyperspeed or superintelligent AI. We will need criminal penalties and maybe some type of cooperative digital immune system.


I think you also need to draw a line between different types of danger:

An actual AGI, one that can think and has its own motives and initiative, is an entirely different danger to GPT-4, -5, etc., which cannot. Those might pose a danger in terms of fake news or worker displacement, but those are just social issues. No rate improvement in ChatGPT will make it "decide to kill all the humans" etc.


I mean if you wire the output of GPT-x to a nuclear reactor it can cause all kinds of danger.

The thing that keeps us safe from computers is that generally we don't let them have access to anything dangerous, and when we do there's typically an evaluation of what can go wrong and how we'll prevent the computer from doing it (often in the other direction: we only program the computer to do X in Y situations). But sometimes that fails and you get the unexpected rapid acceleration of cars.
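
As a rough sketch of the "only do X in Y situations" idea, something like this sits between the model and the actuator. The action names, state fields and actuator interface are made up for illustration, not any real control system:

  ALLOWED_ACTIONS = {
      # action name -> precondition that must hold before it may run
      "log_status":        lambda state: True,
      "reduce_power_5pct": lambda state: state["core_temp_c"] > 340,
  }

  def execute(suggestion, state, actuator):
      # Run a model-suggested action only if it is allowlisted and its
      # precondition holds; everything else never reaches the hardware.
      action = suggestion.get("action")
      precondition = ALLOWED_ACTIONS.get(action)
      if precondition is None:
          return f"rejected: {action!r} is not an allowed action"
      if not precondition(state):
          return f"rejected: precondition for {action!r} not met"
      actuator.perform(action)   # hypothetical actuator interface
      return f"executed: {action!r}"

The whole point is that the model's output is treated as an untrusted suggestion, never as a command.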


If (and it's a huge if that we are nowhere near) we created a superintelligent, general AI, how would you keep it from accessing things it shouldn't? It would be smarter than the world's best hacking team and work 24/7. That's sort of the deus ex machina of AI...


With a Hollywood style big red emergency stop button protected by plastic shield to prevent accidental pressing that is also protected by a lock where the person in charge has the key around their neck.

Or a giant knife switch on the wall that makes that huge clunk noise when turned to the off position, and everyone hears the sound effect of the power shutting down.

We've had computers in charge of so much for so long, but there's always a manual override to allow the operation without the computer. Manual valves to be turned (hopefully they haven't been rusted stuck so the timer on the countdown gets dramatically close to 0).


You can't fit a manual override to these systems any more than you can a person.


What? If computer control is installed on any physical thing, you can always add cutoffs at the physical point, so that even if the computer is telling it to do something, the physical thing no longer receives those controls/signals. Designing it without those safety precautions would be 100% asinine. It'd be like allowing OpenAI to design everything with no controls. Even Musk lets drivers take back control when FSD is engaged.

you're bonkers if you think we can't make manual overrides like this.


Modern planes are purely computer controlled. I imagine the same is true of Teslas. And nuclear power plants and missile systems etc.


Huh? If this were true, the plane would never have been safely landed in the Hudson by a human pilot. Yes, the majority of flights are flown by the computer autopilot because of the fuel efficiency, but the pilot can take control at any point.


The Hudson was a bird in an engine. The computer was working fine. Now imagine if you tell the computer to do X and it just ignores you. You have no actual control over the engines, flaps, etc. The computer does. And it ignores you...


Of course, if you create some superintelligent, general AI and put it into a robotic humanoid, it's going to have a lot of agency.

However, ChatGPT is only able to interact with people because it was hooked up to the internet, not because it figured out how to access the internet. It also didn't create a worm and infect the entire world; the second you leave the webpage you no longer have ChatGPT.

The way to keep the AI from accessing things it shouldn't is literally just don't hook it up to those things in the first place.


Yeah, ChatGPT is in no way intelligent. People are using it (incorrectly) as a substitute for an actual general AI...


People have literally tried to hack together an agent on top of GPT-4 and given it an explicit goal of "kill all humans" for the lulz (ChaosGPT). I don't think that particular case is going anywhere, but assuming that nothing roughly GPT-shaped can acquire harmful goals seems naive.


Respectfully, you're mixing things again: you have described 2 separate unrelated things:

* A system like ChatGPT that can do things a human cannot (e.g., find a way to kill all humans that's achievable by a small group).

* A system different to ChatGPT that has its own motives and initiative.

Both need to be achieved.


I was responding to:

> No rate improvement in chat GPT will make it "decide to kill all the humans" etc.

All I'm trying to say is that GPT-style architectures not having "agency" shouldn't be very reassuring. How intelligent they can get is a separate question and one on which I am agnostic.


Every time I hear how artificial intelligence with its "own motives" etc. are dangerous, I wonder how it's different from just dealing with people and animals.

Or to put it another way, I can't see how artificial intelligence could be any more dangerous than intelligence as we already know and have it.


I think that’s the problem isn’t it? Intelligence has proven to be exceptionally dangerous!


I don't want to sound dismissive but... Well yeah.

We already know humans are dangerous. And that is humans who are conditioned to be social, and who have empathy and who rely on other humans for all sorts of things. Plus with fear of a legal system punishing them.

Now imagine a human like intelligence without empathy, that is not conditioned from childhood to be social, and that is inherently threatened by us and not reliant on us. Would you want that guy living next door?

Oh and it's 100x smarter than you.

I think all this is 100 years away, but that's the theory. Skynet doesn't have a subconscious telling it not to murder its parents...


How dangerous can a dictator with complete knowledge and super human speed actually be?


>That stuff is very speculative. But we don't need AI to get to that level for it to be very dangerous. Just imagine that we have an open source GPT that is something like 33% smarter than GPT-4 and less brittle. Make the output 50 times faster than human thought and suppose that we can run it very inexpensively, in a "swarm" of agents cooperating. Then you have a type of superintelligence

No. Then you have something that can spout plausible sounding nonsense mixed in with facts gleaned from crawling the internet.

There are some dangers to this - don't get me wrong. It could be used to create bots which engage in much more sophisticated social media manipulation on a large scale, for instance.


Kind of like how the White House secured commitments from banks to manage systemic financial risks?


Another fine case of regulatory capture!

https://en.wikipedia.org/wiki/Regulatory_capture


The big banks were pretty stable in this last go around, and systemically it never really felt like there was much risk? So maybe some things are working?


>The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system.

I've seen some papers on watermarking, but none of them are what I'd call "robust" - they're easily defeated by making small changes to the data. Are the companies "committing" to overcome an unsolved (and possibly unsolvable) technical challenge?
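
For a sense of why "robust" is doing so much work there, here's a toy version of the hash-based "green list" statistical watermarks some of those papers describe (not any company's actual scheme). A light paraphrase is enough to wash the signal out:

  import hashlib

  def is_green(prev_token, token):
      # Toy rule: the previous token (via a hash) decides which half of the
      # vocabulary counts as "green" at the next position.
      digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
      return digest[0] % 2 == 0

  def green_fraction(tokens):
      # Watermarked generation is biased toward green tokens, so a fraction
      # well above ~0.5 suggests machine-generated text; ~0.5 looks human.
      if len(tokens) < 2:
          return 0.0
      hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
      return hits / (len(tokens) - 1)

  # Swapping a few synonyms or reordering clauses changes the (prev, next)
  # pairs, which is exactly why small edits push the score back toward 0.5.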


This strikes me as the best outcome as long as it stops here. This lets the White House move on from the nonsense AI x-risk discussion without being attacked for not doing anything about it, and also doesn't let the companies use fake x-risk to kill off open source models.


Yes, let's hope it stops here. I have full faith in billionaires looking out for our best interests as a society


I'm a cynic but even so I've been surprised by how the tone of confident dismissal of x-risk as "nonsense" has been completely unaffected by the change in messenger from LW weirdos to Turing Award winners. Like people don't even feel the need to gesture at an argument, it's just supposed to be obvious. What's really going on is good open source vs evil corporate control, a lot of commenters are sure, and the government is just out-of-touch geriatric puppets - even as the "open source vs Micro$oft" narrative has been ported straight from the 90s to a world where the concrete policy question is "should Meta bear any responsibility for the consequences of their software releases."


Why should the tone have changed? The billionaires and AI godfathers talking about it now were all inspired by LW and sci-fi, and it's the same narrative: no need to care about people living today or real problems in the world when there's a sci-fi story about trillions of people in the future whom we must sacrifice anything in the present (except the billionaires' and AI godfathers' comfort, status, or money) to protect.


> "all were inspired by LW"

Evidence?

> "no need to care about people living today or real problems in the world"

Nobody says this.

> "trillions of people in the future"

Weird longtermists who talk about this are a small minority even among AI doomers.

> "AI godfathers comfort, status, or money"

I'm not suggesting anyone feel sorry for Geoff Hinton but quitting your Google job does seem like a pretty obvious sacrifice of money.

But I dunno, you packed so much confusion into two sentences that I doubt you're going to let any messy details get in the way of picking a side based on an attempt at populism (including the fact that you're aligning yourself with Marc Andreessen and Zuckerberg).


Eh, one can dislike the longtermists and the non-longtermist plutocrats. I'd say there's good reason to dislike both, but for different reasons.


But it doesn't address the actual, real risks. It looks to me like they're just talking about the fake ones.


I agree with you, but it isn't going to stop here:

> Biden-Harris Administration will continue to take decisive action by developing an Executive Order and pursuing bipartisan legislation to keep Americans safe


Ah the lengths they will go for our safety.


I feel like this is one of those things that belongs in those news clips you see at the beginning of a dystopian or post-apocalyptic sci-fi film. It's just vague PR that won't actually do anything of substance.


Peace in our time is guaranteed. Neville Chamberlain would be proud


Companies commit to testing systems - well, what about a commitment to actually fix problems?

Would have been nice to see that. Even right now there are known security vulnerabilities that were found through testing but haven’t been fixed.


This reeks of ass-covering, so everyone can later say "We are sorry. We tried.", followed by some more innocuous PR lingo.

Great for voters to sleep better at night. Useless for actually "managing AI risks"

Not that those risks can seriously be managed long-term, IMHO. The prisoner's dilemma between nation states and corporations ensures someone will defect.


Right. Check out the new Netflix documentary "Unknown: Killer Drones". Basically, former soldiers running military tech companies, with the experience of losing their friends in battle, are looking at how DeepMind's AIs are unbeatable at StarCraft II, Go, and pretty much every game they are trained for. And they are already testing plugging that type of thing into drones and jets.

And now DeepMind wants to combine the language-based fairly general purpose reasoning ability of something like GPT with the superhuman strategic prowess of AlphaStar/AlphaZero/etc.

At the latest big keynote Nvidia touted the fact that they have accelerated AI by a factor of one million over the last decade, and project that they will do so again in the next decade.

Humans will not be able to compete at all, and even putting them in the decision loop will mean immediate failure. Human thought and action will "appear" to be so slow that it is essentially frozen compared to the operating speed of these systems.


Call me cynical, but it's hard to see this as anything but a way to create a moat for the incumbent leaders in the field.


We need libre and open source to move faster.


How? Where can we donate? What can we do?


Making the wolf the shepherd. What could possibly go wrong.


This actually seems semi decent for a first go?

$140 million in funding for seven more research labs (the number might be off by an order of magnitude, but still neat).

Commitments for cybersecurity funding (probably already happening anyways), independent auditing, and some testing.

The most interesting part was the watermarking. I'm interested to see how that works out, but it's a neat concept, something I hadn't thought of.

I like the focus on reducing bias, that seems like a good effort, and of course the focus on helping to cure cancer, which I'm always a fan of Biden championing.

Also reaching out to key allies to work together seems positive.

I dunno, all in all, it seems kind of neat. Call me a wild eyed optimist or something.


Non-binding, undefined, unmeasurable "commitments"...


[flagged]


It makes more sense when you realize AI Safety is about the safety of the companies making AI from competitors.


Do you have a better physical world analogy than a "series of tubes" to describe bandwidth and latency?

https://www.pcmag.com/news/a-remembrance-and-defense-of-ted-...

Comment in context:

"They want to deliver vast amounts of information over the Internet. And again, the Internet is not something that you just dump something on. It's not a big truck. It's a series of tubes. And if you don't understand, those tubes can be filled and if they are filled, when you put your message in, it gets in line and it's going to be delayed by anyone that puts into that tube enormous amounts of material, enormous amounts of material."

https://en.wikipedia.org/wiki/Series_of_tubes


Harris is our AI “czar”. She is known to be generally “not the brightest person”. What experience/competence makes her AI czar? Imagine someone like that having a say in how the country develops nuclear weapons at the end of WW2.



