Regarding 3), consider the fact that OpenAI, DeepMind, and other top labs have AI safety programs and people working on AI Alignment. Interviews with Sam Altman, Ilya Sutskever, and others confirm their concerns.
Regarding 1) and 2), we may well not succeed. But would you propose that we sit still and do nothing if many experts say that there is even a 20% chance that a superhuman alien species will arrive on Earth in 5-25 years, with intentions we know nothing about?
A survey of AI experts conducted well before GPT-4 showed that nearly half of them share such concerns (with varying timelines and probabilities).
By the way, calling a proposal by Prof Stuart Russell and several other top AI experts “dumb” should require a much stronger argument and level of evidence than you have shown.
Oppenheimer at one point believed that there was some possibility the atomic bomb would set the atmosphere on fire and kill all humans. However, at least that particular fear was falsifiable. Other physicists ran calculations and concluded it was impossible.
Do these beliefs about the dangerousness of AI possess even that quality? Are they falsifiable? No.
These arguments are begging the question. They assume as a given something which cannot be disproven and thus are pure statements of belief.
Problem is, AFAIK the math tells us rather unambiguously that AI alignment is a real problem, and that safe AI is a very, very tiny point in the space of possible AIs. So it's the other way around: it's as if scientists had calculated six ways to Sunday that the hydrogen bomb test would ignite the atmosphere, and Oppenheimer had called it sci-fi nonsense and proceeded anyway.
Current AI is already smarter than some people. Many experts believe it will become smarter than nearly all or all humans. AI can inherently spread and communicate much faster than we can. Without AI Alignment, we could end up like the Neanderthals.
Bullshit. Current AI can score higher than some dumber humans on a limited set of arbitrary tests. So what?
There are no actual "experts" in this field, because no one actually knows how to build a human-equivalent artificial general intelligence. It's just a bunch of attention-seeking grifters making wild claims with no real scientific basis.
Try using GPT-4 to code something on a regular basis.
Try teaching an average human to code better than it does.
Or perhaps check out and follow Ethan Mollick’s twitter: https://mobile.twitter.com/emollick. He’s a Wharton professor who has been using GPT-4 to do many kinds of challenging tasks.
There is likely no fundamental difference between below-average humans and smarter ones. The differences are mostly just the result of differing thought patterns at different layers of abstraction, habits of thought, and working-memory capacity.
There are good reasons to believe AGI is only a couple key ideas away from current AI, so current expertise is relevant.
I won’t discuss this further, since it won’t matter until you have tried the above for some time.
Yes, I've used GPT-4 as you described. None of that supports your point. There is no reason to think AGI is near. You're just making things up and clearly don't understand the basics of how this stuff works.
I know about the architecture and the inner workings of GPT-2 and GPT-3 models, as well as the math of transformers. No one outside of OpenAI knows exactly how GPT-4 works.
And I have not been talking about the risk from GPT-4 but from later models, which could use a different architecture.
I have also taught people to code and to solve challenging math problems.
(It seems you are so confident you know more about AI and human cognition than pretty much anyone among the 1000+ people who signed the petition, including 2 Turing Award winners, >10 AAAI Fellows, as well as many engineers and avid practitioners.)
I hope you’ll notice how similar the behaviors of some cognitive mechanisms of these models are to human cognition.
An airplane can fly, just in a different manner and using different mechanisms than a bird.
Are you 100% confident that we will not have AGI in the next 5-10 years? What would you bet on that?
An important fact about Sam Altman's personality is that he owns a New Zealand apocalypse bunker, and has owned one since long before OpenAI, so he's just an unusually paranoid person.
Here’s an article by Prof Russell, AAAI Fellow and a co-author of the standard AI text: https://www.technologyreview.com/2016/11/02/156285/yes-we-ar...