
Regarding 3), note that OpenAI, DeepMind, and other top labs have AI safety programs and people working on AI Alignment. Interviews with Sam Altman, Ilya Sutskever, and others confirm their concerns.

Here’s an article by Prof Russell, AAAI Fellow and a co-author of the standard AI text: https://www.technologyreview.com/2016/11/02/156285/yes-we-ar...

Regarding 1) and 2), we may well not succeed. But would you propose that we sit still and do nothing if many experts said there was even a 20% chance that a superhuman alien species would arrive on Earth in 5-25 years, and we did not know their intentions?

A survey of AI experts conducted well before GPT-4 showed that nearly half of them share such concerns (with varying timelines and probabilities).

By the way, calling a proposal by Prof Stuart Russell and several other top AI experts “dumb” should require a much stronger argument and level of evidence than you have shown.



An idea may be dumb regardless of who believes it. You will find history littered with such ideas.


I re-read your comment and it was clearer than I first thought, so I edited my response accordingly.

Please also respond to the main arguments I gave and linked to if you can.


Oppenheimer at one point believed that there was some possibility the atomic bomb would set the atmosphere on fire and kill all humans. However, at least that particular fear was falsifiable. Other physicists ran calculations and concluded it was impossible.

Do these beliefs about the dangerousness of AI possess even that quality? Are they falsifiable? No.

These arguments are begging the question. They assume as a given something which cannot be disproven and thus are pure statements of belief.


Lack of falsifiability (even if it’s true in this case, which is not a given) is not a license for inaction.

The world is not a science experiment.

And we know it is plausible that the emergence of Homo sapiens helped cause the extinction of the Neanderthals.


Problem is, AFAIK the math tells us rather unambiguously that AI alignment is a real problem, and safe AI is a very, very tiny point in the space of possible AIs. So it's the other way around: it's as if scientists had calculated six ways to Sunday that the hydrogen bomb test would ignite the atmosphere, and Oppenheimer called it sci-fi nonsense and proceeded anyway.


Prof. Russell hasn't provided any actual evidence to support his dumb proposal. So it can be dismissed out of hand.


We have significant evidence suggesting it's quite plausible that the emergence of Homo sapiens helped cause the extinction of the Neanderthals.


And?


Current AI is already smarter than some people. Many experts believe it will become smarter than nearly all, or all, humans. AI can inherently spread and communicate much faster than we can. Without AI Alignment, we could end up like the Neanderthals.

https://news.ycombinator.com/item?id=35370033


Bullshit. Current AI can score higher than some dumber humans on a limited set of arbitrary tests. So what.

There are no actual "experts" in this field because no one actually knows how to build a human-equivalent artificial general intelligence. It's just a bunch of attention seeking grifters making wild claims with no real scientific basis.


Try using GPT-4 to code something on a regular basis. Try teaching an average human to code better than it does.

Or perhaps check out and follow Ethan Mollick’s twitter: https://mobile.twitter.com/emollick. He’s a Wharton professor who has been using GPT-4 to do many kinds of challenging tasks.

There is likely no fundamental difference between below-average humans and smarter ones. The differences are mostly the result of differing thought patterns at different layers of abstraction, habits of thought, and size of working memory.

There are good reasons to believe AGI is only a couple of key ideas away from current AI, so current expertise is relevant.

I won’t discuss further since it won’t matter until you try the above for some time.


Yes, I've used GPT-4 as you described. None of that supports your point. There is no reason to think AGI is near. You're just making things up and clearly don't understand the basics of how this stuff works.


I know about the architecture and the inner workings of GPT-2 and GPT-3 models, as well as the math of transformers. No one outside of OpenAI knows exactly how GPT-4 works.
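
(For anyone unfamiliar: the core of "the math of transformers" is scaled dot-product attention. Below is a minimal NumPy sketch of just that one operation, purely illustrative and obviously not OpenAI's code; real models add learned projections, multiple heads, and many stacked layers.)

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Q, K, V: (seq_len, d) arrays of query, key, and value vectors.
        d = Q.shape[-1]
        # Compare every query with every key; scale by sqrt(d) to keep scores moderate.
        scores = Q @ K.T / np.sqrt(d)
        # Softmax over keys turns each row of scores into attention weights.
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)
        # Each output row is a weighted average of the value vectors.
        return weights @ V

    # Toy self-attention over 4 tokens with 8-dimensional embeddings.
    x = np.random.randn(4, 8)
    print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)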

And I have not been talking about the risk of GPT-4, but of later models, which could use a different architecture.

I have also taught people to code and to solve challenging math problems.

(It seems you are confident that you know more about AI and human cognition than pretty much anyone among the 1000+ people who signed the petition, including 2 Turing Award winners, more than 10 AAAI Fellows, and many engineers and avid practitioners.)

I hope you’ll notice how similar some of the cognitive behaviors of these models are to human cognition.

An airplane can fly, just in a different manner and using different mechanisms than a bird does.

Are you 100% confident we will not have AGI in the next 5-10 years? What would you bet on that?

I’ll stop here.


An important fact about Sam Altman is that he has owned a New Zealand apocalypse bunker since long before OpenAI, so he's just an unusually paranoid person.

(And of course owns two McLarens.)


Are the McLarens in the bunker?



