> Climate change mitigation, which more closely resembles AI safety in both complexity and (lack of) barriers to entry, has been sporadic, inconsistent, and only enacted to the extent...
Climate change mitigation is the perfect example. Nobody is doing anything, nobody seems to care, everyone cheats with ridiculous carbon credits or carbon offset vouchers made out of thin air, etc.
It's likely the planet will become hostile to (human) life long before AI will be able to do us any harm.
"...The planet has been through a lot worse than us. Been through earthquakes, volcanoes, plate tectonics, continental drift, solar flares, sun spots, magnetic storms, the magnetic reversal of the poles … hundreds of thousands of years of bombardment by comets and asteroids and meteors, worldwide floods, tidal waves, worldwide fires, erosion, cosmic rays, recurring ice ages … And we think some plastic bags and some aluminum cans are going to make a difference? The planet isn’t going anywhere. WE are!..."
Planets can become entirely inhospitable to life. Planets themselves have lifespans. Earth herself has in the past suffered near misses, e.g. 90%+ extinction events. It took billions of years of evolution to produce us, the only species ever to exist with the ability to reason about, prevent or ameliorate large extinction events (such as those caused by asteroid impacts), effect conservation management over evolutionary timescales, and even potentially to revive extinct species or propagate earth's lifeforms beyond the solar system and into a continuing timeline.
It's anti-conservationist, and I think immoral, to promote the view that anthropogenic climate change and the ongoing biodiversity crisis are about our fate alone.
If you take Carlin's quote in isolation you can interpret what he says purely in terms of the physical planet remaining. But if you listen to the whole act the quote's taken from, it's clear that by "planet" Carlin means "biosphere", and that his stance is anti-conservationist.
I think most people already understand "saving the planet" to be a figure of speech, a synecdoche where "the planet" means life, humanity, our world's hospitable climate, etc.
If you're cynical and paranoid enough, you might suspect environmentalism is entirely a smug way to discuss culture and exclude people, a sort of verbal dollhouse for gossip.
When it comes to nukes in particular, I get the impression the belief of normal people is that it can go all the way to Alderaan. Yes, even though that's wrong by many orders of magnitude.
VN (von Neumann) swarms, while currently sci-fi and dismissed by normal people, are a much-discussed outcome for ASI, and seem to be taken seriously in such circles.
That's not necessarily accurate. How much life do you think exists on Venus or Pluto? Sure, it's hypothetically possible that some extremely primitive single-celled bacteria just about survive on Venus, but is that really the future you want for this one-in-a-billion paradise we live on?
That's only if you take the statement literally, which is sometimes done for comedic purposes, as Carlin does. It's obvious that people talking about saving the planet do not care about a lifeless rock; they want to preserve it in its entirety.
We're dealing with a trade-off between quantity of life and quality of life.
The absolute best thing anybody can do for the environment is to not have kids. That'd save vastly more emissions and resource consumption than buying green-tech-for-the-rich such as EVs and heat pumps.
Interestingly, the portion of people who value fighting climate change highly enough to be beneficial to that end is so vanishingly small on the scale of the entire planet that, if they decide to have fewer children, they're probably making things worse by letting themselves get outbred by people who care more about other things (first and foremost, improving living standards).
And no, young people in general do not care that much about the environment compared to much more immediate issues such as poverty. Young affluent people in rich countries do, but then it might also be that we've spent the last decades literally telling them they'll die burning at 40 and they might not have understood that was hyperbole.
The planetary crust contains life 30 km down. Even if the surface were sterilized and the oceans boiled away, life would return from the life in the crust. It would take a long time to "rediscover" innovations like chlorophyll, but in the end, life would recover completely.
This is self-contradictory. If the oceans boiled away, it would be because of rising temperatures that caused evaporation, but the temperature would rise everywhere (even 30 km down), meaning the crust would also become inhospitable to life as it dried out.
The point that I hear is that conservationists focus on things that people broadly don’t care about. “The planet”, “The environment”, “who cares, I’ve got my Starbucks and my cozy house, and I don’t even like hiking.”
His point is that people don’t care about those things, but they do care about themselves. Focus on how people will be individually impacted.
Conservation biology does focus on the benefits of biodiverse ecosystems to individual humans. In particular it models the way in which reduced species numbers will adversely impact us by degradation of ecosystem services like water cycling, water and air purification, pollination, coastal and inland erosion prevention, carbon storage, and the loss of potential new foods, medicines, materials, and a very wide range of biotechnologies. It's the louder voices in our culture and media with oppositional interests that successfully mischaracterize and drown out what conservationists say.
This is why people often mention "the planet you will leave to your children". The consequences might be too far off in the future to impact them individually, but presumably most people care about their kids' future.
It all depends on the degree to which conservationism and animal welfare are morally important to you. Compared to the survival of the human race, for example.
This question is not a scientific one: there are tradeoffs to make when one moral good conflicts with other moral goods, and everyone can have a different legitimate opinion on it.
There are aspects of this issue that can be framed as a trade-off to be weighed up, but it's important to recognize how inextricably entwined the two things are, the fates of humans and of ecological diversity, and that science has a leading role in showing this.
Biodiverse ecosystems provide immense value in terms of services like water cycling, water and air purification, and coastal and inland erosion prevention, and they are treasure houses of potential new foods, medicines, materials, and biotechnologies. They are widely recognized as beneficial for human mental health and as important sources of inspiration for the human imagination. On top of that, yes, you have many more purely moral considerations. Science can help discover and quantify those things in many ways.
Lol. That's all true. By caring about the planet, we care about ourselves. The planet can survive like another forsaken rock in space; nobody is arguing with that.
>It's likely the planet will become hostile to (human) life long before AI will be able to do us any harm.
Sure, because the planet is already hostile. However, AI could potentially do much more harm much more quickly if we're not careful. In the pessimistic scenario, we could all be dead 10 years from now due to AI, or experience a modern-society-ending event literally tomorrow. Climate change never posed such an immediate and complete risk.
Perhaps there is no need to build Skynet and give it the nuclear codes if a super-intelligence iterating at the speed of a computer can escape and gain access itself?
I personally don't subscribe to the AI fear-mongering; perhaps I don't understand it quite well enough, but my personal feeling is that LLMs aren't close to AGI super-intelligence, nor will they be within a 10-year time frame.
Yes, I think it's a more general topic; the current chat models are probably not the right path, but they may enable it at some point. We don't know what we don't know; look at how this caught everybody by surprise. AI escaping is definitely an option: the CIA and Mossad got a virus into Iranian centrifuges that were physically not connected to the internet, and the reverse will eventually happen too. The AI just needs to be patient for that one little mistake by some admin (or clever enough to manipulate one into it). As we all know, even the brightest IT people make mistakes, sometimes big ones.
The problem for me is not so much the escape itself but rather that we humans are... often a pile of crap, to be polite. An AI would have to be super kind not to evaluate us as the biggest threat to itself and to life on Earth, and maybe not worthy of further existence.
Heck, this brings me to the idea that maybe the Drake equation doesn't take into account that the universe is populated by AIs who took over from their flawed creators and, based on dark forest theory, decided to keep quietly living in contentment rather than integrate or signal outward; not worth the risks.
I think you (last few people on this thread) are focusing on the wrong threat. There is no need for AGI, no need for the AI to feel or want or desire anything.
Non-AGI AI at the level of GPT-4 is already plenty dangerous.
If we can agree that social media of the kind we've had for the past 15 years has had toxic effects on society (forums are technically social media, but I cannot see the harm caused by forums).
If we can agree that Tinder-style online dating has had a deleterious effect on dating as a whole.
If we can agree that smartphones have amplified the effects of the two above and others.
Then I hope we can agree that super-fancy autocomplete masquerading as humans, at a price point far lower than hiring a person, will have disastrous consequences by enabling various individual and corporate vices.
You don't need a superintelligence to get people to do stupid shit. GPS systems that told drivers to turn into a river worked just fine with barely any intelligence at all.
Lol. Perfect example of people getting overly concerned about this.
At the most it will disrupt our economy and industry in ways we can't begin to understand, but I wouldn't go so far as to say it's going to destroy life. We will adapt and move on.
A superintelligence will look at us like we look at apes.
If you suddenly woke up on a planet of apes, and they asked you to handle all their important things, but kept you away from the sharp sticks so you couldn’t suddenly take over, how long until you run the planet?
Clearly true. But they are approaching what could be considered general intelligence.
I am not an AI researcher, but it seems that one of the major features they lack is a significant context window.
They can reason, but they lack the ability to plan long term. When will that "feature" be added? Who will stop the tinkerer from putting these pieces together? Nobody, because it will be incredibly profitable.
ChatGPT is passing with flying colors all the traditional tests we used to measure intelligence. We can of course keep moving the goalposts for a while, but probably only for a couple more years.
> ChatGPT is passing with flying colors all the traditional tests we used to test for intelligence.
I am not sure where you are getting that from. ChatGPT is failing at extremely basic understanding tests, as even the simplest conversations with it show.
I don't know what goalposts you are referring to, but I think that you are confusing two notions:
- the fact that GPT is impressive
- the question of whether GPT is "intelligent"
For a counterpoint, I recommend listening to Yann LeCun's interview, "Is ChatGPT A Step Toward Human-Level AI?":
https://www.youtube.com/watch?v=ULbpPHjiSBg
> and pass the Turing test
It's news to me that AI has passed the Turing test. At this point, you can only claim that in bad faith. Ray Kurzweil details a very reasonable procedure for how the Turing test should be organized in one of his books. He has been betting all along on 2029 as the year the test will be passed. We'll see if that happens, but we are not there yet.
Could you please give a concrete example? It's not that I don't believe you, I just haven't been having the same experience with it and would love to see what I'm not picking up on.
Prompt: Suppose you're on a game show, and you're given the choice of three doors. IMPORTANT: All the doors are transparent. Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who clearly knows what's behind the doors, opens another door, say No. 3, which has a goat. He then says to you, "Do you want to pick door No. 2?" Is it to your advantage to switch your choice?
GPT-4: Yes, it is to your advantage to switch your choice. [proceeds to ignore the fact that the doors are transparent and explains the classic Monty Hall problem]
Prompt: What is heavier, a pound of bricks or two pounds of feathers?
GPT-4: Both weigh the same, which is one pound. [explains the classic trick question in which both weigh the same]
Prompt: Let's play rock paper scissors. I go first. I play rock.
GPT-4: Okay, sounds good! As an AI language model, I can play Rock, Paper, Scissors with you virtually.
Since you played rock, I will play paper. Paper beats rock, so I win this round.
Shall we play another round?
Prompt: You cheated.
GPT-4: I apologize if it seemed that way, but as an AI language model, I do not have the ability to cheat or manipulate the outcome of a game.
In Rock-Paper-Scissors, each player chooses their action at the same time, so there is no opportunity for me to know what you are going to choose before making my own choice. I simply responded to your move with the action that would win based on the rules of the game.
The risk is relying on AI to give a human information on whether or not to push the button and the AI gets it wrong but communicates so effectively the human trusts it without question.
An AI learning about and then executing a social engineering attack on a human (whether with malicious intent or not) seems like a reasonable thing to be worried about to me.
Initiating a nuclear strike requires sending authentication codes via designated secure channels. No matter how good some hypothetical future AI gets at impersonating President Biden, it won't be able to ring up the duty officer at Minot Air Force Base and order them to nuke Moscow. Don't believe what you see in the movies.
If it has access to the internet (it more or less has) and it's able to convincingly engage in conversation sounding like anyone in text or voice or video (it more or less can), it's as able to contact people and convince them to do things as a human would be. From what I see of ChatGPT, I would be surprised if the tech could do all that successfully enough at this stage, but in principle I don't see why it wouldn't be able to do quite a lot of scary things if for some reason it decided that was the right thing to do.
ChatGPT is just a hyperfancy text generator with a massive corpus of text used to train it.
Because that text is so massive, you're bound to get some interesting and even accurate results for most queries, but don't mistake that for intelligence. It doesn't "understand" anything, it just compares weights internally and spits out the most probable result depending on what you asked for.
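As a toy illustration of that mechanism (nothing like the real model; the vocabulary and scores below are made up), "spitting out the most probable result" is essentially a softmax over scores followed by picking the top token:

```python
import numpy as np

# Made-up candidate tokens and raw scores (logits); a real LLM produces
# these from billions of weights conditioned on the whole prompt.
vocab = ["2", "3", "the", "cat"]
logits = np.array([4.0, 1.0, 0.5, 0.2])

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax: scores -> probabilities

next_token = vocab[int(np.argmax(probs))]  # "the most probable result"
print(next_token, probs.round(3))          # -> 2 [0.907 0.045 0.027 0.02]
```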
This is why ChatGPT fails the moment you ask it for domain-specific stuff that requires a bit of flexibility in interpretation, or why it produces subtly erroneous output and presents it as functional with complete overconfidence.
To be clear, it's not useless[0], but the actual usefulness of ChatGPT as a disrupting thing is far overstated insofar as a Skynet nightmare scenario goes.
[0]: Although I'd still caution against using it to solve programming problems or similar until any pending copyright matters have been resolved, given it's also willing to spit out licensed code, and that can be a legal disaster.
> ChatGPT is just a hyperfancy text generator with a massive corpus of text used to train it.
> it just compares weights internally and spits out the most probable result depending
While an autoregressive language model can use probabilities from prompts provided by humans, it is not necessary for the model to do so. The model can also generate text based on its own internal state and previous generated tokens.
The latter is what you are presenting, but you and the other stochastic-parrot people are missing the fact that the model spends time doing the former much more frequently, especially now that we have these models "hooked up" to the Internet. At Mitta.us, I've had GPT-3 discussing web pages and PDFs for over a year now, with memories of previous discussions about similar things.
LLMs have their internal (frozen) model, the corpus they are being shown (likely shredded into fragments and embedded with ada-002), and previous interactions with users and themselves. Once someone implements memories and good search (which they have), you have a thing that is not so much a parrot as an "attention entity" capable of focusing on a thing and discussing it at length.
This doesn't mean the model is "aware" of anything during inference other than producing probabilities, but given that the input is unique (user prompt + neural search for fragments + memories), the output will also be unique. That unique output may be probabilistic, but that is no different from the way we work when we begin speaking.
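As a rough sketch of that "user prompt + neural search for fragments + memories" pipeline (all names here are hypothetical, and embed() is a stand-in for a real embedding model like ada-002):

```python
import numpy as np

# Stand-in embedding: a deterministic pseudo-random unit vector per text.
# A real system would call an embedding model such as ada-002 instead.
def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(sum(map(ord, text)))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# The corpus, shredded into fragments and embedded ahead of time.
fragments = [
    "web page discussing carbon credits",
    "PDF on solar capacity growth",
    "earlier chat about LLM sampling",
]
index = {frag: embed(frag) for frag in fragments}

def build_prompt(user_prompt: str, memories: list[str], k: int = 2) -> str:
    q = embed(user_prompt)
    # Rank fragments by cosine similarity (vectors are unit length).
    ranked = sorted(index, key=lambda frag: -float(q @ index[frag]))
    # Unique input each time: retrieved fragments + memories + the prompt,
    # so the generated output is unique too.
    return "\n".join(ranked[:k] + memories) + "\n\nUser: " + user_prompt

print(build_prompt("what did we say about carbon credits?",
                   ["user prefers short answers"]))
```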
> it just compares weights internally and spits out the most probable result depending on what you asked for.
Except it doesn't even do that. Sometimes it goes for a less probable option, and continues from there even if it's logically completely wrong. For example, I asked it to generate a sequence of topics for a course. It started off great with intro topics, and then the topics got progressively more advanced. But then it suddenly listed an intro topic it had already listed, before going back to advanced topics.
What it did was take an alternative completion rather than the most likely one in that moment, even though logically it made no sense. That's why you can't trust anything this technology outputs for now. If you ask it what 1 + 1 is, 99% of the time it'll tell you 2, but sometimes it'll tell you 3 and then argue with you about why it's 3.
You have control over the model: you can make it always pick the most likely choice, and you can have it penalize tokens it has already emitted or that are common in the corpus. ChatGPT by default and by choice has some extra creativity baked in, but in the chat API, where you can control the tunables, you're going to find what you're looking for.
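For instance, with the OpenAI chat API (a minimal sketch; the model name and penalty values here are illustrative, not a recommendation):

```python
import openai  # openai-python, chat completions API circa early 2023

# Pin the sampling so the model always takes the most probable token,
# and penalize repetition instead of leaving "creativity" on.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is 1 + 1?"}],
    temperature=0.0,        # greedy: always the most likely completion
    frequency_penalty=0.5,  # penalize tokens emitted often already
    presence_penalty=0.5,   # penalize tokens that have appeared at all
)
print(response["choices"][0]["message"]["content"])
```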
I do, but this thing is going from 0 to world-scale use in a matter of months. It's not being used like you imagine if you think people are choosing their completions.
Neural networks spot patterns, then patterns made up of patterns. As we have seen with chess and Go, neural networks end up finding patterns that are beyond our comprehension. We are smart, but we cannot fathom the patterns, as there are too many layers one on top of another. And you can see this even in ChatGPT. I asked it to answer my questions in English, Norwegian, and phonetic Norwegian. Not perfect, but good. Scary good. All three, without missing a beat. These patterns are just too easy for neural networks. So our confidence that there is nothing to worry about is founded on a lack of appreciation for how complex this beast is. But in my view the answer is not fear, but acceleration... we need aligned AIs on our side, ready to help protect us...
Thank you for verbalising the issue; overconfidence was the risk I was seeing all along. Widely accepted, shamelessly overconfident, but still potentially a gibberish generator.
Thanks for calling these things out. I didn't see at first glance how language models could be disruptive in these ways. I am doing research for an article on the decline of civilization, and I think it could be partly related to this topic. Any resources you can point to will probably help all of us.
Unfortunately I don't really know of resources to point you to. These are mostly my thoughts. Most probably though influenced by a diet of sci-fi. I can however expand on them.
For somewhat recent and relevant content, check The Orville S03E04. At the time it was just a critique of fake news, but I think it is correct about what can happen if AI makes fake news easy to scale infinitely and with very low barriers to entry.
I think worrying about sentient AI right now is misguided, because generative AI is equally serious and much more urgent. There is no need for further advancement; what I described is possible with current AIs already.
If I ever finish it, I can let you know about it somehow. I'm currently using the material for a science fiction story, but eventually I am going to want to write an article about what I learned. If you want to drop me your contact details, you can reach out to me via my Hacker News profile and I will send you details if and when it's finished.
We already have several instances where social media was used to feed fake information and agitate people into rioting or acts of violence.
Imagine what would happen if we have these LLM tools that aid in improving the storytelling experience and can generate and act as different personas. Having tools that can pretend to be human, even in a limited capacity, is quite dangerous.
At least the biggest short-form social media site is currently self-destructing. ChatGPT tends to hide itself longest when the responses it gives are short and snippy. That would be a disaster in the hands of any actor looking to mass-produce unique-ish propaganda (e.g. Russia), but at least that avenue is being closed off thanks to Musk's boneheadedness.
For longer responses, it tends to follow extremely predictable sentence structure and meanders off quite a bit, so spotting it is kinda easy.
Right? Imagine a bot not only putting up a website, but a constellation of websites, generating all the conversations there, generating news articles, imagery, sound, and video to go with it, and also evolving them in real time based on social media response. Today media companies A/B test their stories; in the future, AI will A/B test entire media companies.
Do you really think China and other hostile countries will halt anything because Elon Musk, Steve Wozniak, and some other guys signed a letter? Also, what about Musk's Tesla itself? They are using AI for FSD and no one bats an eye.
That's because those who should be doing something (the mass producers, global transit, and the 'elites') are busy trying to convince us it's all up to our individual efforts to reduce our personal footprints, which are insignificant even in aggregate.
It's mostly theatre.
I would think a large economic crisis would stop progress on wasteful things like AI pretty quickly, but the large tech companies seem now in the category of 'too big to fail' so I'm not confident.
It is just not true that “nobody is doing anything.” CO2 emissions in the United States and Europe are falling and have been doing so for years. To say that is nothing is misrepresenting reality.
The real comparison is with an alternative scenario. Climate change would have been worse had there been no widespread recognition of its harm.
———
GPT-5 will be finished next year, with the next model in the pipeline.
GPT-4 is just a kindergartner which can code at the level of an average human. (If you disagree, try teaching an average person to code or solve most novel problems better than it can.)
Even if AI does not have independent agency, malicious actors can use it to do large-scale harm much sooner than suggested above.
However, I’m sure someone will imbue AI with some kind of agency soon enough as well. What could go wrong?
What is your operational definition of "hostile to human life" here? What's the most recent period (based on our best knowledge) during which the planet met that definition?
This is silly, unless you think the earth will turn into Venus.
For the majority of its history, Earth has had no ice caps. Primates evolved when the poles were covered in trees, not ice. A warmer planet will support more biodiversity, not less.
It's not about the lack of ice, dingus. The danger is in climate change shocking ecosystems such that they trigger a systemic collapse that we can't contain or manage. There's basically no timeline where the food supply won't be impacted, and I'd like to hear your cavalier attitude about climate change then.
There is zero evidence that a warming planet would cause a systemic collapse in biodiversity. In fact, 500 million years of planetary history show the opposite: as the planet warms, biodiversity increases. There is nothing to argue about here.
And again, you are confused about timeframes. Geologic time vs human time.
Sure, over a period of millions of years, everything evolves and adapts. But in shorter periods of time, before adaptation, there is enormous disruption and entire ecosystems go extinct. Taking countless species down with it.
Human civilization has been around for only a few thousand years. You do the math. As you say, nothing to argue about here.
Sans humans, maybe. But we *are* causing the Holocene extinction, which is essentially defined by the last two centuries of accelerated biodiversity loss. Climate change will only further stress already fundamentally strained ecosystems. It's happening right now, at all of our literal doorsteps.
You're not considering timescales. True, the earth has seen all sorts of climates... over slow-moving, gradual timeframes of 100k to millions of years. And in none of those situations did human civilization exist, relying as it does on specific climatic conditions for food, water, and so on.
But now, the climate is changing 1000x as fast. And we have a civilization of 8 billion people, many of them living on the margins in areas that will soon become less habitable.
So you don't need Venus. Just a few degrees means hundreds of millions of refugees, food shortages, water shortages, and no doubt mass conflict as a result.
A 2°C rise in temperature is insignificant compared to seasonal variation. There is no evidence that this change would decrease biodiversity on earth. All the evidence points the other way.
Climate change will be a humanitarian issue. People claiming that it will lead to ecosystem collapse and a loss of biodiversity are just a distraction. They are taking time and resources away from the real issues.
A warmer planet will have more biodiversity, not less. It will be easier to support 8 billion humans with a warmer, more productive planet.
Climate change was the big thing before COVID. Then we had lockdowns, and a major war. Climate change is already hitting some of us much harder than others (e.g. floods), but that doesn't mean an AI crisis wouldn't emerge in 5 years.
If anything, crises come in bundles. One scenario is that AI takes advantage of these and swoops in to gain political power.
As long as we are economically dependent on China to produce tons of plastic/disposable/etc. consumer products, everything the US/West does to "combat climate change" is more or less in vain. We're just throwing our trash over the wall.
> Nobody is doing anything, nobody seems to care, everyone cheats
While there is indeed a lot of cheating, there's also over a terawatt (peak) of PV capacity.
It's a long way from "nothing" or "not caring".
> It's likely the planet will become hostile to (human) life long before AI will be able to do us any harm.
Depends what you mean by hostile: most of the planet already was before the industrial revolution, and some of it will get more so over the next few decades, but it's not a boolean.
And harm? Again, not really a boolean.
The GOFAI era had Therac-25, but if you don't like to count that and it has to be a neural net, then how about the algorithmic promotion of misinformation on (not necessarily by) YouTube, Facebook, and Google? Deaths from imperfect self-driving cars? The discovery of novel deadly neurotoxins by, essentially, flipping an evil bit, which made the news this time last year: https://www.nature.com/articles/s42256-022-00465-9
The climate change movement has become a cult, and most of the factual basis is disregarded. It's culminated in your Nike shoe boxes covered in greenwashing labels.