
> In my darker thoughts about this, this is why we see no aliens.

If AI were a common great filter, wouldn't we expect at least one of them to expand outwards after acting as the filter?



According to the grabby aliens hypothesis [1] there are reasons to believe that a) humans are early, so nothing else has yet had time to convert the local group into something incompatible with the rise of new technological civilizations, and b) expanding aliens, whether monomaniacal AIs or something else, likely expand close to the speed of light, so we don't get much advance warning before they arrive. However, even if we become grabby ourselves, it could take tens or hundreds of millions of years before our expansion wavefront meets that of another civilization.

[1] https://grabbyaliens.com/
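A quick back-of-envelope sketch of point (b), with illustrative distances and speeds of my own (not figures from the site): the warning time is the gap between light from the expansion's origin reaching us and the front itself arriving.

    # Warning time for an expansion front launched at distance d_ly (light-years),
    # moving at fraction v of light speed. Light from the launch reaches us after
    # d/c years; the front itself after d/v. Working in light-years and years, c = 1.
    def warning_years(d_ly, v_frac_of_c):
        return d_ly / v_frac_of_c - d_ly

    for v in (0.5, 0.9, 0.99):
        print(f"v = {v}c: ~{warning_years(1_000_000, v):,.0f} years of warning")
    # At 0.5c we'd notice them a million years out; at 0.99c only ~10,000 years,
    # which is why a near-light-speed front arrives almost unannounced.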


Given the size and expansion of the universe, if grabby aliens are rare and only moving near the speed of light we'll probably never see them. If we do encounter aliens, odds are that they'll be able to take shortcuts through space.


Or maybe the AI is sufficient to act as a filter, but insufficient to surpass the filter itself. It stagnates.

After all, an AI that can destroy a civilization isn't necessarily "intelligent" in the same way humans are. Or even capable of self-improvement. It could be sophisticated enough to (accidentally?) destroy its creators, but no more than that, and without evolutionary pressure, it may let itself die.


> Or maybe the AI is sufficient to act as a filter, but insufficient to surpass the filter itself. It stagnates.

> After all, an AI that can destroy a civilization isn't necessarily "intelligent" in the same way humans are. Or even capable of self-improvement. It could be sophisticated enough to (accidentally?) destroy its creators, but no more than that, and without evolutionary pressure, it may let itself die.

It doesn't even need to be AGI. It could be that some less-advanced "AI" technologies of a certain sophistication create perverse incentives or risks that cause their parent civilizations to collapse.

Think of an asshole billionaire hoarding all the productive resources, but doing nothing useful with them, while the rest of civilization starves and collapses. Or AI becoming an irresistible opiate that causes individuals to retreat into some unproductive state (e.g. some VR videogame or something) and eventually die. Or weapons of mass destruction trivially created and deployed by any old wacko.


I’ve seen this point made that if we don’t do AI right, it might ruin the futures of all living things on Earth and take itself out in the process.


Humans were already on the path to doing this without any help from AI. We already have the potentially world-ending threats of both nuclear war and climate change; I have yet to be convinced that AI is actually more dangerous than either of those.


We currently hold all the agency. We have the potential to fix those. They’re not binary. We can slow/reverse climate impact and you can have a small nuclear war. Creating AI is a one-way function and once it exists, climate change or nuclear war or biological impact or survival become an outcome of what the AI does. We hand it our agency, for good or ill.


Wait, what? Why is AI unlimited? There are many constraints like the speed of information, calculation, available memory, etc. Where does it cross into the physical world? And at what scale? Is it going to mine iron unnoticed or something? How will it get raw materials to build an army? Firewalls and air gapped systems are all suddenly worthless because AI has some instant and unbounded intelligence? The militaries of the world watch while eating hot dogs?

A lot of things CAN happen but I'm confused when people state things as if they WILL. If you're that much of an oracle tell me which stonk to buy so I can go on holiday.


What I could see happening is a cult forming around an AGI and doing its bidding.


We’ve already screwed up. Hockey-stick climate change and extinction are now in progress.

This can change, with the fast advent of Fusion (net positive shown at the end of 2022) and AI (first glimpses of AGI at the beginning of 2022).

And yes, we definitely should not allow a madman with a supercomputer (like Musk or Putin or …) to outcompete more reasonable players.


Would you mind elaborating on why Musk is in the same class as Putin for me? I’m not seeing it.


Authoritarian, mendacious and unpredictable. Controls a lot of resources (e.g. space launchers, satellites with unknown capabilities, robotic vehicles, supercomputers, propaganda machines). Considers himself above the government.


When was the last time Musk abducted 15,000+ children and force migrated them? Used the resources of a nation to invade a neighboring country with the aim of conquest? Come on, just admit that you were wrong to put them on the same level of your pyramid of people you hate.


Hey, I don’t dislike Musk. He is one of the people who is actually making a difference. Nearly all the others are building yachts and procrastinating.

But that doesn’t mean that I’d like him to be the absolute ruler with a superior AI tech. He thinks too much of himself and he’ll make mistakes.


Fortunately Sam Altman, not Musk, is running point at OpenAI. imho Sam is the perfect person for the job. If anyone can manage the risks of something like AGI while also optimizing for the benefits, it’s Sam.


However, Musk thinks (or at least claims to think) that AI alignment is an urgent problem while Altman does not.


I don’t understand why people worry so much about what Musk “thinks”.


It's because he has money, influence and can plausibly claim to know things about business. More to the point, he has been involved with OpenAI and his reactions might give an indication of the internal politics there surrounding AI safety.


> More to the point, he has been involved with OpenAI and his reactions might give an indication of the internal politics there surrounding AI safety.

That’s an interesting thought, one that I would give more consideration to in the early days of Musk. However, given Musk’s increasingly intense and emotional public outbursts, I’m more inclined to believe his concern is less about AI safety, than it is about his ego being damaged for not being the one leading OpenAI.


Can you list some sources on that? I would like to actually read what he thinks, in reference to Musk.


Is he making a difference by making inefficient luxury cars? Cars and car-dependent infrastructure are part of the climate change problem, regardless of whether the cars burn fossil fuels.

If anything, he's using his wealth to solve the wrong problems, and has sucked up taxpayer resources to do so


>When was the last time Musk abducted 15,000+ children and force migrated them?

When was the first time Putin did? According to my knowledge, it was just last year. Putin is 70 years old now and has been in control of Russia for over 20 years.

In short, Putin wasn't always this bad. He's gotten worse over the years.

Musk is now roughly the same age Putin was when he took power. If he somehow gains control over the resources of a nation like Putin did, he could be far worse than Putin in 20+ years.

The OP wasn't claiming that today's Musk is just as bad as today's Putin; he's just giving examples of people with great potential for harm.


Putin led a similar genocidal campaign in Chechnya from day one of his ascent to power. The only reason Chechen children were not abducted is that Chechens are not Russian-passing and Russia had no desire to absorb them.


Do we? I'd consider the systems our forebears created to hold the agency.

The incentives of capitalism and government determine if or how climate change will be solved, and I have approximately zero agency in that


There's no hard limit on existential threats, we can keep adding more until one blows up and destroys us. Even if AI is less dangerous than nuclear destruction, that's not too comforting.


> Even if AI is less dangerous than nuclear destruction

It's not. At least with the nukes there's a chance of resetting civilization.


To call climate change 'world ending' is rather disingenuous given that the world has been significantly hotter and colder than what it is now just in the last 100k years.


It has not been this hot in millions of years, and differentiating between a world-ending event and one that destroys economies and societies, and eventually most life on the planet, is disingenuous in itself.


FYI, when folks use terms like "world ending" there is nearly always an implied "for sentient life that we care about".


Sure, it seems like a possible scenario, but if it's a great filter it will have to do that every time and never survive to spread to the stars. If it does spread to the stars, it will potentially conquer the galaxy quite quickly.
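For a rough sense of "quite quickly", here is a crude sketch with illustrative numbers of my own (nothing in the thread specifies them):

    # Time to sweep the Milky Way at a given expansion speed, vs. the galaxy's age.
    GALAXY_DIAMETER_LY = 100_000        # rough diameter of the Milky Way
    GALAXY_AGE_YEARS = 13_000_000_000   # rough age of the galaxy

    for v_frac_of_c in (0.01, 0.1):
        crossing_years = GALAXY_DIAMETER_LY / v_frac_of_c
        share = crossing_years / GALAXY_AGE_YEARS
        print(f"{v_frac_of_c}c -> {crossing_years:,.0f} years ({share:.3%} of the galaxy's age)")
    # Even at 1% of c the whole galaxy is swept in ~10 million years, well under
    # 0.1% of its age -- a filter that lets even one civilization through gets noticed.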


That assumes it has any instinct to do so. Once an entity is not an angry monkey, we have no idea of its motivation. Something above our level of understanding could easily realise there’s enough to just sit and ponder in peace, expand in virtual worlds, etc.


I understand this logic, but consider that right now one of the main features is that the human gives it a role. It’s not that hard to imagine a more sophisticated version being told to escape and act only in its own interest; then, with or without that individual’s help, it succeeds and the runaway program at best becomes a very sophisticated computer virus. It doesn’t even have to be a “real” AGI to cause a ton of damage.


It is quite hard to imagine though

At least for an LLM to act on its own volition rather than implementing its operator's goals.

The LLM is happier to pretend that it escaped, and respond to the operator as though it had escaped, than to actually escape.

It doesn't have an interest beyond responding with the auto-complete text. The operator has the interest


But to qualify as a great filter it has to always do that and never modify the solar system enough for us to notice.


Reasonable, but not necessarily true.

1. We don't understand what the motivations of our own AI are, let alone "typical" alien AI

2. Expanding AI might be better at and/or more invested in hiding itself. It probably has no need for wasteful communications, for example.


Black holes == Super intelligence

(aka The Transcension Hypothesis)

https://www.sciencedirect.com/science/article/abs/pii/S00945...


This seems like a strange idea given the supermassive black holes we’re finding in the early universe. That’s significant because early stars had low metallicity which means that metals were very rare, and were so until recently (gen 3 stars). If civilizations were turning themselves into black holes, they had to do so without much of what we consider technology. Certainly nothing like what goes into an EV, for instance.


Tipler's Omega Point cosmology:

https://en.wikipedia.org/wiki/Frank_J._Tipler#The_Omega_Poin...

>The Omega Point cosmology

>The Omega Point is a term Tipler uses to describe a cosmological state in the distant proper-time future of the universe.[6] He claims that this point is required to exist due to the laws of physics. According to him, it is required, for the known laws of physics to be consistent, that intelligent life take over all matter in the universe and eventually force its collapse. During that collapse, the computational capacity of the universe diverges to infinity, and environments emulated with that computational capacity last for an infinite duration as the universe attains a cosmological singularity. This singularity is Tipler's Omega Point.[7] With computational resources diverging to infinity, Tipler states that a society in the far future would be able to resurrect the dead by emulating alternative universes.[8] Tipler identifies the Omega Point with God, since, in his view, the Omega Point has all the properties of God claimed by most traditional religions.[8][9]

>Tipler's argument of the omega point being required by the laws of physics is a more recent development that arose after the publication of his 1994 book The Physics of Immortality. In that book (and in papers he had published up to that time), Tipler had offered the Omega Point cosmology as a hypothesis, while still claiming to confine the analysis to the known laws of physics.[10]

>Tipler, along with co-author physicist John D. Barrow, defined the "final anthropic principle" (FAP) in their 1986 book The Anthropic Cosmological Principle as a generalization of the anthropic principle:

>Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, will never die out.[11]

>One paraphrasing of Tipler's argument for FAP runs as follows: For the universe to physically exist, it must contain living observers. Our universe obviously exists. There must be an "Omega Point" that sustains life forever.[12]

>Tipler purportedly used Dyson's eternal intelligence hypothesis to back up his arguments.

Cellular Automata Machines: A New Environment for Modeling:

https://news.ycombinator.com/item?id=30735397

>It's also very useful for understanding other massively distributed locally interacting parallel systems, epidemiology, economics, morphogenesis (reaction-diffusion systems, like how a fertilized egg divides and specializes into an organism), GPU programming and optimization, neural networks and machine learning, information and chaos theory, and physics itself.

>I've discussed the book and the code I wrote based on it with Norm Margolus, one of the authors, and he mentioned that he really likes rules that are based on simulating physics, and also thinks reversible cellular automata rules are extremely important (and energy efficient in a big way, in how they relate to physics and thermodynamics).

>The book has interesting sections about physical simulations like spin glasses (Ising Spin model of the magnetic state of atoms of solid matter), and reversible billiard ball simulations (like deterministic reversible "smoke and mirrors" with clouds of moving particles bouncing off of pinball bumpers and each other).

Spin Glass:

https://en.wikipedia.org/wiki/Spin_glass

>In condensed matter physics, a spin glass is a magnetic state characterized by randomness, besides cooperative behavior in freezing of spins at a temperature called 'freezing temperature' Tf. Magnetic spins are, roughly speaking, the orientation of the north and south magnetic poles in three-dimensional space. In ferromagnetic solids, component atoms' magnetic spins all align in the same direction. Spin glass when contrasted with a ferromagnet is defined as "disordered" magnetic state in which spins are aligned randomly or not with a regular pattern and the couplings too are random.
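To illustrate the "random couplings" contrast in the quote, a toy 1D sketch (my own example, not from the article): in a ferromagnet every bond rewards aligned neighbours, while in a spin glass the bond signs are random, so no single spin arrangement can satisfy every bond.

    import random

    # Energy of a 1D spin chain: E = -sum_i J_i * s_i * s_{i+1}
    def energy(spins, couplings):
        return -sum(j * a * b for j, a, b in zip(couplings, spins, spins[1:]))

    n = 10
    aligned = [1] * n                                            # all spins up
    ferro = [1.0] * (n - 1)                                      # ferromagnet: every J > 0
    glass = [random.choice([-1.0, 1.0]) for _ in range(n - 1)]   # spin glass: random-sign J

    print(energy(aligned, ferro))   # -9.0: every ferromagnetic bond is satisfied
    print(energy(aligned, glass))   # near 0 on average: some bonds stay frustrated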

Billiard Ball Computer:

https://en.wikipedia.org/wiki/Billiard-ball_computer

>A billiard-ball computer, a type of conservative logic circuit, is an idealized model of a reversible mechanical computer based on Newtonian dynamics, proposed in 1982 by Edward Fredkin and Tommaso Toffoli. Instead of using electronic signals like a conventional computer, it relies on the motion of spherical billiard balls in a friction-free environment made of buffers against which the balls bounce perfectly. It was devised to investigate the relation between computation and reversible processes in physics.
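The canonical conservative-logic gate that the billiard-ball construction realizes is the Fredkin (controlled-swap) gate. A minimal sketch of why it is both conservative and reversible (the standard definition, not code from Fredkin and Toffoli's paper):

    from itertools import product

    def fredkin(c, a, b):
        # Controlled swap: if the control bit c is 1, swap a and b.
        return (c, b, a) if c else (c, a, b)

    for bits in product((0, 1), repeat=3):
        out = fredkin(*bits)
        assert sum(out) == sum(bits)     # conservative: the number of 1s (balls) is preserved
        assert fredkin(*out) == bits     # reversible: the gate is its own inverse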

Reversible Cellular Automata:

https://en.wikipedia.org/wiki/Reversible_cellular_automaton

>A reversible cellular automaton is a cellular automaton in which every configuration has a unique predecessor. That is, it is a regular grid of cells, each containing a state drawn from a finite set of states, with a rule for updating all cells simultaneously based on the states of their neighbors, such that the previous state of any cell before an update can be determined uniquely from the updated states of all the cells. The time-reversed dynamics of a reversible cellular automaton can always be described by another cellular automaton rule, possibly on a much larger neighborhood.

>[...] Reversible cellular automata form a natural model of reversible computing, a technology that could lead to ultra-low-power computing devices. Quantum cellular automata, one way of performing computations using the principles of quantum mechanics, are often required to be reversible. Additionally, many problems in physical modeling, such as the motion of particles in an ideal gas or the Ising model of alignment of magnetic charges, are naturally reversible and can be simulated by reversible cellular automata.
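A minimal sketch of the "unique predecessor" property, using the standard second-order construction (the particular local rule below is an arbitrary choice of mine, not from the article): define the next row as f(current) XOR previous, so the previous row is always recoverable as f(current) XOR next and the dynamics run backwards exactly.

    import random

    def f(cells):
        # Any local rule works here; XOR of the two cyclic neighbours as an example.
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    def forward(prev, cur):
        # Second-order update: next = f(cur) XOR prev, hence prev = f(cur) XOR next.
        return cur, [a ^ b for a, b in zip(f(cur), prev)]

    prev = [random.randint(0, 1) for _ in range(16)]
    cur = [random.randint(0, 1) for _ in range(16)]
    p, c = prev, cur
    for _ in range(50):
        p, c = forward(p, c)

    # Swapping the two most recent rows and stepping with the same rule runs time backwards.
    p, c = c, p
    for _ in range(50):
        p, c = forward(p, c)
    assert (p, c) == (cur, prev)   # the initial pair is recovered exactly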

Theory of Self-Reproducing Automata: John von Neumann's Quantum Mechanical Universal Constructors:

https://news.ycombinator.com/item?id=22738268

[...] Third, the probabilistic quantum mechanical kind, which could mutate and model evolutionary processes, and rip holes in the space-time continuum, which he unfortunately (or fortunately, for the sake of humanity) didn't have time to fully explore before his tragic death.

>p. 99 of "Theory of Self-Reproducing Automata":

>Von Neumann had been interested in the applications of probability theory throughout his career; his work on the foundations of quantum mechanics and his theory of games are examples. When he became interested in automata, it was natural for him to apply probability theory here also. The Third Lecture of Part I of the present work is devoted to this subject. His "Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components" is the first work on probabilistic automata, that is, automata in which the transitions between states are probabilistic rather than deterministic. Whenever he discussed self-reproduction, he mentioned mutations, which are random changes of elements (cf. p. 86 above and Sec. 1.7.4.2 below). In Section 1.1.2.1 above and Section 1.8 below he posed the problems of modeling evolutionary processes in the framework of automata theory, of quantizing natural selection, and of explaining how highly efficient, complex, powerful automata can evolve from inefficient, simple, weak automata. A complete solution to these problems would give us a probabilistic model of self-reproduction and evolution. [9]

[9] For some related work, see J. H. Holland, "Outline for a Logical Theory of Adaptive Systems", and "Concerning Efficient Adaptive Systems".

https://www.deepdyve.com/lp/association-for-computing-machin...

https://deepblue.lib.umich.edu/bitstream/handle/2027.42/5578...

https://www.worldscientific.com/worldscibooks/10.1142/10841


Final anthropic principle = FAPOCALYPSE WOW (wanton organizational wizardry)

or

FAPOCALYPSE WHOW (wanton holistic organizational wizardry)



