Hacker News

> If a being could fully comprehend the underlying rules of the universe, it might not need to perform physical experiments at all; it could simply simulate outcomes internally.

It likely couldn't, though; that's the problem.

At a basic level, for whatever abstract system you can think of, there must be an optimal physical implementation of that system: the fastest physically realizable implementation of it. If that physical implementation were to exist in reality, no intelligence could reliably predict its behavior, because doing so would imply access to an even faster implementation, which by definition cannot exist.

The issue is that most physical systems are arguably the optimal implementation of whatever it is that they do. They aren't implementations of simple abstract ideas like adders or matrix multipliers; they're chaotic systems that follow no specification. They just do what they do. How do you approximate chaotic systems which, for all you know, may depend on the minutest details? On what basis do we think it likely that there exists a computer circuit that can simulate their outcomes before they happen? It's magical thinking.
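To make the "minute details" point concrete, here's a toy sketch (my example, not anything from the thread): the logistic map x -> 4x(1-x) in its chaotic regime. Two starting points that differ in the tenth decimal place produce trajectories that decorrelate within a few dozen steps, so a predictor would need the initial condition to absurd precision:

```python
# Sensitive dependence on initial conditions in the logistic map at r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1], r))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # perturb by one part in ten billion

# The tiny perturbation grows by many orders of magnitude over the run.
print(max(abs(x - y) for x, y in zip(a, b)))
```

Errors roughly double each step, so knowing the state to 10 decimal places buys you only ~30-odd steps of prediction; real physical systems have vastly more state than one float.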

Note that intelligence has to simulate outcomes, because it has to control them: it has to prove to itself that its actions will help achieve its goals. Evolution doesn't have this limitation. It's not an agent, it doesn't have goals, it doesn't simulate outcomes; stuff just happens. In that sense it's likely that certain things can evolve that cannot be intelligently designed (as in designed, constructed, and then controlled). It's quite possible intelligence itself falls into that category: we can't create and control AGI, AGI can't improve itself and control the outcome either, and so on.



I agree that computational irreducibility and chaos impose hard limits on prediction. Even if an intelligence understood every law of physics, it might still be unable to simulate reality faster than reality itself, since the physical world is effectively its own optimal computation.

I guess where my speculation comes in is that "simulation" doesn't necessarily have to mean perfect 1:1 physical emulation. Maybe a higher intelligence could model useful abstractions and approximations: simplified but still predictive frameworks, accurate enough for control and reasoning even in chaotic domains.

After all, humans already do this in a primitive way: we can't simulate every particle of the atmosphere, but we can predict weather patterns statistically. So perhaps the difference between us and a much higher intelligence wouldn't be breaking physics, but having much deeper and more general abstractions that capture reality's essential structure better.
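The same toy chaotic map illustrates this distinction (again my own sketch, not the parent's argument): individual iterates are unpredictable, yet a statistical question about the same system has a stable answer. For r = 4 the long-run distribution of iterates is the arcsine distribution, whose mean is 0.5, and a time average recovers it from any generic seed:

```python
# Point predictions fail for this map, but the long-run time average is
# a stable, predictable statistic (approximately 0.5 for r = 4).
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

def long_run_mean(x0, steps, burn_in=100):
    x, total = x0, 0.0
    for i in range(burn_in + steps):
        x = logistic(x)
        if i >= burn_in:
            total += x
    return total / steps

print(long_run_mean(0.2, 100_000))   # close to 0.5
print(long_run_mean(0.37, 100_000))  # different seed, same statistic
```

That's the weather-forecasting move in miniature: give up on the trajectory, predict the distribution.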

In that sense, it's not "magical thinking"; I'm just acknowledging that our cognitive compression algorithms (our abstractions) are extremely limited. A mind that could discover higher-order abstractions might not outrun physics, but it could reason about reality in qualitatively new ways.



