
> there is no reason to believe a connection between the mechanical model and what happens in organisms has been established

The universal approximation theorem. And that's basically it. The rest is empirical.

No matter which physical processes happen inside the human brain, a sufficiently large neural network can approximate them. Barring unknowns like super-Turing computational processes in the brain.
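
For reference, one standard form of the theorem (Cybenko/Hornik; hedging on the exact hypotheses, which vary by version): for any continuous function on a compact set and any tolerance, some one-hidden-layer network with a suitable non-polynomial activation gets uniformly within that tolerance.

    \forall f \in C(K),\ K \subset \mathbb{R}^n \text{ compact},\ \forall \varepsilon > 0:\quad
    \exists\, N(x) = \sum_{i=1}^{m} a_i\, \sigma(w_i^{\top} x + b_i)
    \ \text{such that}\ \sup_{x \in K} |f(x) - N(x)| < \varepsilon

Here m is the hidden-layer width; the theorem assumes f is a continuous function on a compact domain and is silent on how large m has to be.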



The universal approximation theorem is set in a precise mathematical context; I encourage you to limit its applicability to that context despite the marketing label "universal" (which it isn't). Consider your concession about empiricism. There's no empirical way to prove (i.e. there's no experiment that can demonstrate beyond doubt) that all brain or other organic processes are deterministic and can be represented completely as functions.


A function is the most general way of describing relations. Non-deterministic processes can be represented as functions with a probability distribution as the codomain. Physics seems to require only continuous functions.
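
A toy sketch of that framing (purely illustrative; the types and names here are made up, not any particular library): a noisy process becomes an ordinary function once its codomain is "distribution over states" rather than "state".

    import random

    # A "non-deterministic" counter process, written as a plain function
    # from a state to a probability distribution over next states.
    State = int
    Distribution = dict[State, float]  # state -> probability, sums to 1

    def step(s: State) -> Distribution:
        # Deterministic mapping into the space of distributions:
        # from state s, move to s+1 with prob 0.7, stay with prob 0.3.
        return {s + 1: 0.7, s: 0.3}

    def sample(dist: Distribution) -> State:
        # Sampling is where the apparent randomness re-enters.
        states, probs = zip(*dist.items())
        return random.choices(states, weights=probs, k=1)[0]

    s = 0
    for _ in range(5):
        s = sample(step(s))
    print(s)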

Sorry, but there's not much evidence that can support human exceptionalism.


Some differential equations that model physics admit singularities and multiple solutions. Therefore, functions are not the most general way of describing relations. Functions are a subset of relations.
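
One hedged example of the multiple-solutions point (the classic non-Lipschitz case): the initial value problem

    y'(t) = 2\sqrt{|y(t)|}, \qquad y(0) = 0

has both y(t) = 0 and y(t) = t^2 (for t >= 0) as solutions, so "the state at time t" is a relation on the initial data, not a function of it.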

Although "non-deterministic" and "stochastic" are often used interchangeably, they are not equivalent. Probability is applied analysis whose objects are distributions. Analysis is a form of deductive, i.e. mechanical, reasoning. Therefore, it's more accurate (philosophically) to identify mathematical probability with determinism. Probability is a model for our experience. That doesn't mean our experience is truly probabilistic.

Humans aren't exceptional. Math modeling and reasoning are human activities.


> Some differential equations that model physics admit singularities and multiple solutions.

And physicists regard those as unphysical: the theory breaks down, we need a better one.


For example, the Euler equations model compressible flow with discontinuities (shocks in the flow field variables) and rarefaction waves. These theories are accepted and used routinely.
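
For concreteness, the 1D compressible Euler equations in conservation form (E is the total energy density, p comes from an equation of state); their weak solutions genuinely contain jump discontinuities (shocks) and rarefaction fans:

    \partial_t \rho + \partial_x(\rho u) = 0
    \partial_t(\rho u) + \partial_x(\rho u^2 + p) = 0
    \partial_t E + \partial_x\big(u\,(E + p)\big) = 0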


Great. A useful approximation of what really happens in the fluid. But I'm sure there are no shocks and rarefactions in physicists' neurons while they are thinking about it.

Switching into a less facetious mode...

Do you understand that in the context of this dialogue it's not enough to show some examples of functions that are discontinuous or otherwise unrepresentable by NNs? You need at least to give a hint as to why such functions cannot be avoided when approximating the functionality of the human brain.

Many things are possible, but I'm not going to keep my mind open to the possibility of a teal Russell's teapot before I get a hint of its existence, so to speak.


I don't understand your point here. A (logical) relation is, by definition, more general than a function, and it is telling that we still suck at using and developing truly relational models that are not univalent (i.e. functions). Only a few old logicians really took the calculus of relations proper seriously (Peirce, for one). We use functions precisely because they are less general: they are rigid and simpler to work with. I do not think anyone is working under the impression that a function is a high-fidelity means to model the world as it is experienced and actually exists. It is necessarily reductionistic (and abstract). Any truth we achieve through functional models is necessarily a general, abstracted truth, which in many ways proves to be extremely useful but in others (e.g. when an essential piece of information in the particular is not accounted for in the general reductive model) can be disastrous.
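
A minimal sketch of the univalence distinction (purely illustrative): a relation is just a set of pairs, and it is a function exactly when no input is paired with two different outputs.

    # A binary relation as a set of (input, output) pairs.
    circle = {(0, 1), (0, -1), (1, 0), (-1, 0)}   # points with x^2 + y^2 = 1
    square = {(0, 0), (1, 1), (2, 4), (-2, 4)}    # y = x^2 at a few points

    def is_function(relation):
        # Univalent: each input appears with at most one output.
        seen = {}
        for x, y in relation:
            if x in seen and seen[x] != y:
                return False
            seen[x] = y
        return True

    print(is_function(circle))  # False: x = 0 relates to both 1 and -1
    print(is_function(square))  # True: this relation is univalent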


I'm not a big fan of philosophy. The epistemology you are talking about is another abstraction on top of the physical world. But the evolution of the physical world, as far as we know, can be described as a function of time (at least in a weak gravitational field, with the energies involved well below the grand unification scale, i.e. for objects like brains).

The brain is a physical system, so whatever it does (including philosophy) can be replicated by modelling (a (vastly) simplified version of) underlying physics.

Anyway, I am not especially interested in discussing the possible impossibility of an LLM-based AGI. It might be resolved empirically soon enough.


That's not useful by itself, because "anything can model anything else" doesn't put any upper bound on the emulation cost, which for one small task could be larger than the total energy available in the entire Universe.


Either the brain violates the physical Church-Turing thesis or it doesn't.

If it does, well, it will take more time to incorporate those physical mechanisms into computers to get them on par with the brain.

I leave the possibility that it's "magic"[1] aside. It's just impossible to predict, because it will violate everything we know about our physical world.

[1] One example of "magic": we live in a simulation and the brain is not fully simulated by the physics engine, but the creators of the simulation for some reason gave it access to computational resources that are impossible to harness using the standard physics of the simulated world. Another example: an interactionist soul.


I mean, that is why they mention super-Turing processes like quantum-based computing.


Quantum computing actually isn't super-Turing; it "just" computes some things faster. (Strictly speaking it's somewhere between a standard Turing machine and a nondeterministic Turing machine in speed, and the first can emulate the second.)
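
On "the first can emulate the second": a deterministic machine can simulate a nondeterministic one by exhaustively trying every sequence of guesses, at exponential cost. A toy sketch (subset-sum stands in for the nondeterministic guess; the function name is made up):

    from itertools import product

    def nondeterministic_subset_sum(nums, target):
        # A nondeterministic machine would "guess" one bit per element;
        # a deterministic one just enumerates all 2^n guesses.
        for choice in product([0, 1], repeat=len(nums)):
            if sum(n for n, c in zip(nums, choice) if c) == target:
                return True   # some branch accepts
        return False          # every branch rejects

    print(nondeterministic_subset_sum([3, 34, 4, 12, 5, 2], 9))  # True (4 + 5)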


If we're nitpicking: quantum computing algorithms could (if implemented) compute certain things faster than the best classical algorithms we know. We don't know any quantum algorithms that are provably faster than all possible classical algorithms.


Well yeah, we haven't even proved that P != NP yet.



