I have a feeling ASI will follow a similar trajectory to fusion, with the critical intelligence explosion always 2 years away. AGI by Turing’s definition is here. But fusion has been just around the corner my whole life…
Is it? AI is impressive and all, but I don't think any of them have passed the Turing test, as defined by Turing (pop-culture conceptions of the Turing test are usually much weaker than what the paper actually proposes), although I'd be happy to be proven wrong.
> pop culture conceptions of the Turing test are usually much weaker than what the paper actually proposes
I've just read the 1950 paper "Computing Machinery and Intelligence" [1], in which Turing proposes his "Imitation Game" (what's now known as a "Turing Test"), and I think your claim is very misleading.
The "Imitation Game" proposed in the paper is a test that involves one human examiner and two examinees, one being a human and the other a computer, both of which are trying to persuade the examiner that they are the real human; the examiner is charged with deciding which is which. The popular understanding of "Turing Test" involves a human examiner and just one examinee, which is either a human or a computer, and the test is to see whether the examiner can tell.
These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
Aside: The bulk of this 28-page paper anticipates possible objections to his "Imitation Game" as a worthwhile alternative to the original question "Can machines think?", including a theological argument and an argument based on the existence of extra-sensory perception (ESP), which he takes seriously as it was apparently strongly supported by experimental data at that time. It also cites Helen Keller as an example of how learning can be achieved through any mechanism that permits bidirectional communication between teacher and student, and on p. 457 anticipates reinforcement learning:
> We normally associate punishments and rewards with the teaching process. Some simple child-machines can be constructed or programmed on this sort of principle. The machine has to be so constructed that events which shortly preceded the occurrence of a punishment-signal are unlikely to be repeated, whereas a reward-signal increased the probability of repetition of the events which led up to it.
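That passage is essentially a reward signal in modern terms. A toy Python sketch of the principle, just to make it concrete (the actions, learning rate, and update rule here are my own illustration, not anything from the paper):

    import random

    # Toy version of the punish/reward "child machine" principle Turing describes.
    # Everything concrete here (actions, learning rate, update rule) is invented
    # for illustration.
    actions = ["A", "B", "C"]
    prob = {a: 1.0 / len(actions) for a in actions}   # start uniform
    LEARNING_RATE = 0.2

    def choose():
        r, acc = random.random(), 0.0
        for a in actions:
            acc += prob[a]
            if r <= acc:
                return a
        return actions[-1]

    def reinforce(action, rewarded):
        # A reward-signal makes the preceding action more likely to repeat;
        # a punishment-signal makes it less likely.
        prob[action] *= (1 + LEARNING_RATE) if rewarded else (1 - LEARNING_RATE)
        total = sum(prob.values())
        for a in actions:                              # renormalise
            prob[a] /= total

    for _ in range(200):
        a = choose()
        reinforce(a, rewarded=(a == "B"))              # pretend only "B" is rewarded

    print(prob)   # probability mass ends up concentrated on "B"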
> These are not identical tests -- but if both the real human examinee and the human examiner in Turing's original test are rational (trying to maximise their success rate), and each have the same expectations for how real humans behave, then the examiner would give the same answer for both forms of the test.
I disagree. Having a control and not having a control is a huge difference when conducting an experiment.
I have a rather specialized interest in an obscure subject, but one with a physical aspect pretty much any person can relate to and reason about, and pretty much every time I try to "discuss" the specifics of it with an LLM, it tells me things which are blatantly false, or otherwise tries to carry on the conversation in a way no sane human being would.
The LLM by itself is not designed to pass the Turing test; an application that suitably prompts the LLM can be. It's like asking why you can't drive the nail with the handle of the hammer: that's not what it's for.
LLMs have taught us that there is more than one kind of intelligence. They are definitely intelligent, but not the specific kind of intelligence we were hoping for. We get to be wise after the event and move the goal-posts. It is not so much that they have the wrong kind of intelligence, as that we never suspected the variety of possible forms of intelligence. We are clearly making progress towards ASI, but we don't know how distant the goal really is, because we don't know what the goal actually is.
Fusion is much better understood. We are not going to create "the wrong kind of fusion" and have to come up with a new plan.
Not really - at least at current goals, population size, etc. Even with the very high energy expenditure of, say, lots of AI hardware running the "Skynet" we're driving ourselves into, we're talking on the order of 30,000 TWh/year (roughly humanity's electricity generation today, per Wikipedia).
Imagining a future: with ~3% growth, say fusion is deployed and everything goes electric in the next few years (not happening that fast, though), with AI data-centers everywhere so that individual-level AI (the personal OS-level assistant from the movie "Her", say) runs per human, and we reach the out-of-my-buttocks figure of 500 TWh/year of AI load in 10 years' time - which is crazy - well, that still would not "boil the world"!
The Sun delivers ~170,000 TWh per year. So 500 TWh still would not be that significant, and within the Sun's yearly delivery fluctuations.
The problem with energy generation today is that it releases greenhouse gases, and those gases are disrupting the planet’s energy balance - especially how Earth gets rid of the massive energy it receives from the Sun. We do need to restore the balance between what comes in and what goes back out; fusion can help tackle that problem specifically, so it's beneficial overall even if it eventually adds a fractional percentage to the overall planetary energy bill.
I picture fusion as a complementary source, not the only one; once/if deployed, it would help close some of the key gaps that keep solar (and other renewables) from being deployed at 100%.
It delivers 170,000 TWh per hour (i.e. 170,000 TW)!
3.14 * (6378 km)^2 * 1300 W/m^2 ≈ 166 PW
It's a ludicrous amount of energy - roughly the entire human annual energy usage is delivered every 70 minutes. The whole problem of AGW is that even a tiny modulation, in absolute terms, of things that affect the steady state (e.g. greenhouse gases) can have substantial effects. But it's also, presumably, going to be key to fixing the problem, if we do fix it.
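If you want to sanity-check those numbers, a quick back-of-envelope in Python (the ~180,000 TWh/year world consumption figure is my own rough assumption):

    import math

    R_EARTH_M = 6378e3        # equatorial radius, m
    SOLAR_FLUX = 1300         # W/m^2 intercepted, as in the figure above

    power_w = math.pi * R_EARTH_M**2 * SOLAR_FLUX
    print(f"{power_w / 1e15:.0f} PW")                      # ~166 PW

    twh_per_hour = power_w / 1e12                          # 1 TW = 1 TWh per hour
    print(f"{twh_per_hour:,.0f} TWh per hour")             # ~166,000 TWh/h

    # "entire human annual energy usage ... every 70 minutes":
    WORLD_ANNUAL_TWH = 180_000    # assumed rough figure for primary energy use
    print(f"{WORLD_ANNUAL_TWH / twh_per_hour * 60:.0f} minutes")   # ~65 min

    # The hypothetical 500 TWh/year of AI load from upthread, vs. solar input:
    sun_annual_twh = twh_per_hour * 24 * 365
    print(f"AI share of solar input: {500 / sun_annual_twh:.1e}")  # ~3e-7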
Could that actually work? You'd have to expend energy to concentrate the heat into your power generation system to power (I assume) a laser or similar emitter to beam the energy away. Would you be able to make sure that the extra energy used to move the heat around and the inefficiencies in the laser power generation get included in the outgoing photons? This seems, perhaps naively, like the entropy is going the "wrong" way.
You could presumably radiate it to space by moving the heat to something that can "see" a clear sky, but you can have this happen naturally on a far larger scale by reducing the GHG content of the atmosphere and increasing the radiative efficiency of the entire planet surface, as well as with various passive systems like cool roofs, albedo manipulation, and special materials that radiate at specific wavelengths.
Yes, it would require radiative surfaces with a view of the sky. Such systems are already in use, including surfaces that get much cooler than the ambient air even in direct sun on a warm day.
When you cool a building or a data center or whatever, you can pump that heat into a high temperature fluid and send it to a sky-radiator instead of sending it to an air-exchange radiator. So heat produced in processes could be moved to radiator assemblies and “beamed” into space (I probably should have said radiated).
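For a rough feel of what such a radiator could reject per square metre, a quick Stefan-Boltzmann estimate (the emissivity and both temperatures below are assumed values; convection and pumping losses are ignored):

    SIGMA = 5.670e-8       # Stefan-Boltzmann constant, W/(m^2*K^4)
    EMISSIVITY = 0.95      # assumed, plausible for a selective-emitter surface
    T_RADIATOR = 330.0     # K, assumed "high temperature fluid" from the heat loop
    T_SKY = 260.0          # K, assumed effective clear-sky temperature

    # Net radiative flux from the radiator surface to the sky, per m^2
    net_flux = EMISSIVITY * SIGMA * (T_RADIATOR**4 - T_SKY**4)
    print(f"{net_flux:.0f} W/m^2")                         # ~390 W/m^2

    # Radiator area needed to reject 1 MW of heat under these assumptions
    print(f"{1e6 / net_flux:.0f} m^2 per MW")              # ~2500 m^2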
“Energy” is a colloquially ambiguous term. The better terms are available energy (exergy) and entropy.
The Earth radiates away almost exactly as much energy as it receives. It has to. Otherwise it would boil. Our biosphere, however, extracts a lot of available energy from that system. That results in the Sun shining low-entropy energy on the Earth, and the Earth radiating high-entropy radiation away.
Put another way, a universe that is homogeneous at 10 million degrees has plenty of energy. But it has zero useful energy, because there is no temperature gradient to exploit.
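Rough numbers for that, treating entropy flux as simply P/T and ignoring the 4/3 factor for blackbody radiation (the temperatures are the usual textbook values):

    P = 166e15             # W, solar power intercepted (from the estimate upthread)
    T_SUN = 5800.0         # K, effective temperature of incoming sunlight
    T_EARTH = 255.0        # K, Earth's effective emission temperature

    s_in = P / T_SUN       # entropy flux carried in by sunlight
    s_out = P / T_EARTH    # entropy flux radiated back out to space
    print(f"in:  {s_in:.1e} W/K")                           # ~2.9e13 W/K
    print(f"out: {s_out:.1e} W/K  (~{s_out / s_in:.0f}x)")  # ~6.5e14 W/K, ~23x more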
You can stop worrying, because fusion energy from this kind of reactor will be anything but cheap. It will likely be more expensive than energy from current generation fission power plants.
The goal posts on AGI would be superluminal and somewhere back in the 1400s if they were physical objects. I’ve never seen or heard of a field so deeply in denial about its progress.
For every major criterion that gets trounced we somehow invent 4 or 5 new requirements for it to be “real” AGI. Now, it seems, AGI must display human-level intelligence at superhuman speed (servicing thousands of conversations at once), be as knowledgeable as the most knowledgeable 0.1% of humans across every facet of human knowledge, be superhumanly accurate, perfectly honest, and never make anything up.
I remember when AGI meant being able to generalize knowledge over problems not specifically accounted for in the algorithm… the ability to exhibit the “generalization” of knowledge, in contrast to algorithmic knowledge or expert systems. It was often referred to as “mouse-level” or sometimes “dog-level” intelligence. Now we expect something vastly more capable than any being that has ever existed or it’s not “AGI” lmfao. “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.
"I remember when AGI meant being able to generalize knowledge over problems not specifically accounted for in the algorithm… "
So do we have that?
As far as I know, we just have very, very large algorithms (to use your terminology). Give it any problem not in the training data and it fails.
Same goes for most animals and humans, the vast majority of the time. We expect consistent savant-level performance or it’s not “AGI”. If humans were good at actual information synthesis, Einstein and Tom Robbins would be everyone’s next-door neighbors.
As a sounding board and source of generally useful information, even my small locally hosted models generally outperform a substantial slice of the population.
We all know people we would not ask anything that mattered, because their ideas and opinions are typically not insightful or informative. Conversing with a 24b model is likely to have higher utility. Do these people then not exhibit “general intelligence”? I really think we generally accept pattern matching and next-token ramblings, hallucinations, and rampant failures of reasoning in stride from people, while applying a much, much higher bar to LLMs.
To me this makes no sense, because LLMs are compilations of human culture and their only functionality is to replicate human behavior. I think on average they do a pretty good job vs a random sampling of people, most of the time.
I guess we see this IRL when we internally label some people as “NPCs”.
"As a sounding board and source of generally useful information, even my small locally hosted models generally outperform a substantial slice of the population."
So does my local copy of Wikipedia.
But the lines do get blurry, and many real humans indeed seem to be no more than stochastic parrots feigning understanding.
> “ASI” will probably have to solve all of the world’s problems and bring us all to the promised land before it will merit that moniker lol.
People base their notions of AI on science fiction, and it usually goes one of two ways in fiction.
Either a) Skynet awakens and kills us all, or
b) the singularity happens, AIs get so far ahead they become deities, and maybe the chosen elect of transhumanists get swept up into some simulation that is basically a heavenly realm or something.
So yeah, bringing us to the promised land is an expectation of super AI that does seem to come out of certain types of science fiction.