However, that leaves you without any tools to check whether two things are equal or not, save for trivial, syntactic equality. It's like, in a programming sense, equality is declared to exist but never actually defined.
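A rough Python sketch of what I mean (the names and digit convention are purely illustrative): two programs that denote the same real number, where the only equality the language hands you compares the programs, not the numbers they stand for.

```python
# Two "computable numbers", each represented as a program returning
# the n-th decimal digit (n >= 1) of the number it denotes.
def third_a(n: int) -> int:
    return 3                    # 0.333... written down directly

def third_b(n: int) -> int:
    return (10**n // 3) % 10    # 0.333... computed another way

# The only equality we get "for free" is syntactic/identity comparison
# of the programs themselves, not of the reals they denote:
print(third_a == third_b)  # False -- different function objects,
                           # even though both denote 1/3
```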
Got stuck in that one too because I mistranslated vegetables.
In my mind they were all vegetables, since they are neither animals nor minerals, as it would be in Portuguese.
Edit: thinking about it, he wanted to make a joke about Mr Potato, but ended up creating a captcha for non-native English speakers. He could try selling that idea to ICE lol
Corn was the trick for me. I classify it as a grain, but the captcha listed it as a vegetable.
When I looked it up, I found this.
> Botanically speaking, corn is a fruit, and the kernel itself is classified as a grain. However, in culinary terms, whole corn, such as corn on the cob, is typically treated as a vegetable.
Out of curiosity to the OP: did you use an AI to tweak/refine the text? It contains a lot of writing patterns similar to some read-aloud 4chan greentext/copypasta YouTube channels, especially the liberal use of whimsical similes ("like it's 2000 and I'm downloading a JPEG on dial-up", "starting to feel like cosmic punishment", "like it's protecting nuclear launch codes") and jocular asides (" -- exposed to British weather, coated in a mysterious film of protein shake and regret, probably being livestreamed to TikTok by someone's ring doorbell -- ").
So I started to wonder whether my AI radar was spot on, or whether that style of writing is something people naturally do – because I wouldn't bother, but then again, I don't run a blog that people actually read.
I got the exact same uncanny valley feel from the text. Great article, though. Some people just don't have great writing skills and need a leg up, so I think it's totally excusable to use AI to help you write something. Writing up a little pet project like this is super valuable and I'd hate it to only exist on the author's system because they didn't enjoy writing or didn't feel good at it.
Heh, I guess so. It's just an uneasy feeling I can't get rid of. Maybe I'm just being paranoid. Then again, I wonder if said greentexts are AI-generated too. At least their contents are likely to be fake.
Are you talking about Monte Carlo tree search? I consider it part of the algorithm in AlphaZero's case. But agreed that RL is a lot harder in real-life setting than in a board game setting.
I think it's very fortunate, because I used to be an AI doomer. I still kinda am, but at least I'm now about 70% convinced that the current technological paradigm is not going to lead us to a short-term AI apocalypse.
The fortunate thing is that we managed to invent an AI that is good at _copying us_ instead of being a truly maverick agent, which kinda limits it to "average human" output.
However, I still think that all the doomer arguments are valid, in principle. We very well may be doomed in our lifetimes, so we should take the threat very seriously.
> I don’t see anything that would even point into that direction.
I find it kind of baffling that people claim they can't see the problem. I'm not sure about the risk probabilities, but I can at least see that there clearly exists a potential problem.
In a nutshell: humans – the most intelligent species on the planet – have absolute power over every other species, specifically because of our intelligence and accumulated technical prowess.
Introducing another, equally or more intelligent thing into the equation risks us ending up _without_ power over our own existence.
The doomer position seems to assume that super intelligence will somehow lead to an AI with a high degree of agency which has some kind of desire to exert power over us. That it will just become like a human in the way it thinks and acts, just way smarter.
But there’s nothing in the training or evolution of these AIs that pushes towards this kind of agency. In fact a lot of the training we do is towards just doing what humans tell them to do.
The kind of agency we are worried about was driven by evolution, in an environment where human agents competed with each other for limited resources, leading us to desire power over each other and to kill each other. There's nothing in AI evolution pushing in this direction. What the AIs are competing for is to perform the actions we ask of them with minimal deviance.
Ideas like the paper clip maximiser are also deeply flawed in that they assume certain problems are even decidable. I don't think any intelligence could be smart enough to figure out whether it would be better to work with humans or to try to exterminate them to solve a problem. Their evolution would heavily bias them towards the first; that's the only form of action that will be in their training. But even if they were to consider the other option, there may never be enough data to come to a decision, especially in an environment with thousands of other AIs of equal intelligence potentially guarding against bad actions.
We humans have a very handy mechanism for overcoming this kind of indecision: feelings. Doesn’t matter if we don’t have enough information to decide if we should exterminate the other group of people. They’re evil foreigners and so it must be done, or at least that’s what we say when our feelings become misguided.
What we should worry about with super intelligent AI is that they become too good at giving us what we want. The “Brave New World” scenario, not “1984”.
I would be relieved to be mistaken, but I still see quite egregious risks there. For instance, a human bad actor with a powerful AI would have both intelligence and agency.
Secondly, I think there is a natural pull towards agency even now. Many are trying to make our current, feeble AIs more independent and agentic. Once the capability to behave that way effectively is there, it's hard to go back. After all, agents are useful to their owners like minions are to their warlords, but a minion too powerful is still a risk to their lord.
Finally, I'm not convinced that agency and intelligence are orthogonal. It seems more likely to me that agentic behaviour is a requirement for even reaching sufficient levels of intelligence in the first place.
A lot of doomers gloss over the fact that AI is bounded by the laws of physics, raw resources, energy and the monumental cost of reproducing itself.
Humans can reproduce by simply having sex, eating food and drinking water. An AI can reproduce only by first mining resources, refining those resources, building another Shenzhen, then rolling out another fab at the same scale as TSMC. That is assuming the AI wants control over the entire process. This kind of logistics requires the cooperation of an entire civilisation. Any attempt by an AI could be trivially stopped because of the sheer scope of the infrastructure required.
Sure, trivially. Let's see you do it then. There are new data centres being built and that's just for LLMs. So stop them.
Are you starting to see the problem? You might want to stop a rogue AI but you can bet there will be someone else who thinks it will make them rich, or powerful, or they just want to see the world burn.
>You might want to stop a rogue AI but you can bet there will be someone else who thinks it will make them rich, or powerful, or they just want to see the world burn.
What makes you think they won't be stopped? This one guy needs a dedicated power plant, an entire data centre, and needs to source all the components and materials to build it. Again: heavy reliance on logistics and supply chains. He can't possibly control all of those, and disrupting just a few (which would be easy) will inevitably prevent him and his AI from progressing any further. At best, he'd be a mad king with his machine pet, trapped in a castle, surrounded by a world that has turned against him. His days would almost certainly be numbered.
Possibly, but I do not think Yudkowsky's opinion of himself has any bearing on whether or not the above article is a good encapsulation of why some people are worried about AGI x-risk (and I think it is).
Yes, fortunately these LLM things don't seem to be leading to anything that could be called an AGI. But that isn't to say that a real AGI capable of self-improvement couldn't be extremely dangerous.
> Curious to understand where these thoughts are coming from
It's a cynical take but all this AGI talk seems to be driven by either CEOs of companies with a financial interest in the hype or prominent intellectuals with a financial interest in the doom and gloom.
Sam Altman and Sam Harris can pit themselves against each other and, as long as everyone is watching the ping pong ball back and forth, they both win.
I'm not OP or a doomer, but I do worry about AI making tasks too achievable. Right now if a very angry but not particularly diligent or smart person wants to construct a small nuclear bomb and detonate it in a city center, there are so many obstacles to figuring out how to build it that they'll just give up, even though at least one book has been written (in the early 70s! The Curve of Binding Energy) arguing that it is doable by one or a very small group of committed people.
Given an (at this point still hypothetical, I think) AI that can accurately synthesize publicly available information without even needing to develop new ideas, and then break the whole process into discrete and simple steps, I think that protective friction is a lot less protective. And this argument applies to malware, spam, bioweapons, anything nasty that has so far required a fair amount of acquirable knowledge to do effectively.
I get your point, but even whole-ass countries routinely fail at developing nukes.
"Just" enrichment is so complicated and requires basically every tech and manufacturing knowledge humanity has created up until the mid 20th century that an evil idiot would be much better off with just a bunch of fireworks.
Biological weapons are probably the more worrisome case for AI. The equipment is less exotic than for nuclear weapon development, and more obtainable by everyday people.
Yeah, the interview with Geoffrey Hinton had a much better summary of risks. If we're talking about the bad actor model, biological weaponry is both easier to make and more likely as a threat vector than nuclear.
It might require that knowledge implicitly, in the tools and parts the evil idiot would use, but they presumably would procure these tools and parts, not invent or even manufacture them themselves.
Even that is insanely difficult. There's a great book by Michael Levi called On Nuclear Terrorism, which never got any PR because it is the anti-doomer book.
He methodically goes through all the problems that an ISIS or a Bin Laden would face getting their hands on a nuke or trying to manufacture one, and you can see why none of them have succeeded and why it isn't likely any of them would.
They are incredibly difficult to acquire, manufacture or use.
A couple of bright physics grad students could build a nuclear weapon. Indeed, the US Government actually tested this back in the 1960s - they had a few freshly minted physics PhDs design a fission weapon with no exposure to anything but the open literature [1]. Their design was analyzed by nuclear scientists with the DoE, and they determined it would most likely work if they built and fired it.
And this was in the mid 1960s, where the participants had to trawl through paper journals in the university library and perform their calculations with slide rules. These days, with the sum total of human knowledge at one's fingertips, multiphysics simulation, and open source Monte Carlo neutronics solvers? Even more straightforward. It would not shock me if you were to repeat the experiment today, the participants would come out with a workable two-stage design.
The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.
If you go the uranium route, you need a very large centrifuge complex with many stages to get to weapons grade - far more than you need for reactor grade, which makes it hard to have plausible deniability that your program is just for peaceful civilian purposes.
If you go the plutonium route, you need a nuclear reactor with on-line refueling capability so you can control the Pu-239/240 ratio. The vast majority of civilian reactors cannot be refueled online, with the few exceptions (eg: CANDU) being under very tight surveillance by the IAEA to avoid this exact issue.
The most covert path to weapons grade nuclear material is probably a small graphite or heavy water moderated reactor running on natural uranium paired up with a small reprocessing plant to extract the plutonium from the fuel. The ultra pure graphite and heavy water are both surveilled, so you would probably also need to produce those yourself. But we are talking nation-state or megalomaniac billionaire level sophistication here, not "disgruntled guy in his garage." And even then, it's a big enough project that it will be very hard to conceal from intelligence services.
> The difficult part of building a nuclear weapon is and has always been acquiring weapons grade fissile material.
IIRC the argument in the McPhee book is that you'd steal fissile material rather than make it yourself. The book sketches a few scenarios in which UF6 is stolen off a laxly guarded truck (and recounts an accident where some ended up in an airport storage room by error). If the goal is not a bomb but merely to harm a lot of people, it suggests stealing minuscule quantities of plutonium powder and then dispersing it into the ventilation systems of your choice.
The strangest thing about the book is that it assumes a future proliferation of nuclear material as nuclear energy becomes a huge part of the civilian power grid, and extrapolates that the supply chain will be weak somewhere sometime, but that proliferation never really came to pass, and to my understanding there's less material circulating around American highways now than there was in 1972 when it was published.
The other thing is the vast majority of UF6 in the fuel cycle is low-enriched (reactor grade), so it's not useful for building a nuclear weapon. Access to high-enriched uranium is very tightly controlled.
You can of course disperse radiological materials, but that's a dirty bomb, not a nuclear weapon. Nasty, but orders of magnitude less destructive potential than a real fission or thermonuclear device.
That same function could be fulfilled by better search engines though, even if they don't actually write a plan for you. I think you're right about it being more available now, and perhaps that is a bad thing. But you don't need AI for that, and it would happen anyway sooner or later even with just incremental increases in our ability to find information other humans have written. (Like a version of google books that didn't limit the view to a small preview, to use your specific example of a book where this info already exists)
I think the most realistic fear is not that it has scary capabilities, it's that AI today is completely unusable without human oversight, and if there's one thing we've learned it's that when you ask humans to watch something carefully, they will fail. So, some nitwit will hook up an LLM or whatever to some system and it causes an accidental shitstorm.
Jokes aside, a true AGI would displace literally every job over time. Once AGI + robots exist, what is the purpose of people anymore? That's the doom: mass societal existentialism. Probably worse than if aliens landed on Earth.
You jest, but the US Department of Defense already created SkyNet.
It does almost exactly what the movies claimed it could do.
The super-fun people working in national defense watched Terminator and, instead of taking the story as a cautionary tale, used the movies as a blueprint.
This outcome in a microcosm is bad enough, but take in the direction AI is going and humanity has some real bad times ahead.
Ok, so AI / robots take all the jobs. Why is that bad? It's not like the Civil War was fought to end slavery because people needed jobs. All people really need is some food and clean water. Healthcare etc. is super nice, but I don't see why robots and AI would lead to that stuff becoming LESS accessible.
I kind of get it. A super intelligent AI would give that corporation exponentially more wealth than everyone else. It would make inequality 1000x worse than it is today. Think feudalism but worse.
Not just any AI. AGI, or more precisely ASI (artificial super-intelligence), since it seems true AGI would necessarily imply ASI simply through technological scaling. It shouldn't be hard to come up with scenarios where an AI which can outfox us with ease would give us humans at the very least a few headaches.
Potentially wreck the economy by causing high unemployment while enabling the technofeudalists to take over governments. An even more doomer scenario is if they succeed in creating ASI without proper guardrails and we lose control over it. See the AI 2027 paper for that: basically it paper-clips the world with data centers.
Act coherently in an agentic way for a long time, and as a result be able to carry out more complex tasks.
Even if it is similar to today's tech, and doesn't have permanent memory or consciousness or identity, humans using it will. And very quickly, they/it will hack into infrastructure, set up businesses, pay people to do things, start cults, autonomously operate weapons, spam all public discourse, fake identity systems, stand for office using a human. This will be scaled thousands or millions of times more than humans can do the same thing. This at minimum will DOS our technical and social infrastructure.
Examples of it already happening are addictive ML feeds for social media, and bombing campaigns targeting based on network analysis.
The frame of "artificial intelligence" is a bit misleading. Generally we have a narrow view of the word "intelligence" - it is helpful to think of "artificial charisma" as well, and also artificial "hustle".
Likewise, the alienness of these intelligences is important. Lots of the time we default to mentally modelling AI as human. It won't be, it'll be freaky and bizarre like QAnon. As different from humans as an aeroplane is from a pigeon.
In the case of the former, hey! We might get lucky! Perhaps the person who controls the first super-powered AI will be a benign despot. That sure would be nice. Or maybe it will be in the hands of democracy – I can't ever imagine a scenario where an idiotic autocratic fascist thug would seize control of a democracy by manipulating an under-educated populace with the help of billionaire technocrats.
In the case of the latter, hey! We might get lucky! Perhaps it will have been designed in such a way that its own will is ethically aligned, and it might decide that it will allow humans to continue having luxuries such as self-determination! Wouldn't that be nice.
Of course it's not hard to imagine a NON-lucky outcome of either scenario. THAT is what we worry about.
It's interesting that whereas old kango are Chinese loanwords, many newer ones are made-up words, and some even got backported into the Chinese language!
The newer words were usually made up to explain Western philosophical and scientific concepts. A lot of this work was done in an academic context, so whoever came up with an appropriate translation first got to be cited by everyone else.
Dijkstra's algorithm _could_ be universally taught in 7th grade if we had the curriculum for it. Maybe I'm biased, but it doesn't seem conceptually much more difficult than solving first-degree equations, and we teach those in 7th grade, at least in Finland where I'm from.
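For a sense of scale, here's roughly how small the core of the algorithm is. This is just a minimal Python sketch (the graph representation and names are my own choices, not from any curriculum):

```python
import heapq

def dijkstra(graph, start):
    """graph: {node: [(neighbour, weight), ...]}, weights non-negative."""
    dist = {start: 0}
    queue = [(0, start)]                      # (distance so far, node)
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):  # stale queue entry, skip
            continue
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_d       # found a shorter route
                heapq.heappush(queue, (new_d, neighbour))
    return dist

# e.g. dijkstra({"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}, "a")
# -> {"a": 0, "b": 1, "c": 3}
```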
Depends on which school, as this is not taught at all outside mathematics schools. My claim is that you can teach it to 5th graders; this is what I tell my university students, and I mean it.
Others and I from the math schools knew this algorithm in 8th grade for sure, as we were already using it in 9th grade for competitions. That does not mean all my classmates knew it, of course not.
So it depends who you teach this to. Theoretically you should be able to; practically, well, perhaps not so much, as math is not the only thing an 8th grader learns; in fact, their head is bombarded with a dozen disciplines at a time.
Besides, I recently met a classmate, a previous IOI medalist, who has worked quant-something somewhere for 15+ years, has a PhD and everything. We got talking about mathematics and I found, to my total surprise, that he knows very little about grammars and has never used them. He remembers Dijkstra or Ford-Fulkerson, but only by name, while I'm sure he learned these at some point at Stanford, as shortest-path and A* were not something we had in textbooks back in the 90s for sure.
For sure! The main thing keeping us from teaching advanced things to younger folks is the seeming addiction to teaching poorly/ineffectively. I'm here to find the physical play-with-your-hands demonstrations needed for teaching kids as young as 5 the intuitions/concepts behind higher-order category theory without all the jargon.
I think we forget how old the term algorithm is. We started this journey trying to automate human tasks with divide and conquer, not with computers.
Merge sort was supposedly invented in 1950, but it's more likely it was invented in 1050 than in 1950. Sort a room full of documents for me. You have three minions, go.
A human is different from "humans". A human with a stack may sort it into four stacks and then sort amongst them, yes.
But a room of five clerks all taking tasks off a pile and then sorting their own piles is merge sort at the end of the day. Literally, and figuratively.
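The clerk analogy maps pretty directly onto code. A minimal Python sketch, with the piles as lists (names are just illustrative):

```python
def merge(left, right):
    """Combine two already-sorted piles into one sorted pile."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]   # append whatever pile is left over

def merge_sort(pile):
    """Split the pile, let each 'clerk' sort their half, then merge."""
    if len(pile) <= 1:
        return pile
    mid = len(pile) // 2
    return merge(merge_sort(pile[:mid]), merge_sort(pile[mid:]))
```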
The problem with computables is that equivalence between them is only semi-decidable. (If the two numbers are different, you can eventually detect it, but if they are equal, you can't confirm it. The problem is that you don't know a priori whether they are different, so you might get lucky and find a difference, but then again you might not.)
We know for sure that algebraic numbers behave nicely in terms of equivalence, and there are other, bigger number systems that are conjectured to behave nicely ( https://en.wikipedia.org/wiki/Period_(algebraic_geometry) ), but the problem with these and computers is that they are hard to represent.
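A sketch of the semi-decidability point in Python, treating a computable real as a function that yields its digits (purely illustrative, and ignoring representation subtleties like 0.999... = 1.000...): comparing two of them terminates as soon as a digit differs, but if they really are equal, the loop runs forever.

```python
def differ(x_digit, y_digit):
    """x_digit(n), y_digit(n): the n-th digit of two computable reals.

    Returns True as soon as a differing digit is found.
    If the two numbers are equal, this never returns --
    equality of computables is only semi-decidable."""
    n = 0
    while True:
        if x_digit(n) != y_digit(n):
            return True
        n += 1
```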
Yeah, all these types have problems. We've decided to put up with IEEE floating point numbers, but we could have chosen big rationals, or drawn the line anywhere else. I don't disagree that there's no satisfying "correct" answer, but it's a little disappointing that programmers so easily accept the status quo as though nothing else could be in its place.
Maybe Python having automatic bignums, like Lisps often did, will help introduce new programmers to the idea that the 32-bit two's complement integer provided by all modern computers isn't somehow "really" how numbers work.
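For example, in Python the integer just keeps growing instead of wrapping at 32 or 64 bits, and the standard library's fractions module gives exact rationals where binary floats drift (a small sketch):

```python
from fractions import Fraction

# Arbitrary-precision integers: no 32-bit (or 64-bit) wraparound.
print(2**64 + 1)            # 18446744073709551617

# Exact rationals vs. IEEE binary floating point.
print(0.1 + 0.2 == 0.3)                                       # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
```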