I always thought this was the only way to build a true AI -- build a 'virtual baby' that has to go through much the same experiences as a human baby. I'm sure this idea has been explored somewhere already - anybody have any pointers?
This is precisely the approach taken at MIT under Prof. Rod Brooks: "Within our group, Marjanovic, Scassellati & Williamson (1996) applied a similar bootstrapping technique to enable the robot* to learn to point to a visual target."
*named Cog, short for "Cognition"
If I may paraphrase, his model is biologically inspired: it holds that hierarchical layers of behaviours, the absence of a central planning model (distributed processing), and physical and temporal placement in the world (rather than abstractions of the world, or observe/process/react loops) are essential to the formation of a truly intelligent machine.
I have long believed that we aren't going to get an AI that matches biological intelligence until it has to fend for itself. Nearly every biological function is linked to the fundamental need to survive through scarcity in hostile environments in order to reproduce and pass along genetic material. Without that need, there would not have been such rigorous evolutionary arcs that ultimately brought humans to this earth.
The question, of course, is how we could deal with the implications of creating not just a smarter computer, or new life, but an entirely new and unprecedented class of life.
For one thing, if they're so great, why bother keeping us around?
There are plenty of organisms on the planet that co-exist with more powerful ones without either attempting to wipe the other out. There would need to be such fierce competition over natural [or unnatural] resources that only one of the two organisms could survive before competition could plausibly end in one extinguishing the other. Your question is simply an irrational fear, probably far less likely than our being wiped out by an asteroid.
The Animatrix basically did what you suggested: the first robots lived in a plot of Middle Eastern desert set aside for them. After a hundred years their overwhelming superiority made them the dominant economy and global superpower, and as people's living standards began to decline, the humans struck back.
Um, what? Species drive others to extinction all the time. Humans are no exception. We only preserve some species because we care and think nature is cool, but an AI need not hold any such value. It would likely disassemble the Earth for raw material to build a Dyson sphere and maximize its energy intake, without giving the things on the Earth a second thought. Why would it? AIs don't have empathy or morality.
Plenty of biological organisms co-exist, sure, and they have niches in the biosphere. But this new life neither is one nor has one. It could decide to cannibalize the Earth to fuel a spacefaring civilization, for instance. Not an irrational fear.
Non-organic organisms would still require the biosphere to self-propagate on earth. Currently they need organic and inorganic matter to operate. Assuming they stayed purely mechanical constructs, they would need the byproducts of the earth to maintain themselves, and there are limits to those byproducts and the rate at which they can be produced, not to mention the rest of the biosphere that supports their production.
If they somehow evolved past mechanical constructs towards more chemical/biological machines, or some sort of electro-mechanical construct that wasn't constrained by a physical/machine form, they would, as far as I'm aware, still require the biosphere to propagate, or at the very least to interact with us.
Even if they did want to fuel a spacefaring civilization (why???), why would they cannibalize the Earth when they can already travel to planets, moons, asteroids, etc., which have more of the components they actually need to function: unlimited access to solar power, more heavy metals, and none of the limitations and detriments of our biosphere (oxidation, gravity, the ionosphere, electrically conductive material falling on you all the time, etc.)? On top of all that, there'd be these damn adaptable, irrational, homicidal monkeys that fear anything that could threaten them, and many things that don't. It would behoove them to get the hell away from Earth as fast as possible.
--
Anyway, all this assumes that somehow they could adapt faster than humans. No one has yet produced a plausible estimate of when this would happen. Kurzweil says 2045, but that makes no sense when you look at our actual progress over the past 50 years.
We still travel on Boeing 747s fueled by million-year-old decomposed plant juice. The richest nation in the world has one of the least advanced train systems in the world, covering only a fraction of the country. And one of the leading causes of death is the heavy speeding metal boxes we manually pilot through free space, which are insanely less efficient than any other modern mode of transportation, purely because we like cars the most.
Predictions of technological advancement more than 15 years into the future don't work because human progress is not based on capability. Much like your fear of a fate you can't be sure could even happen, our future is not based on what could happen, but on what we make happen - our will.
Just as it is "possible" for us to birth a matricidal organism, it's just as possible for us to willingly never create such an organism. And since we imagine this supposed event would end in our own destruction, and we are [in general] on the side of self-preservation, it's not likely we would willingly pursue this event. It would be more likely if the idea never occurred to us.
--
Great civilizations have existed several times with advanced science, then fell, and it took several millennia just for others to get back to where they had been. In terms of what is actually a threat to ours today, climate change and an unstable political/socioeconomic climate will actually kill us in a few decades if left unchecked. I suggest we focus on the immediate threats before we sweat the theoretical and philosophical ones.
You said it yourself: either get away from the homicidal monkeys, or hey! just get rid of the monkeys. Anyway, it's a given that mechanical constructs can adapt dramatically faster than biological ones - they just reprogram themselves.
The fact that we don't know the probability these things would be 'matricidal' in no way justifies assuming 50-50 odds. I'd say we must be very, very careful not to find out. It's not at all clear what the long-term goals of a device designed to be our slave would be. Of course nobody would intentionally design it to be a threat to our civilization. The entire notion is that our society does not actually consider the long-term results of anything else either; this would be just like all the rest of our inventions: agriculture, fracking, indiscriminate travel and population mixing (providing unlimited disease vectors), politics, money, and on and on.
I've come across some curiosity-driven skill-acquisition research for robots that is similar to this. Below is a link to one of the articles, and more related articles can be found from this author.
In 'The Matrix' they had developed programs that could be uploaded to a physical brain to essentially pre-wire the brain with complex synaptic connections (or so I imagine was the effect). That would be a lot more efficient than waiting 3 years just to get to the point of trimming useless connections. We may have to 'grow' a virtual brain via training to develop the basic platform, but then pre-load it with operations to save time.
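As a loose software analogy, this is roughly what we already do with pretrained networks: pay the training cost once, serialize the result, and let every later instance load it. A minimal sketch, with a toy regression problem and file name standing in for the expensive "growth" phase:

```python
import numpy as np

def grow_brain(x, y, steps=1000, lr=0.1):
    """The slow 'growth' phase: learn weights from experience (toy data here)."""
    w = np.zeros(x.shape[1])
    for _ in range(steps):
        grad = x.T @ (x @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

# One individual pays the full cost of learning...
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.0, -2.0, 0.5])
w = grow_brain(x, y)
np.save("pretrained_weights.npy", w)   # hypothetical checkpoint file

# ...and every later 'brain' just pre-loads the result, skipping the wait.
w2 = np.load("pretrained_weights.npy")
assert np.allclose(w, w2)
```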
If that were to happen, I would suggest that an artificial "soul" could or would be created. Once you have that on your hands, things begin to get interesting. Ethics, religion, law, and so on will have one hell of a job on their hands. Especially when you consider that there would be immediate and obvious military applications.
What does that have to do with a soul, whether one exists, or whether we care?
A powerful and incensed President could order a nuclear strike that could end civilization as we know it. Why the concern over a theoretical intelligence that could theoretically have emotion and theoretically have so much power it could theoretically do the same?
It's hard work to become president. It is much easier to build software, and software can be hacked to cause devastation far faster than anything can be built. Presumably an AI can hack better than humans.
It doesn't matter what we define as a soul, it is the relationship humanity will have with artificial intelligences that matters. If we don't ask these questions, our children will wish we had...if they're still around.
No. You are comparing something that we know is possible, to something where we have no idea if/when it will be possible. So for all we know, it's infinitely easier to become a president. There have been 43 individuals in history who have become US president. There are 0 individuals in history who have built devastating AI.
We can't know whether animals have a soul any more conclusively than we can know that about ourselves. But it's arguable that one is not less probable than the other.
That's how we perceive things. It's judged by us as the best way to secure our own interests. It can be seen as a pact between intelligent creatures to show each other mutual respect.
You can define general artificial intelligence in many ways. This is probably not comprehensive, but "an artificially constructed agent that is capable of solving a wide variety of tasks in a wide variety of different environments, doing this to successfully achieve some stated goal" should cover it.
Add a clause requiring it to be at least as capable at this as an average human, if you want to be sure. At the moment this is closer to a philosophical idea than an actual scientific possibility, but science usually makes its biggest advances at the edge of philosophy. This is also where we can get an idea of the high-level problems that need to be solved. Think of the idea of manned space exploration before the advent of rocketry, for instance.
I dunno, there are plenty of humans who wouldn't make it to AI classification by that measurement, so I'm skeptical to say the least. I don't think any of us are going to see past the uncanny valley of human-computer interaction—computers will have their tasks, but they'll be just that—task solvers. Why make it more magical than that? It's a waste of money to try, too—much of what makes humans unique isn't really useful in a business context (like giving up on problems because you're bored or hungry, or having desires).
That is actually one of the more useful things a computer can do. It is also one of the hardest to program (see the halting problem). It is very useful for a computer to be able to give up when it has to pick out millions of likely candidates from quadrillions of possible answers.
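For what it's worth, "knowing when to give up" in practice doesn't mean solving the halting problem; it usually means an explicit budget. A minimal sketch, where the scoring function and stopping criterion are hypothetical stand-ins:

```python
from itertools import islice

def search_with_give_up(candidates, score, good_enough, budget=1_000_000):
    """Scan an astronomically large candidate stream, keep the best seen,
    and give up after a fixed budget instead of trying everything."""
    best, best_score = None, float("-inf")
    for c in islice(candidates, budget):      # the explicit give-up condition
        s = score(c)
        if s > best_score:
            best, best_score = c, s
        if good_enough(s):
            return best                       # satisficing early exit
    return best                               # budget spent: settle for best so far
```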
>but they'll be just that—task solvers. Why make it more magical than that?
So you're in the 'people are magic' camp. You'll be very disappointed when you learn that people are just billions of years of bio-chemistry left to run rampant, and that electro-mechanics can reproduce the same thing.
...how are you extrapolating that from anything I said? All I mean is that "AI" is inherently meaningless. When you ground it in reality there are more useful terms that are meaningful.
Also, I meant to point out that large parts of the brain's chemistry are not useful to emulate, leading me to question using humans as a base for "intelligence" comparison.
But even humans can't always learn new things—at least not well.
It seems to come down to being able to interpret language in a way that is broadly within the range of a human's ability to agree with another human on the meaning of a phrase.
Are physics engines not yet accurate enough to enable "virtual" pre-training / full training of the networks (physics, lighting conditions, etc.)? If they are, exclusively using physical robots seems somewhat inefficient.
Their system evolves a virtual body which is evaluated by comparing its predicted behaviour (e.g. if motor A is rotated by X degrees, sensor B should get response Y) to real physical movements (moving motor A and reading sensor B). Once an accurate virtual body has been made, it's used to evaluate a bunch of (again, evolved) movement styles in simulation. Once an efficient style has been found, it's used to control the physical motors on the robot.
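If I read that right, the loop looks roughly like the sketch below. All of the object methods (`predict_sensor`, `move_and_read`, `energy_cost`) are hypothetical placeholders for their actual system, not a real API:

```python
import random

def calibrate_body_model(real_robot, candidate_models, trials=20):
    """Stage 1: keep the virtual body whose predicted sensor responses
    best match what the physical motors and sensors actually do."""
    def error(model):
        total = 0.0
        for _ in range(trials):
            angle = random.uniform(-90.0, 90.0)
            predicted = model.predict_sensor(motor_angle=angle)      # simulation
            measured = real_robot.move_and_read(motor_angle=angle)   # hardware
            total += (predicted - measured) ** 2
        return total
    return min(candidate_models, key=error)

def pick_movement_style(body_model, candidate_styles):
    """Stage 2: score movement styles in simulation only; the winner is
    the one finally used to drive the physical motors."""
    return min(candidate_styles, key=body_model.energy_cost)
```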
Hmmm... does anyone know if Grand Theft Auto has an API? I would like to pre-train my autonomous vehicle controller before connecting it to an actual car.
Ideally, yes, we want to pre-train in a virtual environment using as close to the real model robot as possible. I worked on such a problem as part of my PhD research on mobile robots using the Webots simulator (https://www.cyberbotics.com/overview) as my virtual environment.
In my case, I was working on biologically-inspired models for picking up distant objects. It's impractical to tune hyperparameters in hardware, so you need to be able to create a virtual version that gets you close enough. Once you can demonstrate success there, you then have to move to the physical robot, which introduces several additional challenges: 1) imperfections in your actual hardware behavior vs. idealized simulated behavior, 2) real-world sensor noise and constraints, 3) dealing with real-world timing and inputs instead of a clean, lock-step simulated environment, 4) having a different API to poll sensors/actuate servos between the virtual and hardware robots (a thin abstraction layer helps here; see the sketch below), and 5) ensuring that your trained model can be transferred effectively between your virtual and hardware robot control systems.
I was able to solve these issues for my particular constrained research use case, and was pretty happy with the results. You can see a demo reel of the robot here: https://www.youtube.com/watch?v=EoIXFKVGaXw
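On point 4 specifically: a thin abstraction layer lets the same controller drive either body. A minimal sketch; the simulator wrapper and servo-bus calls are hypothetical stand-ins, not real Webots API calls:

```python
from abc import ABC, abstractmethod

class RobotBody(ABC):
    """Single interface the controller targets, so the trained model never
    needs to know whether it is driving the simulation or the hardware."""
    @abstractmethod
    def read_sensors(self) -> list[float]: ...
    @abstractmethod
    def set_joints(self, angles: list[float]) -> None: ...

class SimulatedBody(RobotBody):
    def __init__(self, sim):
        self.sim = sim                        # hypothetical simulator wrapper
    def read_sensors(self):
        return self.sim.sensor_values()       # clean, lock-step, noise-free
    def set_joints(self, angles):
        self.sim.apply_joint_angles(angles)

class HardwareBody(RobotBody):
    def __init__(self, bus):
        self.bus = bus                        # hypothetical servo-bus driver
        self._smoothed = None
    def read_sensors(self):
        raw = self.bus.poll_sensors()         # real sensors: noisy, asynchronous
        if self._smoothed is None:
            self._smoothed = list(raw)
        self._smoothed = [0.9 * s + 0.1 * r for s, r in zip(self._smoothed, raw)]
        return self._smoothed                 # cheap exponential smoothing
    def set_joints(self, angles):
        self.bus.command_joints(angles)       # takes real time to settle
```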
That's a very interesting question. My guess is that the physics of grabbing things, especially non-rigid things, is very messy and difficult to simulate. It would be great if someone here were able to give a detailed answer to this question though.
1. The best / most recent attempt at this was for the DARPA Robotics Challenge and the Gazebo simulator.
This was still very buggy and prone to hilarious / depressing physics.
2. Almost all game physics engines start from rigid body and slap on particles, deformables, etc.
An exciting counterexample is NVIDIA Flex, which starts with unified particle simulation (much closer to the molecular dynamics simulation used for, you know, real work).
3. From the perspective of AI, accurate simulation might not be required.
Intelligence requires complexity and a certain degree of predictability. So as long as you can build a rich and consistent / learnable world then whatever simulation you have could be super useful.
From the perspective of transferring that knowledge into a robot though you need accurate physics.
4. Natural touch sensors are hard to do in rigid body simulators but are super important to naturalistic learning.
There's a ton of information that your sense of touch and body position provides about how the world works, and simulating the tens of thousands of soft-contact touch points you need for this kind of sensing is pretty challenging today.
Lots of physics engines do all sorts of things to minimize contact points, or ignore them if there's no motion. You have to work against those optimizations a lot if you want mechanoreceptors and proprioception.
I agree with all the points you made, but I would add another: with external cameras for positioning and movement feedback, you don't need accurate geartrains or encoders, nor even a rigid robot. Since the localization is all in software (and software is scalable/free from Google's standpoint), there is potential for lots of weight and cost savings on the hardware side.
Kind of like my robot:
https://github.com/jonnycowboy/YARRM
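The control loop for that kind of camera-closed positioning can be tiny. A sketch, with `camera_position` and `drive` as hypothetical callables for the vision system and motor driver:

```python
def visual_servo(target, camera_position, drive, gain=0.5, tol=1.0):
    """Close the loop through the camera: the gears can be sloppy and the
    arm flexible, because the camera keeps measuring the actual error."""
    while True:
        x, y = camera_position()               # where the effector really is
        ex, ey = target[0] - x, target[1] - y  # remaining error, in pixels
        if (ex * ex + ey * ey) ** 0.5 < tol:
            return                             # close enough to the target
        drive(gain * ex, gain * ey)            # proportional correction step
```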
I was hoping to use a robotic arm for a project I'm working on, and am wondering if you guys could answer a question about motors. In my very limited research it looked like one of the factors that makes the industrial (KUKA, etc.) robots so expensive is that they use backlash-free motors. What does that even mean?
I also saw a couple startups aimed at sub-$5k robots (like carbon.ai). Are they solving this problem in some novel way?
Backlash-free motors are motors where the output shaft begins moving as soon as the motor starts moving. In particular, when the motor reverses direction there is no "slack" to pick up before the output shaft starts to move. The slack is called backlash when talking about gears and motors and what have you.
It's important for robots not to have backlash, because each joint's slack adds position uncertainty, and across repeated moves and multiple joints that can add up into a potentially big cumulative error. The robot could end up operating outside of the intended design envelope, which might be a safety problem.
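A toy deadband model makes the effect visible: the same commanded endpoint, approached from different directions, lands in different places (the 0.5-degree backlash figure is made up; a real joint is messier):

```python
def actual_position(moves, backlash=0.5):
    """Toy model of one geared joint (degrees): after every direction
    reversal, the first `backlash` degrees of motor travel only take up
    slack and produce no output motion."""
    output, slack, last_dir = 0.0, backlash, 0
    for move in moves:
        direction = 1 if move > 0 else -1
        if direction != last_dir:
            slack = backlash                  # reversal: slack must be retaken
            last_dir = direction
        effective = max(abs(move) - slack, 0.0)
        slack = max(slack - abs(move), 0.0)
        output += direction * effective
    return output

print(actual_position([25.0]))        # 24.5: approached from below
print(actual_position([30.0, -5.0]))  # 25.0: same command, from above
# Same commanded endpoint, 0.5 degrees apart. Six joints with this much
# slack can leave the tool tip off by several degrees' worth of error.
```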
We're getting outside of my home-hobbyist experience here, but I think you could still guarantee the robot would end up in a particular position; the backlash just makes it hard to say when the robot will get to that position. Using encoders on all the motors would require having inputs for each encoder, which can get complex.
My guess is you can go pretty far with janky parts if you don't run for long periods of time and also measure where they are.
I think you mean backlash-free gearboxes (i.e. cable-driven gearboxes, harmonic drives, or spring-loaded gearboxes that always apply a minimal but constant tension).
It is difficult, but doable assuming Coulomb friction.
The two main issues are that it is computationally expensive, and that your mechanical modeling has to closely match the actual robot (especially the contact model); otherwise I suspect the training data will be useless in the end.
So if you can afford an actual robot, it makes sense to do the training using it.
How computationally expensive? Are we talking supercomputer time to simulate the few seconds it takes to grab an object? Advanced robots are expensive too, and usually much harder to get access to than computational resources (e.g. AWS).
It depends: rigid/non-rigid objects, stiffness for non-rigid objects, approximate/exact Coulomb model, spatial/temporal resolutions, and solution precision.
On a typical desktop computer, that would probably range from real time for a fast/imprecise simulation to maybe one day for a full-blown simulation.
But again, most roboticists will tell you there is a world between the simulation (even an accurate one) and the actual robot.
Gazebo with Mike Sherman's physics engine might be good enough. DARPA paid to get a decent physics engine into Gazebo; the ones from games were never quite right.
There are things you can't simulate (yet). In my experience it's beneficial to run real live testing to gather data about the individual parts themselves. For example, I had a robot's navigation fail when it encountered a certain type of water container (the one-gallon type, in a given color, found in US supermarkets). Like kissing, you can't replace the real thing.
This is the bin-picking problem, which has been worked on since the 1980s. For objects of known shape, it's more or less solved.[1] The general case is still a problem. It's good to see Google making progress with this.