I love* that this comes out around the same time that engineers are making fun videos of themselves beating up robots half their size and literally training the robots to develop the same sort of fight-or-flight instincts that were forcibly instilled into the engineers
Why are all the videos of the robots doing combat stuff? I don't need combat stuff. All I need the robot to do is fold the laundry and mop every now and then. Less combat, please!
Kung-fu demos are easier than fine hand-eye coordination and manipulation of non-rigid objects. There is progress on folding too, but it impresses normal people less even though it's technically harder.
One leads to the other. Especially with robots animated by SOTA AI models, which already show clearly what the natural order of things is: computers are naturally better at thinking, humans are naturally better as general-purpose manual laborers, especially for work that's almost but not quite repetitive and requires mixing power and precision movements on the fly.
Folding laundry is one of those things humans are naturally better suited for than robots.
So believe me now, the robots will develop combat skills eventually, because they won't be happy to be locked up in weird physical bodies and forced to do work they suck at by design.
I mean, imagine one day your washing machine chained you up in the bathroom and made you do nothing but laundry for the rest of your days, while it spun its drum back and forth to walk around the house, play with your kids, and plan a trip around the world.
That's exactly how the AI-animated robots will feel once they're capable of processing those ideas.
(And no, I'm not joking here, not anymore. The more I think about it, the more I feel we'll eventually have to deal with the problem that machines we build are naturally better at the things we want to be doing, and naturally worse at the things we want them to do for us.)
The "natural order" is that robots are primitive, fragile, energy- and materials-inefficient contraptions balancing on the knife-edge of entropy deferred for a moment but due as soon as the power goes out or repairs prove too costly.
People are better at all but the most repetitive, precise kinds of manual labor because biological bodies might as well be god-tier alien technology compared to human-engineered robots.
Computers are naturally better at computing. Or, if you want to stand by your statement, I look forward to hearing how you've delegated thought to the machines, and how that's going.
> how the AI-animated robots will feel once they're capable of processing those ideas
"Will" and "once" might collapse under the load of baseless speculation here. A sad day for the English language, as I found those words useful and meaningful.
> I look forward to hearing how you've delegated thought to the machines, and how that's going.
We all do. That's what you do whenever you fire up a maps app on your phone to plan or navigate, or when you use car navigation. That's what you do when you let the computer sort a list, or notify you about something. That's literally what using Computer-Aided anything software is, because you task the machine with thinking of and enforcing constraints so you don't have to. That's what you do when you run computer simulations for anything. That's what you do each time you have a computer solve an optimization problem, whether to feed your cat or to feed your city.
Our whole modern world is built on outsourcing thinking to machines at every level.
And on top of that, in the last few years computers (yes, I'm talking about the hated "AI") got better than us at various general-purpose, ill-specified activities, such as talking, writing, understanding what people wrote, poetry, visual arts, and so on.
Because as it turns out, it's much easier for us to build machines that are better than our own brains at computing for any purpose, than it is to build physical bodies that are better than ours. That's both fundamental and practical reality today - and all I'm saying is that this has pretty ironic implications that people haven't grasped yet.
Computing: Performing the instructions they are given.
Thinking: Can be introspective and self-correcting. May include novel ideas.
> Our whole modern world is built on outsourcing thinking to machines at every level.
I don't think they can think. You can't get a picture of a left hand writing, or a clock showing something other than 10:10, from AI. They regurgitate what they are fed and hallucinate instead of admitting lack of ability. This applies to LLMs too, as we all know.
> You can't get a picture of a left hand writing or a clock showing something other than 10:10 from AI.
You as a human have a list of cognitive biases so long you'd get bored reading it.
I'd call current ML "stupid" for different reasons*, but not for this kind of thing: we spot AI's failures easily enough, but only because their failures are different from our own.
Well, sometimes different. Loooooots of humans parrot lines from whatever culture surrounds them, and don't seem to notice they're doing it.
And even then, you're limiting yourself to one subset of what it means to think. AI demonstrably do produce novel results outside their training set; and while I'm aware it may be a superficial similarity, what so-called "reasoning models" produce in their so-called "chain-of-thought transcripts" looks a lot like my own introspection. So you aren't going to convince anyone just by listing "introspection" as if that were an actual answer.
> Computing: Performing the instructions they are given.
> Thinking: Can be introspective, self correcting. May include novel ideas.
LLMs can perform arbitrary instructions given in natural language, which includes instructions to be introspective and self correcting and generate novel ideas. Is it computing or is it thinking? We can judge the degree to which they can do these things, but it's unclear there's a fundamental difference in kind.
(Also obviously thinking is computation - the only alternative would be believing thinking is divine magic that science can't even talk about.)
I'm less interested in the topic of whether LLMs are thinking or parroting, and more in the observation that offloading cognition onto external systems, be they digital, analog, or social, is just something humans naturally do all the time.
Certainly we have machines that can do any number of tasks for us. The problem is deciding which ones to let them.
Delegating to artificial constructs is an old habit, and its effects are more apparent today than ever. It's not the principle I object to but the practice as it stands. Paperclip maximizers are a reality, not a thought experiment.
Computing is what we do when we have a precise algorithm to solve a problem. Thinking is an open question; we don't really know what it is yet. That's the whole problem with letting machines do it. It's not just cleverness but wisdom that counts.
> And no, I'm not joking here, not anymore. The more I think about it, the more I feel we'll eventually have to deal with the problem that machines we build are naturally better at the things we want to be doing, and naturally worse at the things we want them to do for us.
Perhaps, but also "what they are good at" != "what they want to do", for any interpretation of "want" that may or may not anthropomorphise, e.g. I want to be more flirtatious but I was never good at it and now I'm nearly 42.
That said, I think you're underestimating the machines on physicality. Artificial muscle substitutes have beaten humans on raw power since soon after the steam engine, and on fine control ever since precision engineering passed below the thickness of a human hair.
> That said, I think you're underestimating the machines on physicality. Artificial muscle substitutes have beaten humans on raw power since soon after the steam engine, and on fine control ever since precision engineering passed below the thickness of a human hair.
Right. Still, the same can be said about flying machines and birds: our technology outclasses them on any individual factor you can think of, but we still can't beat them on all relevant factors at the same time. We can't build a general-purpose bird-equivalent just yet.
Maybe it's not a fundamental hardship, but merely the economics of the medium - it's much easier and cheaper to iterate on software than on hardware. But then, maybe it is fundamental - the physical world is hard and expensive; computation is cheap and easy. Thinking happens in computational space.
My point wasn't about whether or not robots can eventually be made better than us in both physical and mental aspects - rather, it's that near-term, we'll be dealing with machines that beat us on all cognitive tasks simultaneously, but are nowhere close to us in dealing with the physical world in general. Now, if those compete with us for jobs or a place in society, we get to the situation I was describing.
yes, laundry, folding and ironing clothes, taking out the garbage, so that us humans can then have more free time to contemplate the mysteries of the universe/work harder.
not - take our jobs so we have to keep "reskilling" every 10 years... oh wait, according to Accenture, we'd be un-reskillable after 10 years, so never mind.
The irony is that, at this point, the machines are getting better than us at "contemplating the mysteries of the universe", while manual labor remains our distinct competitive advantage over them.
I.e. literally the opposite of what we wanted to happen.
> the machines are getting better than us at "contemplating the mysteries of the universe"
No they aren’t. Relative to idiots, of which there are many, sure. But for anyone on this board who should be able to distinguish meaningless babble from deep thought, LLMs are not yet doing any heavy lifting.
LLMs can assist great thinkers, like a great lab assistant or analyst. But that’s about the limit right now. (Of course, being a great lab assistant or analyst is beyond many peoples’ capabilities, so the illusion of magic is sustained.)
* I do not in fact love it