I invented Roomba and assure you, robots won’t take over the world (nautil.us)
78 points by dnetesn on May 8, 2020 | 61 comments


There is no point made here that robots won't take over the world; he just belabors the point that it is hard to make robots replicate exactly what humans do.

Automation will eat everything, it's just a matter of time. Even his problem of 'visually identifying weeds and uprooting them' does not come across as a massive problem, but one that can probably be solved with enough time and demand.

Platitudes like these give false hopes to workers who should be training and preparing for the next phase of industry, and IMO serve people like the author, who need complacent workers to give them time to come up with their money-making solutions.


I think this underestimates the incredible engineering of the human body. It runs at 100W, while retaining the ability to execute both very high torque actions and fine motor actions, with self-healing grip, and an ability to engage increasing numbers of muscles and the weight of the machine itself to accomplish tasks.

One thing I didn't quite realize needed teaching until I taught my daughter: how do you tell when a screw is going in right? It might have resistance because of gunk or rust, or it might be misaligned. Determining the right action is incredibly sophisticated: the difference between the slight easing of resistance with old oil and the springiness when the threads are pushing against each other and deforming. In response you might draw out and try again, or push harder, all while still trying to feel out what's really happening. Automation is great for screws in factories.

We don't have anything on the horizon that has the physical flexibility and power of a human. To the degree we can reconfigure problems to match a robot's capabilities, then yes, automation may be inevitable. A lot of problems aren't like that.


Along these lines, I still think Marshall Brain is likely to be proved correct: in his novel Manna [1], he predicted automation would instead replace middle management, and then direct unskilled labor.

[1] http://marshallbrain.com/manna.htm


Manna is a very underrated story. And it's already happening to some degree: Uber/Lyft drivers don't have managers; they are being directed by a phone app.


Taxi apps don't replace middle managers, they replace dispatchers.


dispatchers are the middle managers.

but i'm skeptical; it would be more convincing if GP/GGP provided some of the rationale for why middle managers would be automated away rather than simply referencing a book.


Ha, to encourage more reading, naturally! But I'll bite, and flesh out the premise a little more - like the Uber example, in Manna the story begins with software being used to micromanage the employees in fast food joints, through their headsets. It tracks their locations, and walks them through the steps of doing any tasks it predicts will require attention wherever they happen to be - in the restroom? Does the trash need to be emptied y/n? If yes, step one, remove bag and tie closed... etc. Making robots capable of doing all those different tasks would be hard, but making a system to tell low-paid humans to do them is not so difficult.
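To make the premise concrete, here's a toy sketch of the kind of rule-driven task dispatcher the novel describes. Everything here (the task names, the rule table, the `next_instructions` helper) is invented for illustration; it's not from the book or any real system.

```python
# Toy sketch of a Manna-style task dispatcher: software that walks a
# low-paid worker through micro-steps based on location and sensor flags.
# All rules and task names are made up for illustration.

RULES = [
    # (worker location, condition flag) -> ordered list of spoken steps
    (("restroom", "trash_full"), ["Remove bag and tie closed.",
                                  "Insert new bag.",
                                  "Carry bag to dumpster."]),
    (("counter", "line_forming"), ["Open register two.",
                                   "Greet next customer."]),
]

def next_instructions(location, flags):
    """Return the step list for the first rule matching the worker's state."""
    for (loc, flag), steps in RULES:
        if loc == location and flag in flags:
            return steps
    return ["Continue current task."]

steps = next_instructions("restroom", {"trash_full"})
```

The point of the sketch is how little machinery is needed: a lookup table and a loop, with the expensive sensing and actuation outsourced to the human wearing the headset.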


ok, i'm still skeptical then. dystopian novels typically feature (overt or covert) coercion centrally because it's required to force people to do things they'd not naturally do (the dystopian stuff).

those kinds of jobs would have to be coercive in nature (desperation qualifies), and would be beyond the threshold of autonomy that most people will tolerate. most jobs, even those rigidly regulated, have enough latitude that workers can find some level of esteem in it, enough to create the intrinsic motivation to keep doing it.

people wouldn't tolerate such jobs without extreme coercion, implying nothing less than a collapse of our democratic republic would be needed to achieve it.


Wouldn’t a robot be capable of seeing and measuring whether the screw is misaligned? There is more than one way to ensure that a screw goes in correctly. You are compensating for your inaccuracy by using your sense of touch and kinaesthetic sense.


Setting aside the fact that you just brought up another platitude, actual data suggests that this is wrong. Automation is slow and has been getting slower, as is clearly visible in labour productivity and employment statistics, and there's really no indication of a large change in that trend.

Billions of dollars are invested annually in technologies with marginal productivity improvements, and the returns appear to be diminishing further.

If there were any indication that automation would 'eat everything', unemployment would be high (corona pandemic aside) and labour productivity would be growing fast, neither of which is happening.

Robert Gordon gives an expansive, evidence-based account of American economic development in The Rise and Fall of American Growth: The U.S. Standard of Living Since the Civil War, and there's virtually no evidence for the grandiose claims made by the tech industry that we're living in some sort of era of rapid change. That, for the most part, ended in the late 70s.


FWIW, automation is certainly eating jobs in the back office. I push a button and my pivot table gets updated in a few seconds whereas that used to take somebody a couple hours to assemble. Why do I need a second staff accountant, again?

I think it's inevitable that the "junior" office jobs (where many get their start) will be/are replaced by scripts & API's.


that's just productivity enhancement. in most cases, junior positions aren't disappearing, but rather being redefined to do things that were previously too difficult or time-consuming and so were skipped.

like fraud detection. before, you just accepted a 3% fraud rate as a cost of doing business and left it at that. now you get that down to 0.5% because the tools let you do it in the same amount of time, and the owners want that sweet, sweet cash. you still want a second accountant to double-check the first, so you expand the work to fulfill that need. it can still be worth it, even so.


Automation will not eat everything in time, but from my work on unmanned shipping, I'm pretty sure that autonomy + automation will.

Automation - handles deterministic situations.

Autonomy - handles uncertain situations.

In my industry (ship design, construction and operation), based on what I'm seeing and have seen evolve over the past 20 years, it feels likely that this paradigm shift to a majority of ships being autonomous will probably happen in the next 50 years. This is my gut feel, not a scientific conclusion, so take that for what it's worth. I also don't see why other industries will be any different. For my money, reinforcement learning (in some form) appears to be the key enabler.


Totally off-topic, but how viable would it be to build a slim, circle-shaped ship? Is it absurd? Could it be submarine-like, going underwater in bad sea weather to avoid high waves impacting it?


tl;dr - yes, you can make a surface ship submersible, but it's typically done at the expense of increased construction cost and reduced dynamic performance when operating in either domain (i.e. surface or submerged).

I'm not sure what you mean by 'circular', but I think it's safe to assume you mean a circular hull cross section vice a round circular deck (like a floating helo pad or SpaceX launch barge).

First, there have been some examples of surface ships that transition to submarines (e.g. [1]). China is apparently working on a large surface ship/submarine for their Navy.

Surface and submersible vessels have very different hydrodynamic and hydrostatic properties and forces to contend with, which drive them to have different shapes. I actually did work on a recent project that explored this topic a bit. Generally, for any vessel, the shape depends on what you want the ship to do and hull form/type selection is very much a science-backed art form of Naval Architects.

You can certainly build a ship to fully submerge; the question is, particularly at larger scales, is it worth the added cost, risk, and hydrodynamic compromise, or are you better off just building two different vessels that are specifically designed to operate in one domain (i.e. on the surface or submerged)? That answer is highly dependent on what you intend to use the vessel for. It may be worth it for your application, but in general the answer has been that it's more cost-effective to just optimize vessel design for one operational domain rather than two.

Regarding the hydrodynamic trade-offs: while both ships and subs strive to minimize total hydro resistance (drag), hydro resistance has subcomponents that affect vessels differently when operating on the surface versus submerged. Surface ship shapes strive to be hydrodynamically 'optimized' to minimize form, skin, wave (critically), and even air resistance, whereas submarines aren't generally subject to wave action or wind and thus typically focus on minimizing the dominant form and skin drag forces. Surface ships are generally more power efficient than submarines at equal displacement and speed through water.

If you want to make a ship submerge, you also need to add significant structure to withstand hydrostatic forces at depth. Submarines are inherently structurally sound enough to operate at the surface, but perform poorly hydrodynamically there because of their shape.

Surface ships and submarine shapes primarily differ in structure due to the significantly different pressure forces they need to contend with. Submarines have a circular cross section because, next to a sphere, cylindrical hulls are the most structurally stable pressure hulls. Surface ships can actually have relatively structurally 'weak' main decks, operating almost like floating buckets that are mostly designed to stay right-side up. The weaker main deck makes the ships cheaper to build, but probably more critically, allows easy access to whatever payloads the vessel is carrying.

1. https://www.thedrive.com/the-war-zone/9726/the-usmc-is-inter...


I'm not sure productivity is measured correctly.

In general it's measured something like increase in $/employee.

But let's say you're a commercial printer. The industry is moving to new printers that print double the amount, per employee hour.

If prices stayed the same, you'd see double productivity.

But because every company has the new, fastest machine, prices greatly reduce, and stop reflecting the fact that a single employee can now do 2x the work.
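To put toy numbers on this printer hypothetical (all figures invented for illustration):

```python
# Toy numbers showing why measured $/employee productivity can stay flat
# even when physical output per employee doubles: if every printer adopts
# the faster machine, competition can halve the price per page.

pages_per_hour_old, price_old = 1000, 0.10   # hypothetical figures
pages_per_hour_new, price_new = 2000, 0.05   # 2x output, price halved

revenue_old = pages_per_hour_old * price_old  # $ per employee-hour, before
revenue_new = pages_per_hour_new * price_new  # $ per employee-hour, after

physical_gain = pages_per_hour_new / pages_per_hour_old  # 2.0
measured_gain = revenue_new / revenue_old                # 1.0
```

Physical productivity doubled, but the dollar-denominated measure is unchanged. (Whether real statistics actually miss this depends on how they deflate prices, which the reply below this comment gets into.)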


> If prices stayed the same, you'd see double productivity. But because every company has the new, fastest machine, prices greatly reduce, and stop reflecting the fact that a single employee can now do 2x the work.

This phenomenon (price deflation) can exist, but we can actually measure deflation/inflation, so there's no need for a hypothetical; and there is no widespread deflation, or, put differently, no broad increase in consumer purchasing power due to increased production.

There are actually very few sectors where this dynamic exists (think the cost of a GB of data); healthcare, construction, education, and infrastructure have not been automated, and hence aren't any cheaper.

In fact, there's actually a decline in productivity in US construction; as decades of erosion and the retirement of experienced workers leave the industry in bad shape, the US has started to lose manufacturing knowledge.

Whatsapp messages and tweets may have gotten a lot cheaper, but it's worth remembering that 'tech' is only 10% of the US economy, employing 3-6 million people depending on how you count. The overwhelming majority of the physical economy and service sector is still practically untouched by it, despite many claims to the contrary.

You can just physically walk into a retail store, the most common place of employment around the world. It's got a Square Stand on the desk, maybe, but otherwise not a lot has changed. Same thing in the classroom or a hospital ward.


// There's actually very limited sectors where this dynamic exists (think the costs of a GB of data), but healthcare, construction, education, infrastructure all have not been automated, and hence aren't any cheaper.

That's true. But it's a bit more complicated.

In construction for example, there's the new prefab sector, which uses a lot of robots, growing nicely. It's a bit challenging to open such expensive factories with boom/bust cycles in construction, but they might figure it out.

As for infrastructure, we're starting to see companies working on robotic heavy equipment.

As for healthcare, that's an industry I don't understand. It's a very innovative industry, and some of the treatments and medtech should have created efficiency gains while not looking like robots. For example: curing hepatitis, or CNC machining for dental crowns. But we don't seem to measure that efficiency. Weird.

As for retail, with e-commerce, some of our transactions have surely become more automated. And we chose to convert some of the efficiency gains into comfort, to receive things at our door. But delivery robots might automate that part too.


> Automation will eat everything, it's just a matter of time

Yes and no; people have been predicting that for 70 years, if not more. It seems that once you've automated all the easy things, it gets exponentially harder and more expensive to pursue automation.

> does not come across as a massive problem

Sounds like Musk telling you "self driving cars will be there in 6 months" every 6 months since 2013. Nothing is as easy as you think it is, even more so if you have 0 experience in the domain.

And then you have to solve all the societal issues. If everything truly gets automated at some point, who will benefit from it? How do you remunerate all the people who used to have low-skill jobs? &c.

"Automation, which is both the most advanced sector of modern industry and the epitome of its practice, obliges the commodity system to resolve the following contradiction: The technological developments that objectively tend to eliminate work must at the same time preserve labor as a commodity, because labor is the only creator of commodities. The only way to prevent automation (or any other less extreme method of increasing labor productivity) from reducing society’s total necessary labor time is to create new jobs. To this end the reserve army of the unemployed is enlisted into the tertiary or “service” sector, reinforcing the troops responsible for distributing and glorifying the latest commodities; and in this it is serving a real need, in the sense that increasingly extensive campaigns are necessary to convince people to buy increasingly unnecessary commodities."

- Guy Debord, 1967


even in software development, even simple automation is very, very difficult. to automate a thing in the scenario that everything works is easy. to automate a thing in the more likely scenarios of everything not working is really challenging.


> Automation will eat everything, it's just a matter of time.

> Platitudes like these give false hopes to workers who should be training and preparing for the next phase of industry

These seem incompatible to me. If automation will eat "everything," what exactly is the next phase of industry that we should all be preparing for? What's left after everything has been eaten?

My resolution to this conundrum is: automation will not actually eat everything. It's not like automation is a new concept; we started automating things over 100 years ago, and contrary to fears back then and along the way, automation drove tremendous job creation. There are far more human beings with well-paying jobs (even in May 2020) than there were 100 years ago.


Hah. Visually identifying weeds and uprooting them is exactly what I’m working on. I’m designing a farming robot for work. Funny that he says it’s a massive problem. At least in a commercial farming context it seems pretty tractable to me.


Any idea how effective farm.bot's weeder is? https://genesis.farm.bot/docs/weeder


I’m not sure! We’ve thought about getting a farmbot but haven’t done it.


There are a lot of jobs that are suitable for people with disabilities or without a college education or with other issues -- those jobs are being taken over by "software robots". My employer and lots of other similar companies are using them to automate stuff so that those people are no longer needed.

Robots are not taking over your world -- just some jobs which mean the world to some people who have those jobs!


Why do some things get "invented" while others are simply developed? Roomba is based on other ideas and concepts.

Even according to the timeline in the article itself, the Swedish Electrolux Trilobite [1] had already been announced in a press release when iRobot had their first proof of concept.

[1] https://en.wikipedia.org/wiki/Electrolux_Trilobite


Before spreadsheet software (Excel, etc) showed up, some people were using actual physical spreadsheets (very large pieces of paper). Spreadsheet software appeared to make that job obsolete. And indeed it did. Nobody uses the paper spreadsheets anymore. But how many people use the computer spreadsheets? Hundreds of millions?

It's going to be the same with robots. Maybe actual lathes will disappear, but there will be computerized things that will perform that job, and people manning the computers. Some of the jobs will be automated (think Excel macros), but many will require human interaction and iterations.


I've been trying to make Buckbee's Law happen for a while:

"The more a robot looks like a robot the less useful it is."

Take fast food as an example: what's really going to change things are automated systems that reduce the number of people needed to run the place. Look at things like touchscreen ordering systems that cut the number of people talking to you. That's not a "robot" but it is automation of a sort.


These aren't really much of an improvement in efficiency, though. Self-checkout kiosks for stores have been around since at least the early 2000s, and they're still much slower than minimum-wage cashiers with a little experience (maybe Amazon will finally change this, but the tech my local grocery has is definitely as bad as the kiosk I worked alongside in 2003). It takes me much longer to order using a kiosk at a McDonald's than it takes me to tell a cashier what I want. I remember using a touch-screen kiosk at a Subway back in 2005, and my local Subway 15 years later still doesn't have them. This supposed surge in productivity just isn't happening, despite what conversations on Hacker News (and before that, Digg, and before that, Slashdot) keep insisting.


There are other types of efficiency than time.

At my supermarket, they replaced 2 regular checkouts with 8 self-checkouts in the same floor space. During rush hour, this greatly increases throughput despite actual scanning time perhaps being slower.

At McDonald's, it means they can hire fewer staff, which means a cost efficiency.
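A back-of-envelope sketch of the supermarket throughput point, with invented numbers:

```python
# Even if each self-checkout transaction is slower, packing more lanes
# into the same floor space can raise total customers served per hour.
# All figures here are made up for illustration.

def customers_per_hour(lanes, minutes_per_customer):
    """Peak throughput assuming every lane stays busy (i.e. rush hour)."""
    return lanes * 60 / minutes_per_customer

staffed = customers_per_hour(lanes=2, minutes_per_customer=2)    # 60.0/hr
selfcheck = customers_per_hour(lanes=8, minutes_per_customer=3)  # 160.0/hr
```

So a 50% slower transaction still yields well over double the rush-hour throughput when you can fit 8 lanes where 2 used to be, which is exactly the trade described above.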


The kiosk isn't the productivity increase. The app I order from 10 minutes before I get there, so the food's already finished when I arrive, is.


When someone talks about robots destroying mankind, I start picturing scenes from movies or TV series (e.g. Terminator, I, Robot, Westworld, and so on) where human-like robots try to destroy mankind.

Robots taking over the world may not be imminent. However, greedy people taking over the world with the help of robots and AI has already been facilitated for quite some time. Governments acquiring technology to hush opposition and free speech is a common phenomenon around the world. The usage of data along with AI to pinpoint, target, and destroy enemies has been going on for over a decade.

It is the usage of data and AI by the greedy we need to be more concerned about, not silly imitations of life that barely perceive the world.


I'm not worried about a grad-school project picking up a gun and using it against me.

I am worried about something like this, which, while a parody, is not far off:

https://www.popularmechanics.com/technology/robots/a29610393...


Something like this terrifies me- https://www.youtube.com/watch?v=TlO2gcs1YvM

Tl;dw: mini weaponized AI drones get loose and mass havoc ensues. I'm sure the feasibility behind these isn't close, but still, it's an interesting design fiction.


*shiver* Very well done short film. A chilling vision of the (possibly near) future.


Roomba is a very old concept, missing almost everything that makes modern robots interesting. No vision; not much memory; no model of the environment; no knowledge of physics. Just a tuned mechanism and a simple algorithm to wander around and barely pick up some dirt.

Don't get me wrong; it's terribly useful. I own a knockoff. But it's not a model of what robots could be today. Think of facial-recognition drones delivering voice messages to people in a crowd. Think auto-drive cars and the terabytes of data and kilowatts of energy they command. Think of the Netflix algorithm for suggesting movies. Light-years beyond Roomba.

Yes, automation will eat our lunch. Because automation is eating our lunch. Factories, city planning, scheduling, diagnosing - all sorts of automated processes are in industry and consumer space right now. Just ask Siri about it if you still have doubts.


I wouldn't call Roomba old-fashioned. It's based on emergent behavior instead of forecasting and complex rules.

I haven't touched this area since 2016, but I remember coding swarms of robots that could manage to do very complex things from simple rules. In addition, they were quite robust, as they made very few assumptions about their environment / problem space.
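To give a flavor of what "complex things from simple rules" means, here's a minimal toy sketch (my own illustration, not any real swarm codebase): each agent only ever moves halfway toward the mean of its two nearest neighbors, with no global knowledge, yet the whole group converges into a cluster.

```python
# Emergent aggregation from a purely local rule. No agent knows the
# global goal or the positions of the whole swarm; clustering emerges.

def step(positions):
    new = []
    for p in positions:
        # Two nearest neighbors (sorted[0] is p itself, at distance 0).
        neighbors = sorted(positions, key=lambda q: abs(q - p))[1:3]
        target = sum(neighbors) / len(neighbors)
        new.append(p + 0.5 * (target - p))  # move halfway toward them
    return new

agents = [0.0, 1.0, 8.0, 9.0, 20.0]  # positions on a line
for _ in range(50):
    agents = step(agents)

spread = max(agents) - min(agents)  # shrinks from 20.0 toward ~0
```

The robustness point shows up here too: the rule never references walls, maps, or agent counts, so it behaves the same in almost any environment.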


Actually, Roombas have had vision and have modeled/mapped their environment for a while already.


Ok. But they have tiny little processors (5?) that each do a tiny little task with a smidgen of RAM. Is the 'vision' feature real, or just a light intensity detector? Can you get a picture out of it? I'm dubious.

As for the model/map, how does it sense location? Dead reckoning? Not a very significant feature if it can't find itself on the map, nor use the map to choose a route. Which I doubt it can (from videos of the Roomba in action).


It depends on the model, but I think that your impression of Roomba, whether from experience or from the technology the author of this article presented, is really out of date. Even past models have Visual Simultaneous Localization and Mapping (VSLAM) tech that is far, far beyond dead reckoning. The even more recent version (not sure if there's been a release since) also has persistent maps, so it actually can find itself on the map and choose routes[2]. I'm not sure what exactly the processors are, but considering the supply-chain investments that have gone into mobile processor development for smartphones, I think it's likely unfair to characterize them as "tiny little processors".

Disclosure: I did an internship at iRobot a few years ago, although the overwhelming majority of my work wasn't related to Roomba products.

[1] https://spectrum.ieee.org/automaton/robotics/home-robots/iro... [2] https://spectrum.ieee.org/automaton/robotics/home-robots/new...


That's a big name. I'm suspicious it means "light intensity map" or something far less interesting than it sounds. Turn a light on or off, and then what? My Shark (OK, not a Roomba) manual says to leave the lights on. I suspect that's because it's navigating by light intensity (light fixture bright spots).


I don't like to be dismissive, but your suspicion isn't well-founded and is very likely based on little or no background research on the topic. If you had done some, you'd have found that SLAM, and VSLAM as a subset, is a well-defined category of computer vision with a strong mathematical background in optical flow and pattern matching on images[0]. You'd have found a number of iRobot patents still in their enforceable period directly referencing the use of cameras in those algorithms[1][2]. You'd have found marketing materials describing how those systems work that directly correlate with the technology described in those patents[3]. That's based on no special previous knowledge of those materials on my part, just the top few results from a couple minutes of DDG and Google Scholar searches.

Implying that it is just "a big name" that means "something far less interesting than it sounds" is an uninformed opinion. Plus, even [0], which is 15 years old, directly refutes your comments about the effect of lighting. Lighting is helpful because it illuminates the scene that a camera is seeing, in the same way that it's easier to go on a night hike during a full moon. We don't reference our position off the moon; it just makes it easier to see where we're walking.

[0] https://ieeexplore.ieee.org/document/1570091

[1] https://patents.google.com/patent/US9286810B2/en

[2] https://patents.google.com/patent/US9910444B2/en

[3] https://www.irobot.com/roomba/s-series


The processors in a Roomba (when I went to the plant a couple years ago) were all around $1. No room in a buck chip for that. OK, I see it uses DSPs, which is a budget version of image processing. It looks for moving marks on the walls to gauge distance. Kind of a cut-rate lidar. About what I figured.


Which plant? Which Roomba?

According to the materials released by Qualcomm in 2018 (by which point the Roomba i7 would already have had lines in that plant), the processor used is an APQ8009, a quad-core ARM processor commonly used in smartphones[0]. It's tough to find prices because the contracts are negotiated individually, but I'd put money on those chips costing much more than $1.00/piece.

Comparing VSLAM to lidar in any way besides saying that they technically can both be used to measure movement is also incorrect, and to actually call it a "kind of a cut-rate lidar" can only be due to a fundamental misunderstanding of VSLAM, lidar, or both.

Lidar measures distance along a single axis, with no additional information provided. You get movement by measuring changes in position by directly updating the distance value and inferring position change. It can be made robust by adding multiple lidar sensors, but is subject to weird edge cases with reflective materials or sunlight.

VSLAM can only measure change in vehicle position through the estimated position changes of significant image features through time, and provides no information about the vehicle's distance to its surroundings while the robot is standing still. You'd require stereo vision for that, and that does require more processing power, typically including a GPU like the one in the NVIDIA Jetsons. You'd also typically use sensor fusion with an IMU, odometry, etc. to help with that.

Without stereo vision, you need motion to estimate position, which fundamentally makes VSLAM different from lidar, and makes your claim that it is "kind of a cut-rate lidar" completely false.
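To illustrate the "motion from features" idea in the simplest possible terms, here's a toy sketch (my own didactic example, definitely not iRobot's code): estimate image-plane motion as the average displacement of tracked feature points between two frames. Real VSLAM does far more (outlier rejection, full 6-DOF pose estimation, map building), but the core input, feature correspondences over time, is the same.

```python
# Toy visual-odometry step: given the same feature points observed in two
# consecutive camera frames, estimate the 2D image-plane motion as the
# mean displacement. A single still frame would give us nothing.

def estimate_translation(features_t0, features_t1):
    """Mean (dx, dy) displacement of matched feature points."""
    n = len(features_t0)
    dx = sum(b[0] - a[0] for a, b in zip(features_t0, features_t1)) / n
    dy = sum(b[1] - a[1] for a, b in zip(features_t0, features_t1)) / n
    return dx, dy

# Corner-like features in frame 0, then again after the robot moves:
frame0 = [(10.0, 5.0), (40.0, 22.0), (75.0, 60.0)]
frame1 = [(13.0, 4.0), (43.0, 21.0), (78.0, 59.0)]  # all shifted (+3, -1)

motion = estimate_translation(frame0, frame1)  # (3.0, -1.0)
```

Note that the estimate only exists because the robot moved between frames, which is exactly the "without motion, no position estimate" property described above.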

I'm not sure which DSP acronym you're referring to as a budget version of image processing, but if it has to do with digital signal processing, then yeah, of course. That's kind of the definition.

[0] https://www.qualcomm.com/news/onq/2018/12/17/help-qualcomm-t...


Jesus that's a world ground-speed record for pedantry right there.

Clearly, VSLAM is being touted as 'vision' when in fact it's far less than that, less than LIDAR even, which was my point. I feel that point was completely understood, yet a silly attempt to shame me was still spewed out, saying "but it's less than LIDAR," which was true, was what I wrote, and is beside the point.


Oh, neat, looks like this debate has reached ad hominem attacks.

VSLAM is computer vision. Full stop. Here's an article about it in a Springer Computer Vision guide [0]. I don't know what your personal definition requires for that to be the case, but if it doesn't include point recognition and persistent tracking by an integrated RGB camera, then I think it's unreasonably out of line with that of the rest of the field.

Which, just so we're clear, is the eighth reputable primary source I've presented you with directly refuting claims you've made.

Yes, I understood your point that you feel VSLAM is less than lidar. My point is that it's like stating that the moon is purple. Comprehensible, but wrong. And not in a "beside the point" way, but in a way that directly laid out why a core piece of your argument regarding how you try to compare VSLAM with lidar is invalid. Pretty sure that's not pedantry.

[0] https://link.springer.com/referenceworkentry/10.1007%2F978-0...


OK, when Roomba advertises 'computer vision', they're conjuring up images of recognizing rooms, finding furniture and pets, maybe building a model of your house. But what is that DSP delivering? Maybe a little feature extraction, just enough to calculate the relative motion/distance of a wall or obstacle. Calculated and forgotten.

In the scheme of computer-vision sophistication, it's right down there with a fly's eye. That was my point, and I believe it was well understood. The pedantry is insisting it is computer vision, full stop. Well, yes, about 1% of what computer vision could be. Niggling over that point is what's annoying, instead of a good-faith discussion of how primitive this is, how tiny a feature it is, how minuscule the benefit it brings the consumer. How it's pasted on the Roomba feature list because, hey, cameras are cheap and we can sell that feature for maybe $100 because the consumer doesn't know.


Except it does do that. It's called persistent maps, and it's how the feature where you can say "Go vacuum the kitchen" works[0].

Also, I think you're underestimating both how valuable and complicated VSLAM is. Sure, it's not a multi-billion dollar state-of-the-art composite deep learning model trained on thousands of hours of data running on custom silicon, but it's hardly "This area is dark, and there's a bright thing over there!" Even if VSLAM only included visual odometry, that would still be a major improvement over other options in autonomous navigation, which is absolutely key in other domains that you seem to consider state-of-the-art. It's the same technology, and investments therein that make things like the RangerBot possible[1]. In a competitive market where BOM cost is a serious consideration, if you could get better results by not using VSLAM and relying only on something like laser odometry instead, that's what would be done.

Also, I'm honestly a little annoyed at your implication that I'm the one not having a good-faith discussion, while you have blatantly and repeatedly made false statements that could have been checked with your own research, as is notable from your failure to provide any actual sources, instead making tangential anecdotal references to back up your statements. Heck, let's not forget your initial claim wasn't "limited vision" (which I dispute anyway), but "no vision" and "no model of their environment", which again simply isn't true.

[0] https://store.irobot.com/default/roomba-vacuuming-robot-vacu...

[1] https://good-design.org/projects/rangerbot/


Roomba usually runs during the day when the lights are off and all the windows are darkened. If I open the windows or turn on the lights, there is no difference. Also, you can draw a "no-go" area on the map (that the Roomba makes), and it will obey it.


So OK, it's a good criterion for robotics companies: do a valuable task, do it today, do it for less. And for companies following that (and, I'll add, aiming for B2B), even when they failed, we've seen their competitors and industry succeed.

You can see it from his list of failed companies, where only Unbounded Robotics (smart robot hand), Rethink (collaborative robotics), Blue Workforce (collaborative), and Aria Insights (drones) have seen their work continue successfully in their industry.

But yes, robots for the home are much harder.

So maybe robots won't take over the world, but they might take over businesses.


I think it's a bit easier for businesses to reconfigure to change the essential job and make it easier for robots. E.g., a business can put guide wires in the concrete. I can do the same in my yard... but I'm not going to do it in my house.

Perhaps this is because the business is already a slave to its function. The exact separation and makeup of businesses reflects efficient ways to split up problems and integrate with other businesses. It's possible to refactor that given a robot that can perform a certain kind of action, even one that is not directly equivalent to any one person's job right now.

But my home is my home, it serves me, I do not serve it. For instance, one could imagine redesigning clothing to make robotic cleaning and folding of clothing feasible. I'm not going to do that! But redesigning PPE for that process in a hospital is not at all unreasonable.


It doesn't answer its own title. Why won't they take over? I thought he was going to discuss the exponentially increasing difficulty of AI as it approaches human parity.


If I want to know what robotics will look like and the future of deep reinforcement learning, I'd rather ask Covariant.ai than a guy whose expertise is little vacuums whose AI is "while true, spin in a circle and go forward at random".
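(That caricatured control loop is, for what it's worth, easy to simulate. A toy grid-world sketch with made-up parameters, showing that blind bump-and-turn does eventually cover most of a room, just inefficiently:)

```python
import random

def random_walk_coverage(size=10, steps=2000, turn_prob=0.2, seed=0):
    """Caricature of bump-and-turn cleaning on a size x size grid:
    mostly drive straight, but occasionally (or on hitting a wall)
    spin to a random new heading. Returns fraction of cells visited."""
    rng = random.Random(seed)
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x = y = size // 2
    dx, dy = rng.choice(dirs)
    visited = {(x, y)}
    for _ in range(steps):
        if rng.random() < turn_prob:
            dx, dy = rng.choice(dirs)   # spontaneous "spin in a circle"
        nx, ny = x + dx, y + dy
        if 0 <= nx < size and 0 <= ny < size:
            x, y = nx, ny
            visited.add((x, y))
        else:
            dx, dy = rng.choice(dirs)   # bumped a wall: turn randomly
    return len(visited) / (size * size)

coverage = random_walk_coverage()  # well above half the room, nowhere near optimal
```

Note that 2000 steps to mostly cover 100 cells is the point: a systematic sweep would need ~100.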


Yeesh, gwern. Usually I enjoy your comments, but this one seems a bit unfair. Of course if you want to know the long-term future of robotics, you should look to sci-fi authors and research prototypes. But if you want to know the short-term future (<10 years), you should listen to people with actual experience building actual products, even if you don't see those products as impressive from a research perspective.

I think the article provides an illustrative example of the different factors and complications that make it so difficult to go from a research prototype to an actual consumer product (a process which seems to take about 20 years on average). As usual, the lackadaisical improvement of battery technology is one of the central bottlenecks.

Of course, if your main interest is industrial robotics, where DRL approaches are more likely to become mainstream in the near-term, insights related to consumer products might not be as relevant. But note that industrial improvements, while important, will be mostly invisible to consumers except in terms of prices, unemployment and increased customization options for products.

Even then, the article can provide some perspective. Our processes (whether assembly lines or household vacuuming) are already highly optimized for a particular way of doing things, and researchers in particular tend to underestimate the amount of effort it takes to change systems to accommodate a slightly different method. So any new system not only has to be competitive with the existing one in just about every parameter, it has to justify the often huge costs of small alterations to the process. Thus, even in industry, getting to mainstream adoption of RL will likely take quite a while.


Current Roombas are a little more fancy than that.


But not much. You can hardly compare anything Roomba has ever done to even, say, Boston Dynamics.


The difference is that Roombas are used by millions of people around the world, whilst Boston Dynamics robots, impressive as they are, are currently used by... nobody?

Anyway, it's an apples-to-oranges comparison. Boston Dynamics is building incredibly expensive and sophisticated AI systems that will be of interest first to the military, who have very deep pockets and can afford to spend millions if it's deemed a good use of the money.

Roomba is building stuff that can be bought by the average person on the street.


That definitely clears up a long-standing question I've had since a computational geometry lesson: how does a Roomba know when it is done cleaning a room? It seems that the simple answer is that it doesn't.
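(To be fair, once a robot does maintain a map, as the mapping Roombas mentioned elsewhere in this thread do, the "am I done?" predicate is trivial; it's the mapless case that has no answer beyond a timer or a battery threshold. A minimal sketch of the map-based check:)

```python
def cleaning_done(free_cells, visited_cells):
    """With a map, 'done' means every known free cell has been visited.
    Without a map, no such predicate exists, so early robots stopped on
    a timer or on battery level instead."""
    return free_cells <= visited_cells  # set-subset test

# A 3x3 room modeled as a set of free grid cells.
room = {(x, y) for x in range(3) for y in range(3)}
print(cleaning_done(room, {(0, 0), (1, 1)}))  # False: most cells unvisited
print(cleaning_done(room, room))              # True once everything is seen
```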


Yes, you're right, they can't even do this.


[flagged]


Because designing a reliable product that's used by millions is so easy compared to duct-taping some research contraption together and cherry-picking to get your PhD.

What's with all the disrespect? I think people are reacting to the link-bait title, not to what was actually written. It's 2020. Realize that the title will be exaggerated and that responding to it is just attacking a straw man.


Oh c'mon... a Roomba is a PCA with a motor, basic sensors, and whatnot. See the comments above: nobody thinks the Roomba confers distinction on the hardware or software side. So now what? We're gonna skip ten orders of magnitude from simple machines to talking about "I, Robot" material? No. It's used by millions of people for good old-fashioned manufacturing reasons, and because it's cheap. That's it.



