Ok, when Roomba advertises 'computer vision', they're conjuring up images of recognizing rooms, finding furniture and pets, maybe building a model of your house. But what is that DSP actually delivering? Maybe a little feature extraction, just enough to calculate the relative motion/distance of a wall or obstacle. Calculated and forgotten.
In the scheme of computer vision sophistication, it's right down there with a fly's eye. That was my point, and I believe it was well understood. The pedantry of insisting it is computer vision, full stop? Well, yes, about 1% of what computer vision could be. Niggling over that point is what's annoying, instead of a good-faith discussion of how primitive this is, how tiny a feature it is, how minuscule the benefit it brings the consumer. How it's pasted on the Roomba feature list because, hey, cameras are cheap and we can sell that feature for maybe $100 because the consumer doesn't know better.
Except it does do that. It's called persistent maps, and it's what makes the "Go vacuum the kitchen" feature work[0].
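To make that concrete, here's a rough sketch (in Python, with made-up names and a toy map format, emphatically not iRobot's actual data model) of what a persistent map buys you: a labeled floor plan that survives between runs, so a command like "Go vacuum the kitchen" can be resolved to a concrete region instead of "bounce around until the battery dies".

    from dataclasses import dataclass

    @dataclass
    class Room:
        name: str
        x_min: float   # map-frame bounding box in metres, toy representation
        y_min: float
        x_max: float
        y_max: float

    # The "persistent" part: this outlives any single cleaning run, unlike a
    # map that's rebuilt from scratch and thrown away every time.
    saved_map = [
        Room("kitchen", 0.0, 0.0, 4.0, 3.0),
        Room("hallway", 4.0, 0.0, 6.0, 3.0),
    ]

    def region_for_command(room_name: str) -> Room:
        """Resolve a spoken room name to a region the coverage planner can use."""
        for room in saved_map:
            if room.name == room_name.strip().lower():
                return room
        raise ValueError(f"no saved room called {room_name!r}")

    print(region_for_command("Kitchen"))  # Room(name='kitchen', x_min=0.0, ...)

The lookup is trivial; the point is that none of it works unless the robot keeps a model of your house around between jobs.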
Also, I think you're underestimating both how valuable and how complicated VSLAM is. Sure, it's not a multi-billion-dollar, state-of-the-art composite deep learning model trained on thousands of hours of data and running on custom silicon, but it's hardly "this area is dark, and there's a bright thing over there!" Even if VSLAM included nothing but visual odometry, that would still be a major improvement over the other options for autonomous navigation, which is absolutely key in other domains that you seem to consider state-of-the-art. It's the same technology, and the same investments in it, that make things like the RangerBot possible[1]. In a competitive market where BOM cost is a serious consideration, if you could get better results without VSLAM, relying only on something like laser odometry instead, that's what would be done.
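For anyone who thinks visual odometry is trivial, here's roughly what the minimal two-frame version looks like, sketched with OpenCV in Python. This is my own illustration under the assumption of a calibrated camera, not whatever iRobot actually runs on that DSP:

    import cv2
    import numpy as np

    def relative_pose(frame_a, frame_b, K):
        """Estimate rotation R and a unit-length translation direction t
        between two grayscale frames, given camera intrinsics K."""
        orb = cv2.ORB_create(nfeatures=1000)          # cheap binary features
        kp_a, des_a = orb.detectAndCompute(frame_a, None)
        kp_b, des_b = orb.detectAndCompute(frame_b, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

        pts_a = np.float32([kp_a[m.queryIdx].pt for m in matches])
        pts_b = np.float32([kp_b[m.trainIdx].pt for m in matches])

        # RANSAC essential-matrix fit plus cheirality check. Note there is no
        # absolute scale here; that has to come from another sensor (wheel
        # odometry, IMU), which is part of why this alone is "odometry",
        # not full SLAM with loop closure and a persistent map.
        E, inliers = cv2.findEssentialMat(pts_a, pts_b, K,
                                          method=cv2.RANSAC,
                                          prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts_a, pts_b, K, mask=inliers)
        return R, t

And that's only the frame-to-frame increment; stitching those increments into a drift-corrected map you can relocalize against later is the hard part, and it's exactly what the persistent-map feature above depends on.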
Also, I'm honestly a little annoyed at your implication that I'm the one not having a good-faith discussion, while you have blatantly and repeatedly made false statements that a little of your own research would have corrected. Notably, you've failed to provide any actual sources, instead making tangential anecdotal references to try to back up your statements. Heck, let's not forget your initial claim wasn't "limited vision" (which I'd debate anyway), but "no vision" and "no model of their environment", which again simply isn't true.