The only reason we don't hear non-stop about destruction is that you're required to keep both hands on the wheel/yoke at ALL times while FSD is activated. Comparing the current cameras on a Tesla to the human eye is just silly, really. Sure, they cover 360°, which is superior, but in almost every other way the human eye wins: better dynamic range, better contrast, better resolution, and best of all, we have eyelids. If you sat down and thought about it for 10 minutes, you'd realize that gimping your self-driving car by using cameras only is a very, very silly proposition when we have tech like lidar essentially giving the car superpowers.
If FSD drivers had to constantly intervene because the car couldn't accurately map obstacles, we'd be hearing a lot more about it. I drive with FSD all the time - I could give you a list of things it needs to improve on, but not a single one has anything to do with its accuracy of understanding its surroundings.
>not a single one has anything to do with its accuracy of understanding its surroundings.
This has been my gripe for a long time. I feel like many in tech have conflated two problems. With current software, the problem of perception (i.e., "understanding its surroundings") is largely solved*, but it shouldn't be conflated with the much more difficult problem of self-driving.
*for sure, there have been issues with perception. A glaring example is the Uber fatality in AZ.
This exactly. Reading the comments and seeing the huge gap between perception and reality of FSD is eye-opening. There are a lot of armchair experts here who wouldn't be caught dead driving a Tesla but are so confident in their understanding of its strengths, weaknesses, and the underlying reasons.
I do see stories about FSD seemingly trying to drive into obstacles fairly often. It’s true that it does see most obstacles, but most is not good enough for this.
Accuracy about its surroundings is absolutely something it could improve on. Adding a different modality (like lidar) would be like adding another sense. Seeing an 18-wheeler without underride guards would be easier with an additional sense. It makes the intelligence part easier because the algorithm can be more sure about its interpretation of the environment.
And no, a neural net and two cameras are not "just fine". The day cameras are as good as your eyes and your neural net is on the level of human intelligence (AGI), then maybe it would be possible. Until then, you will need to rely on extra hardware to get there.
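To put a number on the "more sure about its interpretation" point, here's a toy sketch of naive Bayesian fusion of two independent detection scores. Everything in it is made up for illustration (the `fuse` helper, the 50/50 prior, the confidence numbers); no real stack works this simply, but the effect is the point: a second independent modality moves the posterior much further than squeezing more confidence out of one sensor.

```python
# Toy sketch: naive Bayesian fusion of detection confidences from two
# sensors. Numbers and the sensor model are made up for illustration only.

def fuse(p_camera: float, p_lidar: float, prior: float = 0.5) -> float:
    """Combine two detection probabilities into one posterior that an
    obstacle is really there, assuming the sensors err independently and
    each score behaves like a posterior under a flat 50/50 prior."""
    odds_prior = prior / (1 - prior)
    lr_camera = p_camera / (1 - p_camera)   # likelihood ratio from camera
    lr_lidar = p_lidar / (1 - p_lidar)      # likelihood ratio from lidar
    odds_post = odds_prior * lr_camera * lr_lidar
    return odds_post / (1 + odds_post)

# Camera alone is unsure about, say, a white trailer against a bright sky,
# but lidar gets a hard geometric return off the same object.
print(fuse(0.60, 0.95))  # ~0.97 -> treat it as an obstacle
print(fuse(0.60, 0.50))  # 0.60  -> camera-only confidence, much shakier
```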
Go check on YouTube how FSD behaves in a city with 1/10th the complexity of SF, where Waymo operates. And remember, the difficulty is in the long tail of unexpected events.
We don't drive just fine; we routinely kill each other, often because of poor visibility or not noticing motion. Backing into a busy street? Bam. Open your door without checking? Biker down. Passing a bus at a crosswalk? Pedestrian dead. Driving at night in heavy fog? Off the cliff.
Even your basic, non-fancy, non-AI car these days has a variety of sonar/radar assists to help out its cameras. Tesla is just being cheap (and getting people killed because of it).