If FSD drivers were having to constantly intervene because the car couldn’t accurately map obstacles, we’d be hearing a lot more about it. I drive with FSD all the time - I could give you a list of things it needs to improve on, but not a single one has anything to do with its accuracy of understanding its surroundings.
>not a single one has anything to do with its accuracy of understanding its surroundings.
This has been my gripe for a long time. I feel like many in tech have conflated two problems. With current software, the problem of perception (i.e., "understanding its surroundings") is largely solved*, but that shouldn't be conflated with the much more difficult problem of actually driving (planning and decision-making).
*To be sure, there have been perception failures; the 2018 Uber fatality in Tempe, AZ is a glaring example.
This exactly. Reading the comments and seeing the huge gap between the public perception of FSD and the reality is eye-opening. There are a lot of armchair experts here who wouldn’t be caught dead driving a Tesla but are so confident in their understanding of its strengths, its weaknesses, and the reasons behind them.
I do see stories fairly often about FSD seemingly trying to drive into obstacles. It’s true that it sees most obstacles, but "most" is not good enough for driving.
Accuracy in perceiving its surroundings is absolutely something it could improve on. Adding a different modality (like lidar) would be like adding another sense: seeing an 18-wheeler trailer without underride guards would be easier with a second, independent signal. It makes the intelligence part easier because the algorithm can be more confident in its interpretation of the environment.
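A back-of-the-envelope sketch of why an independent second sense helps (all numbers are hypothetical, `fused_miss_rate` is my own illustration and not anything from a real perception stack, and it leans on the assumption that the sensors fail independently):

```python
# Minimal sketch of why an independent second modality helps.
# All numbers are made up, and "independent failure modes" is a
# strong assumption -- this is not Tesla's (or anyone's) actual stack.

def fused_miss_rate(miss_rates):
    """Probability that *every* sensor misses an obstacle,
    assuming the sensors fail independently."""
    p = 1.0
    for rate in miss_rates:
        p *= rate
    return p

camera_miss = 0.01  # hypothetical: camera misses 1% of hard cases
lidar_miss = 0.01   # hypothetical: lidar misses 1% of the same cases

print(fused_miss_rate([camera_miss]))              # 0.01   -> 1 in 100
print(fused_miss_rate([camera_miss, lidar_miss]))  # 0.0001 -> 1 in 10,000
```

The independence assumption is doing the work here: to the extent lidar's failure cases (e.g., heavy rain) barely overlap with a camera's (e.g., a white trailer against a bright sky), the fused miss rate drops multiplicatively rather than marginally.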