> I'm not sure a machine would have noticed an odd lump next to a lamppost, with a civic rubbish bin next to it, in the rain.
As opposed to what? You don’t have RADAR, or LIDAR, or echolocation, or any other way to “detect” a person in the road. You have human vision.
This is the idea behind Tesla FSD. If human vision is good enough, computer vision is good enough. We have cameras that can approximate human vision. All the other companies using sensors are doing so as a crutch, because they didn’t have the chops to solve the vision problem.
I really don't need LIDAR. I have roughly 3 billion years of evolution and its end result: eyes, brain, etc.
LIDAR is simply a sensor, and in general a very inferior one compared to my visual system. For starters, it needs something to interpret it. My eyes have a 52-year-old brain stuffed with experience behind them, and it isn't always a hindrance!
I can reason about other people's driving style and react accordingly. I can also describe my actions and reasoning on internet forums.
I really do not want an inferior sensor such as LIDAR inflicted on me, nor would I want it to be the sole source of information for something that conveys me.
Quite. I don't have those sensors, but I do have madly myopic (sort-of-corrected) stereoscopic vision and a brain with 30-odd years' experience in it. Oh, and I have ears. I generally drive with the window down in town; I can look left and listen right.
My office has an entrance onto the A37 in Yeovil (1). Bear in mind we drive on the left in the UK. Picture yourself in a car next to that "for sale" sign, trying to turn right. That white car will be doing 20-40 mph or much more. In the distance is a roundabout (Fiveways), which is large enough to enable a fast exit, and people love to accelerate out of a roundabout. As you can also see, this is on a hill, so the other side is quite fast too, because cars coming down have to brake to keep to the 30 mph speed limit. That's just one scenario.
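To put rough numbers on that scenario: the sight distance and the time the turn takes in the sketch below are my assumptions for illustration, not measurements of that stretch of the A37, but they show how quickly the decision window shrinks as the approaching car's speed goes up.

```python
# Rough arithmetic for the right-turn-onto-the-A37 scenario.
# All figures are assumptions for illustration: the visible gap back
# towards the Fiveways roundabout, the approach speeds, and the time
# the turn itself takes.

MPH_TO_MPS = 0.44704

sight_distance_m = 150          # assumed visible gap to the roundabout exit
turn_time_s = 4.0               # assumed time to pull out and clear the lane

for speed_mph in (20, 30, 40, 50):
    speed_mps = speed_mph * MPH_TO_MPS
    time_to_arrive_s = sight_distance_m / speed_mps
    margin_s = time_to_arrive_s - turn_time_s
    print(f"{speed_mph:2d} mph: arrives in {time_to_arrive_s:4.1f} s, "
          f"margin after a {turn_time_s:.0f} s turn: {margin_s:4.1f} s")
```

The point being that the difference between a car "doing 20" and one accelerating hard out of Fiveways roughly halves the time you have to commit, and that is exactly the judgement being made here by eye and ear.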
Anyway, back to your idea that Tesla FSD (Full Self Driving), or indeed RADAR, LIDAR, etc., is equivalent to "me": that is debatable. I do have my limitations, but I can reason about them, and I can reason about what FSD might mean as well.
You assert: "If human vision is good enough, computer vision is good enough." People and computers/cars/whatevs do not perceive things in the same way. I doubt very much that you have two cameras with a very narrow but high res central field of view with a rather shag peripheral view which is tuned for movement. Your analogue "vision" sensors should be mounted on something that can move around (within the confines of a seatbelt). Yes I do have to duck under the rear view mirror and peer around the A pillar etc.
I have no doubt that you have something like a camera with a slack handful of Google Corals trying to make sense of what is happening, but it is really, really complicated. I actually think that your best bet is not to try to replicate my (or your) sensors and actions, but to think outside the box.