
I currently work full-time in the self-driving vehicle industry. I am part of a team that builds perception algorithms for autonomous navigation. I have been working exclusively with LiDAR systems for over 1.5 years.

Like a lot of folks here, my first question was: "How did the LiDAR not spot this?" I have been extremely interested in this and have kept studying images and videos from Uber to understand what the issue could be.

To reliably sense a moving object is a challenging task. To understand/perceive that object (i.e., shape, size, classification, position estimate, etc.) is even more challenging. Take a look at this video (set the playback speed to 0.25): https://youtu.be/WCkkhlxYNwE?t=191

Observe the pedestrian on the sidewalk to the left, and keep a close eye on the laptop screen (held by the passenger on the right) at the bottom right. Scrub back and forth +/- 3 seconds around these two locations. You'll notice that the height of the pedestrian varies quite a bit.

This variation in pedestrian height and bounding box happens at different points within the same video. For example, at the 3:45 mark, the height of the person on the right wearing a brown hoodie keeps varying. At the 2:04 mark, the bounding box estimate for the pedestrian on the right side appears unreliable. At the 1:39 mark, the estimate for the blue (Chrysler?) car turning right jumps quite a bit.
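
To make the instability concrete, here's a minimal sketch (not Uber's pipeline or ours, all numbers made up) of why per-frame jitter in a detected height is normally smoothed by tracking over time rather than trusted frame by frame:

    # Toy illustration: per-frame bounding-box heights from a detector can
    # jitter; a tracker typically smooths them over time.  The values and the
    # smoothing factor below are invented for illustration only.

    raw_heights_m = [1.7, 1.2, 1.8, 0.9, 1.75, 1.6, 1.1, 1.7]  # noisy per-frame estimates

    def ema(values, alpha=0.3):
        """Exponential moving average: higher alpha trusts newer frames more."""
        estimate = values[0]
        smoothed = []
        for v in values:
            estimate = alpha * v + (1 - alpha) * estimate
            smoothed.append(round(estimate, 2))
        return smoothed

    print(ema(raw_heights_m))  # varies far less than the raw detections

So the flicker you see in the video isn't fatal by itself, but it does hint at how noisy the raw detections are.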

This makes me believe that their perception software isn't robust enough to handle the exact scenario in which the accident occurred in Tempe, AZ.

I think we'll know more technical details in the upcoming days/weeks. These are merely my observations.



Alright, so given your observations, which I don't doubt, here's a question I have: why have a pilot on public roads?

If Uber's software wasn't robust, why "test in production" when production could kill people?


> If Uber's software wasn't robust, why "test in production" when production could kill people?

Because it's cheap. And Arizona lawmakers apparently aren't doing their job of protecting their citizens against a reckless company making the classic "privatize profits, socialize losses" move: the "profits" are the improvements to their so-called self-driving car technology, and the "losses" are the random people endangered and killed while they alpha-test and debug that technology in this nice testbed we call a "city", which conveniently comes complete with irrationally acting humans whom you don't even have to pay for serving as actors in your life-threatening test scenarios.


Disclaimer: I am playing Devil's advocate and I don't necessarily subscribe to the following argument, but:

Surely it's a question of balancing this against the long-term benefit of widely adopted autonomous driving?

If self-driving cars in their current state are at least close to as safe as human drivers, then you could argue that a small short-term increase in the casualty rate, in exchange for a faster development rate, is a reasonable cost. The earlier proper autonomous driving is widely adopted, the better for overall safety.

More realistically, if we think that current autonomous driving prototypes are approximately as safe as the average human, then it's definitely worthwhile - same casualty rate as current drivers (i.e. no cost), with the promise of a much reduced rate in the future.

Surely "zero accidents" isn't the threshold here (although it should be the goal)? Surely "improvement on current level of safety" is the threshold?


You can make the argument with the long-term benefits. But you cannot make it without proper statistically sound evidence about the CURRENT safety of the system that you intend to test, for the simple reason that the other traffic participants you potentially endanger are not asked whether they accept any additional risk you intend to expose them to.

So you really need to be very close to the risk they're exposed to right now anyway, which is approximately one fatal accident every 80 million miles driven by humans, under ANY AND ALL environmental conditions that people drive in. That number is statistically sound, and you need to put a number on the other side of the equation that is equally sound and on a similar level. This is currently impossible to do, because no self-driving car manufacturer is even close to having multiple hundreds of millions of miles traveled in self-driving mode in conditions that are close enough to real roads in real cities with real people.

Purely digital simulations don't count. What can potentially count, in my eyes, is real miles with real cars in "staged" environments, such as a copy of a small city, with other traffic participants who deliberately subject the car to difficult situations, erratic actions, et cetera, all of whom must be okay with their exposure to potentially high-risk situations.
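
To put a rough number on what "statistically sound" implies here, a back-of-the-envelope sketch (the 80-million-mile figure is the one above; the rest is the standard "rule of three" bound for zero observed events):

    # How many fatality-free self-driving miles would you need before you can
    # claim parity with human drivers at ~95% confidence?  Uses the "rule of
    # three": zero events in n trials bounds the rate below roughly 3/n.

    human_miles_per_fatality = 80_000_000   # figure quoted above; estimates vary
    human_rate = 1 / human_miles_per_fatality

    # Parity requires the 95% upper bound 3/n to be <= the human rate.
    miles_needed = 3 / human_rate
    print(f"{miles_needed:,.0f} fatality-free miles")   # ~240,000,000

Nobody is anywhere near that kind of mileage under realistic conditions.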

Of course such a staged test environment is absurdly expensive. But it's not impossible, and it's the only acceptable way of developing this high-potential but also highly dangerous technology up to a safety level at which you can legitimately argue that you are NOT exposing the public to any kind of unacceptable additional risk when you take the super-convenient and cheap route of using public infrastructure for your testing. If you can't deal with these costs, just get the fuck out of this market.

I'm also incapable of entering the pharmaceutical development market: even if I knew how to mix a promising new drug, I would not have the financial resources to pay for the extensive animal and clinical testing necessary to make that drug safe enough to sell to real humans. Or can I just argue "hey, it's for the good of humanity, it'll save lives in the long run, and I gave it to my guinea pig which didn't die immediately, so statistically it's totally safe!" when I'm caught mixing the drug into the dishes of random guests at a restaurant?


It's an n of 1, but we're nowhere close to 'human driver' levels of safe.

Humans get 1 death per 100 million miles.

Waymo/Uber/Cruise have <10 million miles between them. So currently they're 10 times more deadly. While you obviously can't extrapolate like that, it's still damning.

If you consider just Uber, they have somewhere between 2 and 3 million miles, suggesting a 40x more deadly rate. I think it's fair to consider them separately as my intuition is that the other systems are much better, but this may be terribly misguided.
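
The arithmetic behind those multipliers, using the rough mileage estimates above (not official figures):

    # Rough arithmetic behind the "10x" and "40x" figures.  Mileage numbers
    # are the estimates quoted in this thread, not official data.

    human_miles_per_death = 100_000_000

    fleet_miles = 10_000_000    # Waymo + Uber + Cruise combined, upper estimate
    uber_miles = 2_500_000      # midpoint of the 2-3 million estimate
    deaths = 1

    print(human_miles_per_death / (fleet_miles / deaths))   # ~10x the human rate
    print(human_miles_per_death / (uber_miles / deaths))    # ~40x the human rate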

This is a huge deal.

I honestly never thought we'd see such an abject failure of such systems on such an easy task. I knew there would be edge cases and growing pains, but 'pedestrian crossing the empty road ahead' should be the very first thing these systems are capable of identifying. The bare minimum.

This crash is going to result in regulation, and that's going to slow development, but it's still going to be justified.


I have the same questions as well. But my best guess is that they probably have permission to drive at non-highway speeds late at night/early in the morning (which is when this accident occurred, at 10 PM).

My first reaction when I watched that video was that my Subaru with EyeSight+RADAR would have stopped/swerved. Even the news articles state something similar (from this article: https://www.forbes.com/sites/samabuelsamid/2018/03/21/uber-c...)

>The Volvo was travelling at 38 mph, a speed from which it should have been easily able to stop in no more than 60-70 feet. At least it should have been able to steer around Herzberg to the left without hitting her.
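
For what it's worth, that figure checks out with a rough back-of-the-envelope calculation (assuming roughly 0.8 g of braking on dry pavement and zero reaction time, which are my own assumptions, not from the article):

    # Sanity check on the quoted 60-70 ft stopping distance at 38 mph.
    # Assumes ~0.8 g braking and zero reaction time, i.e. the case where the
    # system has already detected the obstacle.

    v = 38 * 5280 / 3600               # 38 mph in ft/s (~55.7 ft/s)
    a = 0.8 * 32.2                     # deceleration in ft/s^2
    print(f"{v**2 / (2 * a):.0f} ft")  # ~60 ft, consistent with the article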

As for why they test this way, I'm guessing peer pressure(?). Waymo is way ahead in this race and Uber probably doesn't wanna feel left out, maybe?

Once again, all of these are speculations. Let's see what NTSB says in the near future.


I live here and they drive around at all times of the day and don't seem to have any limitations. They've been extremely prevalent and increasing in frequency over the past year. In fact, it's unusual _not_ to see them on my morning commute.


> At least it should have been able to steer around Herzberg to the left without hitting her.

Does the car have immediate 360-degree perception? A human would have to look in one or two rear-view mirrors before steering around a bike, or risk putting himself and others in an even worse situation.


Sorry but that's just wrong behaviour IMO.

If you're about to hit a pedestrian and your only option is to swerve, then you swerve. What could you possibly see in the rear-view mirror that would change your reaction from "I'm gonna try to swerve around that pedestrian" to "I'm gonna run that pedestrian over"? Another car? Then you take your chances and turn in front of that car! The chances that people will survive the resulting crash are way higher than the survival rate of a pedestrian being hit at highway speeds.


When driving, you should always be aware of where your "exits" are. This is not hard to do. Especially at 38 mph, you can be extremely confident there are no bikes to your left if you have not passed any in the past couple of seconds. And lanes are generally large enough in the US that you can swerve partway into one even if there are cars there.


If everybody is driving at the same speed in all lanes, which is not unlikely on that kind of road, I generally am not confident that I can swerve into another lane _and slam the brakes_ without being hit. If I am hit, the resulting impact speed with the bike could be even worse than if I had just slammed the brakes, so I don't think it's really a given.

You also cannot decide in 1 second what would happen if the pedestrian were to freeze, and whether you'd end up hitting him/her even worse by swerving left.

Most people in that situation would just brake, I think.


Because Uber wanted that.

Other self-driving car companies (like Google (or whatever they renamed it)) have put a lot more work into their systems and done a much greater degree of due diligence in proving their systems are safe enough to drive on public roads. Uber has not, which is why they've been kicked out of several cities where they were trying to run tests. But Tempe and Arizona are practically a lawless wasteland in this regard and are willing to let Uber run amok on their roads in the hopes that it'll help out the city financially somehow.


I'm assuming LiDAR is not the only sensor installed in self-driving cars. Isn't that the case? And in this scenario, the software didn't have a lot to process. The road was empty, and the pedestrian was walking her bike perpendicular to traffic...

Even if the detection box changed in size, it should have detected something. Tall or short, wide or narrow, static or moving... it should at least have applied the brakes to avoid a collision.


I'm really surprised that we're even talking about the pedestrian's clothes or lighting or even the driver. Isn't the entire point of sensors like LiDAR to detect things human beings can't? The engineering is clearly off.


LIDAR works by shining a laser beam of a certain wavelength. If an object completely absorbs that wavelength, there's no way the LIDAR can see it.
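
As a toy illustration (a sketch with made-up numbers, not any specific sensor): the return scales with the surface's reflectivity at the laser's wavelength and falls off roughly with range squared, and anything below the receiver's noise floor simply never becomes a point in the cloud.

    # Toy lidar return model: received power ~ reflectivity / range^2.
    # All constants are invented; real sensors are far more complicated.

    def has_return(reflectivity, range_m, noise_floor=1e-4, k=1.0):
        """reflectivity in [0, 1]; k lumps emitted power, aperture, etc."""
        return k * reflectivity / range_m**2 > noise_floor

    print(has_return(0.8, 40))     # ordinary target at 40 m: True
    print(has_return(0.002, 40))   # near-total absorber at 40 m: False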


Is it possible for the car to do some sort of calibration to estimate its current "sensor visibility", like a human would do in fog? Is it common practice to use this information to reduce or alter the speed of the car?


Great question. At least in our algorithms we do this: we adjust the driving speed based on the conditions (e.g., visibility or perception capabilities).

At the end of the day, you can only drive as fast as your perception capabilities allow. A good example is how much more slowly humans perceive when influenced by drugs/alcohol/medications vs. when uninfluenced.
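
In rough terms, the logic looks something like this (an illustrative sketch, not our actual code; every number here is made up): pick the highest speed whose reaction-plus-braking distance still fits inside the range at which perception is currently reliable.

    # Cap speed so reaction distance + braking distance fits within the range
    # at which the perception stack is currently reliable.  Illustrative only.

    def max_safe_speed(perception_range_m, reaction_s=0.5, decel_mps2=6.0):
        """Largest v with v*t + v^2/(2a) <= perception_range (positive root)."""
        a, t, d = decel_mps2, reaction_s, perception_range_m
        return -a * t + (a * a * t * t + 2 * a * d) ** 0.5   # m/s

    for rng in (20, 40, 80):   # e.g. degraded vs. clear visibility
        print(f"perception range {rng} m -> ~{max_safe_speed(rng) * 2.237:.0f} mph")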

What is baffling is the fact that the car was driving at 38 mph in a 35 mph zone. This should not happen regardless of how good or poor your sensing/perception capabilities are.


Maybe the question isn't why the LIDAR didn't spot it. I feel it's more likely it did spot it, but couldn't make the correct decision.


You summed up all my speculations in one sentence.



