
> Ordinary neurotypical drivers act "unexpectedly" on the road all the time.

"all the time" means it is not unexpected. Humans do mistakes and omissions that you've subconsciously learned that humans do, so you can adjust to deal with that.

The difficulty with AI is that it behaves like a completely alien being, making errors that are alien and unpredictable to a human driver.



Each individual behavior is indeed unexpected. When I go through an intersection I don't expect that somebody will blow through a red light and t-bone me. But that does happen.

Perhaps a self-driving car's unexpected behavior is on the "too defensive" side while a human driver's is on the "suddenly do something wildly dangerous" side, but the former is easier to account for as a driver, not harder.


You don’t expect that someone will blow through a red light, but you expect that they could, and so you look both ways before you accelerate if you’re the first one waiting when the light turns green.

You wouldn’t expect someone to suddenly accelerate from a stop while they have a red light, so you don’t typically look both ways when driving through a light that has already been green for a while when there are cars on either side waiting.

I’m not saying Waymo cars do that; just an example of expected unexpected vs unexpected unexpected behavior.


Alien and unpredictable until we have enough experience with them to know their edge cases.

From my experience with LLMs, AIs are fairly consistent even when asked to be creative, even if at first they seem to produce output that's more creative than human norms.


Because of the number of interfaces in the environment, there is a near-infinite number of edge cases. The main point is that because we evolved a theory of mind with humans, we can more reliably narrow down those edge cases with human drivers without having to experience them first. Having an AV learn those edge cases bears some risk, and that risk may be carried by unwilling members of society.


Unwilling members of society are already carrying the risk of every nascent driver entering the road for the first time solo and every aging driver who hasn't yet had the accident that loses them their license.


That's the point of the risk part of the comment. Because we've evolved to have a theory of mind with other humans, we can at least know or intuit some of that risk. That's different from accepting a black-box risk. The whole point of the trust part of the comment is that, in order to build trust, we need to be able to make risk-informed decisions.


> "all the time" means it is not unexpected. Humans do mistakes and omissions that you've subconsciously learned that humans do, so you can adjust to deal with that.

For example, just because someone has their turn signal on and is slowing down doesn't mean they're going to turn.




