In 2018, there were about 37,000 road traffic deaths in America.
If a company invented a self-driving car that killed 1,000 people a year, it would never be allowed on the street. Even 100 deaths a year seems high. Yet such a car would actually save tens of thousands of lives.
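To make the arithmetic explicit, here is a back-of-envelope sketch. The numbers are purely illustrative, and it assumes autonomous vehicles fully replace human driving:

```python
# Hypothetical back-of-envelope calculation: if self-driving cars fully
# replaced human drivers, the net lives saved per year would be roughly
# the human-caused toll minus the autonomous-vehicle toll.
human_deaths_per_year = 37_000  # US road deaths, 2018 (from the text)
av_deaths_per_year = 1_000      # hypothetical autonomous-vehicle toll

net_lives_saved = human_deaths_per_year - av_deaths_per_year
print(f"Net lives saved per year: {net_lives_saved:,}")  # 36,000
```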
Why are we so much stricter with self-driving technology than with human drivers? Why can't we simply choose the option that saves more lives?
When someone causes harm with their car, they bear the civil (and sometimes criminal) liability for their own acts, and they usually cannot cause harm more than once within a short timeframe.
In the case of an autonomous AI, the company that makes the vehicle's software will be liable, and because the same software runs in every car, the same defect is likely to recur within a short time period.
That leaves such companies perpetually on the verge of bankruptcy, because they are a prime target for class-action lawsuits.