Alarmed by Tesla’s self-driving test, state legislators demand answers from DMV (latimes.com)
90 points by turtlegrids on Dec 12, 2021 | 156 comments



Reader view works too (on mobile)


Note that there are two “FSD Beta” programs:

1. The paid software unlock for “Full Self Driving” (currently in “beta”) available to everyone, which provides automatic driving from on-ramp to off-ramp, and a handful of parking features like auto-park and summon: https://www.tesla.com/support/autopilot#usingautopilot

2. The invite-only “FSD Beta” that unlocks the above FSD features on city streets, which is available to a subset of customers w/#1 who have very high Tesla Safety Scores.


I'm confused as to 1) why they would call both programs by the same name (I'd argue an acronym is pretty much the same name) and 2) why the government even allows them to call it Full Self Driving, when there are defined levels of autonomous driving and I imagine most people would assume "Full" means the highest level, which Tesla has not yet achieved, from everything I've read.


If you wanted to stall and mire an in-process investigation, then having two separate systems named almost exactly the same thing and constantly quibbling over their semantics would be a good way to do it.


The “Beta” classification for the former is a post-facto excuse adopted after a fatal crash (there have naturally been several). The latter is actual unfinished software (called a beta, but more of an alpha/proof of concept).


> I imagine most would assume Full means the highest level

USB 2.0 got away with Full Speed/High Speed (with high being more than full)


USB failures likely don't result in the serious injury or death of the user


Maybe with USB "full speed" I'd assume it's the fastest the current USB spec can go. For some reason, with self-driving, the modifier "full" makes me think Level 5 autonomy, which I'm almost certain it does not have.

Actually, NHTSA defines[0] level 5 as: "An automated driving system (ADS) on the vehicle can do all the driving in all circumstances. The human occupants are just passengers and need never be involved in driving."

[0]: https://www.nhtsa.gov/technology-innovation/automated-vehicl...


> Maybe with a USB "full speed" I assume it's the fastest that the current USB could go.

That's correct; "Full speed" refers to the higher speed profile for USB1.1. It's a historical artifact, at this point, and wasn't generally used for marketing purposes.


Those weren't marketing names; that is, no-one was being presented with devices labeled "USB Full Speed (meaning USB1.1 full speed)" and buying them on the basis that that must be better than "USB High Speed".


"I Can't Believe It's Not Self-Driving"


> ...cited the apparent poor performance of what Tesla calls Full Self-Driving beta, a $10,000 option that gives owners advanced automated driving features and allows them to test cutting-edge autonomous technologies on public roads. (Beta is a software term that describes a product not quite ready for general sale to the public.)

Wait, does this mean people are actually paying extra money to become test dummies for this?


Yes, Tesla owner here. When you buy you’re given the option to purchase “FSD” for $10k. All the caveats reveal that it’s not really FSD though, of course, and you’re essentially paying for these partial FSD features and are a part of the testing (like opting into iOS beta, for example).

You can go through the order flow on Tesla's site and see this quickly.


People are paying to unlock advanced Autosteer features ($10k flat fee, or $199/mo) and a small subset of those are invited to the FSD Beta that enables these features on city streets in addition to highways

https://news.ycombinator.com/item?id=29531598


yes. and also giving a whole lot of driving data to tesla for use in their insurance program and obviously to train their AI.

i’m one of them AMA


What are the use cases, and how often do you use it? (reposting to everyone who says they're open to questions)


I find highway driving more relaxing using FSD. It will request lane changes appropriately, and then I can just approve instead of constantly wondering which lane I should be in to make the exit in time. I have to take over quite often, but on the whole it's definitely made driving more enjoyable, so I use it almost every time on the highway. I bought it when it was $7k, and at least half of my motivation was to see how it progressed (since I am in the AI field myself), which it has done significantly since then.


What are the use cases, and how often do you use it? (reposting to everyone who says they're open to questions)


yes, turns out "all our vehicles have the necessary hardware for full self driving" was another of the many boasts that aren't technically lies only because of the layers of fine print included and the generous interpretations thereof from the Musk sycophants.


Edited to remove snarky reply of:

“No, you should do some basic research first.”

The FSD beta is no additional cost over the (currently) $10,000 package that adds additional AP capabilities and hardware upgrades. My computer was already upgraded for example.



> “No, you should do some basic research first.”

Why should I? I read the article and can ask/talk about it here--discussion is literally the point.


Actual user of the beta here. Happy to answer any questions that are honest. My opinion is let the technology develop in the open, warts and all.


As someone who shares a road with Teslas, it feels like I have been enrolled in this beta without my consent, and without any tangible benefits to me.

What are the arguments for why non-owners should support this approach?


There are several tangible benefits if you drive near a lot of Teslas on Autopilot in a highway setting. Besides the advantages of adaptive cruise control (constant speed and a set follow distance), it adds safe lane changes where the car always has visibility into blind spots and always activates the blinkers.

This beta adds the same benefits and more on city streets.


As someone who shares (reality) with (people), it feels like I have been enrolled in this beta (consisting of taking photos, video recording, coughing, driving) without my consent, and without any tangible benefits to me.

What are the arguments for why non-owners should support this approach?


I actually have no idea what point you are trying to make, assuming there was one.


I have a dash cam in my Yaris. It's not any different than a Tesla recording video around you.


Help me understand how you think your opinion entitles you to gamble with other people's lives.


Because I don’t view it that way as an actual user of the system. I know I am still in full control of the vehicle, as is any driver using it. Merely tapping the brakes is hardwired to return control to the driver. The system also has many controls to monitor driver attentiveness and alert or disable itself if the driver is not attentive. Recently folks have been kicked out of the beta program for lots of reasons, so they are monitoring this closely.


I see the argument, but I don't think driver control/supervision is actually the crux of the issue. If Firestone had a completely new tire technology that was mostly OK but had an elevated risk of spontaneously blowing out, I would not be happy if they were doing their beta testing on public roads, regardless of who was driving.

The technology being tested seems half-baked and not yet ready for testing on public roads. At the very least, their approach to software updates does not seem to be sufficiently tested before deployment. This video[1] linked in another thread is an example of what I mean.

[1] https://twitter.com/mostlyharmlessz/status/14694217625553510...


It's not clear to me that a general-purpose autonomous car can be developed in a vacuum, as you suggest. Tesla has all the resources in the world to test with their own drivers on closed loops and using simulations; however, they chose to go this route and open themselves up to significant liability. Given the pace of development of other autonomous cars, I think this may be the only viable approach.


What? How many people do you think we should sacrifice on the altar of technological progress? And why wouldn't they get a say in that?


Agree, but I think this boils down to a disagreement over at what point a FSD beta is ready for public testing.


Just because you can make steering, braking, and acceleration changes to the car does not mean you are “in control”. The software makes such aggressive decisions that it can become impossible to react, causing massive additional danger to others, such as was the case in the recent crash.

https://twitter.com/mostlyharmlessz/status/14694217625553510...


I don’t know the specifics of that incident but it should be exceedingly rare. I would like to know more about it and will try to find out more.


The system also has many controls to monitor driver attentiveness and alert or disable itself if the driver is not attentive.

Sorry, but the same system lets the driver play video games while driving down the road[1], which is kind of the opposite of what you claim.

1 - https://www.nytimes.com/2021/12/07/business/tesla-video-game...


Would you trust it to drive your mother or a loved one across the country with zero assistance or interventions?


It is not possible in the current state so the point is moot. Seeing how it works first hand I am a firm believer in the vision only approach. I do think one day this will be possible with Vision (cameras) only.


Why do you say that? Did you have a first hand experience with not-only-vision approach?


Doesn't "warts and all" for developing self-driving tech mean people dying?


It is possible, and it has already happened with the Uber incident (even with that car having a safety driver).


The ultimate fake it 'til you make it.


Having seen many hours of FSD beta on Youtube, with the most recent updates, I am convinced the FSD driving is safer on the highway than humans driving. I bet the stats for accidents per mile driven are much lower for FSD than humans. It does stumble, very gracefully, in really unusual circumstances when driving in city streets. But it's so so close to covering these outlier situations.

If I could choose to have only FSD teslas around me on the highway, or only human drivers, I'd pick the FSD every time.


This is pretty much my experience. A lot of people testing it are in city centers with lots of complex things going on. For the most part in suburbs or smaller cities the driving is a lot more straightforward and it is already quite good there.


My delivery isn’t until June and I’ve held off on buying FSD (maybe I’ll enable it for a month when I do a long trip), but I’m worried about this phantom braking issue I’ve heard about.


You will have basic autopilot which can also phantom brake. It is not as bad as it seems and was possibly worse when they used radar. As long as you are attentive to the system it is not a big deal.


But why would I bother to buy an autonomous driver if I must be attentive to the system? I can just drive it myself then.


I would get an older one. Newer ones lack the expensive radar component. Camera-only can be disabled by kids with a Q-beam (bright flashlight).


On the contrary, phantom braking for me was almost always caused by the radar, especially with overpasses in front of you.


Does it work?


It does work quite well already and each release is getting better. I’ve done several city drives with zero interventions. As long as you are attentive it is very safe.


Slightly off topic, but it's fun to see company culture reflected in the algorithm behaviour.

There are thousands of clips of Tesla's Fatal Self Driving being far too aggressive for American roads (https://nitter.net/taylorogan/status/1469404579439824899).

Meanwhile, companies like AutoX have the problem of being far too conservative for Asian roads (https://www.youtube.com/watch?app=desktop&v=TFEvkmvIjVo). AutoX has some of the best edge case handling I've seen.


This I've long thought to be the Achilles' heel of autonomy. Even skilled human drivers have a hard time calibrating their style to the local situation.

I remember the first time I drove as a passenger through Manhattan with my father driving, and couldn't get my head around his change in driving style. Was a side of him I'd never seen before - driving aggressively like a cab driver. I'm sure it came from his years of living in NYC.


For sure. Driving on a test track is a technical problem. Driving in traffic is a social problem. As you noted, how one drives in traffic is not just about getting somewhere, but about communicating and negotiating with other drivers around you.

It's one of the things that makes me think that true self-driving may be in the AGI bucket.


If all the cars were autonomous, then this problem would be solved because they would all keep the same driving style, right?


Not at all. You really think somebody wouldn't program their cars to be more aggressive so that their owners get there faster?


I think there will have to be a NYC mode


Learning the appropriate driving style for a neighborhood / city seems like one of the easier AI problems for Tesla to solve.


I'm a big fan of Tesla but I'm fairly certain that, at some point, they're going to voluntarily refund everyone the money they paid for self-driving. And, at some later point, they will probably make self-driving work but on a future generation of HW/SW.

Genuine question: if this happens, will it kill Tesla to refund all of this money? I don't pay close enough attention to their financials to know myself. I assume (and hope) not.


Software isn’t really a problem. What hardware changes do you think Tesla will need to make to deliver on FSD, whether that ends up being actual level 5 or not?

It is also not that expensive to retrofit most new hardware, especially if it is just new silicon.


They have already done a hardware upgrade to my car from HW2.5 to HW3, from the Nvidia-based solution to their own custom silicon. The $10,000 price (now; it was $5,000 when I bought it) includes all hardware upgrades.


I have seen nothing in Elon Musk's public persona to suggest he would ever voluntarily refund FSD beta fees. He might "voluntarily" do so under enough government pressure but he seems more likely to thumb his nose at regulators and take his chances on losing a class-action suit.


This is why I drive a big truck. When egomaniacs can trick the government into allowing them to set their half baked creations loose on public roads using unknowing and unwilling third parties as test subjects, I want to be the dominant term in any and all conservation of momentum equations that may ensue.

As an aside, I’ve driven maybe 50,000 miles in my lifetime. Of those, 47,000+ are using pre-2005 vehicles with zero “advanced safety features”. However, I can recall two times in my driving career where I felt collision was imminent, and both of them occurred in the <5% of the time I was driving with “advanced safety features”. Yes, the computer was able to step in and avoid a collision, but it strikes me as odd that the only times I’ve needed it are when it’s available. In my opinion these new cars lull me into a false sense of security where I do not give appropriate attention to road conditions. Has anyone noticed the same? I fear that as the technology gets better and better people will pay less and less attention, putting more and more lives at risk when things inevitably go wrong.


Interesting game equilibrium. You buy a big truck because you're threatened by lightweight Teslas. I buy a semitruck to protect myself from you. A few cycles later, everyone's screaming around like Snowpiercer in APCs and military bulldozers.


As much as I’d like to roll down I-5 in an M1 Abrams I think the economics of pulling a shitload of metal out of the earth and moving it around really fast will present significant damping to the system you describe.

As for the “lightweight Teslas”, I looked it up and in fact the Model S and X are heavier than my Big Truck, though the Model 3 is indeed lighter.


What does a big truck have to do with safety features? To me, it just sounds like you’re happy to pollute the environment just so you’re sure to annihilate your counter party in an accident.


Many studies have shown (and basic physics suggests) that the survivability of accidents goes way up with vehicle weight, all other things being mostly equal.

https://www.iihs.org/topics/vehicle-size-and-weight


As I said, momentum. Are you familiar with Newton's laws?


What's funny about this is the safety ratings for Teslas are much better than any big trucks.


Teslas are also heavier than many trucks (including mine) so this isn't surprising. That doesn't mean I want something even lighter.


If I had the money to buy an expensive car, I would want to drive it myself. If I could buy an airplane, I would want to fly it myself, too.

I don’t understand rich guys who fork over for what is essentially a baby carriage pushed by a robot nanny. Where is your self-respect?


So you can't imagine what people would prefer over driving expensive cars? :D


No. And I am not sure what substance you abused that caused you to misinterpret me in that way. Perhaps caffeine? Let’s go with caffeine.

I once owned a sports car. I bought it because I enjoy driving. I am saying that I, as a red blooded American man, do not understand why other men who are in the market for an expensive car would choose to purchase what is functionally a nannyless pram.


The only way you couldn't understand why people would prefer not driving a sporty car they can afford is for you to be unable to imagine a better way to spend time in a sporty car that doesn't involve driving it.

As you can see with me understanding exactly what you said, you tend to jump to conclusions while not understanding things. Maybe that explains the inability to grasp "rich people" motives.


When a 16-year-old kid starts learning to drive on city streets, is anybody alarmed? I doubt that is much safer.

We should treat the AIs just as children at this point. They are simply learning.


Along this vein, let’s have some sort of visual indicator on the car when it’s self driving, maybe a green light on the front/back the same way that we flag learners. That way other drivers can be extra cautious when in the vicinity.

I’m not sure this would have helped with the Tesla that steered into oncoming traffic. Luckily for the other driver, the Tesla driver reacted and crashed off in the other direction. If he hadn’t done that, I don’t think the other driver would have had time to react even if he had been aware of the danger the Tesla posed.


I really thought the brakes would have been slammed on Tesla's shipping of autonomous car software after the incident where the guy was decapitated by a tractor trailer.


The question is whether overall the technology makes Tesla drivers more safe. They’ve shown data that drivers with the autopilot features enabled are safer than those who do not have it on.

While you can dispute the data, it’s not as simple as “car makes one type of mistake”, “feature should be banned”.


At that time the system was merely lane keep with adaptive cruise control. Any system would have had a similar result at that time.


Not any system. My recollection of that specific event is that it was due to the bright white tractor trailer not being picked up by the Tesla's camera. A system built with an orthogonal sensor (e.g. a LIDAR) could have picked up the trailer despite the camera missing it. Proper fusion of those streams would have avoided the issue, and that person would be alive, or at least not dead because of Tesla's poorly engineered system.


To clarify, any adaptive cruise system in deployment at that time.


should have had emergency auto-braking. As I understand it, the radar either didn’t see the trailer or filtered it out as stationary, and the camera was unable to distinguish white steel from white sky. It should have been a wake-up call that current tech doesn’t have the sensory bandwidth to know if something is blocking the road.


As they should, from all agencies involved. It is absolutely horrifying to see Tesla do this open beta test on public roads when no other non-Tesla entities (other cars, pedestrians, motorcyclists, bicyclists, etc) consented to being a part of a reckless experiment.

Moreover, Tesla influencers on social media tend to cherry-pick and show only the nicer parts of their FSD Beta drives, where no incidents or disengagements occur, whereas in the comments you will often see some beta users disagreeing and showing the scary bits. And of course there is the pattern of Tesla influencers acting like an echo chamber, absolutely disagreeing with anything critical of Tesla and jumping to Musk's defense at every turn because... "My mission!". An example:

https://mobile.twitter.com/taylorogan/status/146940457943982...

A thread of the scariness of this Beta test on public roads:

https://twitter.com/Tweet_Removed/status/1438553523369615364...

Elon Musk has a history of overestimating Tesla's capabilities when it comes to autonomy, and of flouting regulations to reach the end goal. He wants to make people believe in the illusion that Tesla has some insurmountable lead in autonomy, but they are actually playing catch-up and are dead last because of the approach they took.

Here's a timeline of Musk dangling the carrot of full autonomy (SAE Level 5) for over 6 years now.

https://www.reddit.com/r/SelfDrivingCars/comments/n6nsmt/elo...

But people argue, "Oh, but they are getting better and will eventually iron out all the edge cases!" This is not a matter of making some settings change and having things magically fix themselves. These are safety-critical systems, and not considering edge cases while advertising the feature as "Full Self-Driving" can lead to fatalities. The edge cases don't seem to be finite, and no amount of "Dojo will solve this!" or "The system is learning more and more and will get there!" will solve the problem with their approach. At some point, he will have to eat humble pie and incorporate more sensors to increase safety. But the ongoing chip shortage, and the fact that they have to sell as many cars as possible lest the stock promotion crumble, make that an even harder task.

It's always some next version that will blow your mind. It's always the next version that's almost there.

https://twitter.com/elonmusk/status/1435967157662150675?s=20

https://twitter.com/elonmusk/status/1469541536128020485?s=20


It’s not just Tesla influencers vehemently defending FSD; videos showing dangerous FSD behavior are also taken down very quickly using DMCA. It seems like there is a concerted effort to suppress “bad videos” that break the self-driving narrative.

Latest example: https://twitter.com/mostlyharmlessz/status/14694217625553510...


The DMCA takedown was by the original recorder of the video against the person who re-uploaded it to YouTube without permission. Tesla had nothing to do with it.


I didn’t say Tesla had anything to do with it and I don’t know who is prompting them. What’s clear is that DMCA takedowns are now a pattern for “bad” FSD videos.


There is no pattern


The Tesla influencers — the explicit and implicit ones — have really soured me on the brand. I have driven in the original Tesla, loved the Model S, owned a model Z, and had a Cybertruck on order for a while (and intended to follow through with the purchase). I am a big Tesla fan, but the constant stream from the influencers rings like the “no matter what, this event is good for crypto” people. No matter what, Tesla acolytes jump in, show only the good things, and rail against anything bad. I have cancelled my Cybertruck order partially because of this behavior, Elon's behavior, and in no small part because I saw a Rivian on the road the other day!


You’re saying that Tesla influencers pick and choose only the best videos on social media. This whole article is about videos of poor self-driving of Tesla cars, and you post a video of bad self-driving. The videos I’ve seen show some good and some bad automated manoeuvres.

Even if what you’re saying is right about influencers picking and choosing the best videos, it’s clear that there are videos of poor behaviour as well.

With so many videos out there I think it’s pretty clear what you’re getting with FSD regardless of the name. It is glitchy and unreliable and can get random safety issues with software regressions.

At the same time when it works it works pretty well, and it does also seem to be improving rapidly based on the videos that I’ve seen. A long way to go but it’s getting better.

It’s also not clear they’re dead last - they may be but there’s lots of people using it now and showing videos, which is harder to see for other companies.

You might have good points here but I don’t think it’s as black and white as you make out regarding how effective Tesla’s approach is with respect to safety, degree of progress, feasibility with current sensors, feasibility in general, misleading market, and general motivations and morals.


> https://mobile.twitter.com/taylorogan/status/146940457943982...

Damn, the part where he tried to take over and the car wouldn't let him for a while was downright scary

Reminds me of a famous quote...

> "I'm sorry, Dave. I'm afraid I can't do that."

I feel like there should be a hardware way of taking over that overrides anything the software does.


There is. Tapping the brake pedal.


I believe Tesla testing FSD on open roads is probably going to cost lives. I think they're aware of the game they're playing and they're taking more risks than is responsible.

I also believe the driver in that video doesn't have the level-headedness to be testing anything anywhere. Which is a criticism both of the driver and of Tesla for letting people like that run the software on the open road.


I think it is a calculated risk to get to autonomy faster and thus save lives.


Yes, I do think that's what their game is. Which I'm not inherently opposed to. But I think that at present they're taking more risk than I'm comfortable with having on our public streets.


Sure, there is a risk pedestrians and people in other cars will die. However, Elon is brave enough to take that risk!


Exactly. If you press the brakes the system gives complete control back to the driver. It's interesting to see all of the comments about the "horrifying" experiments Tesla is doing, and quoting science fiction movies about how robots are going to murder you. So much sensationalism.


The FUD is unreal.


I’ve read on r/selfdriving that he did take over, the “take over” alert just lingers for a few seconds.


> https://mobile.twitter.com/taylorogan/status/146940457943982...

> He wants to make people believe in the illusion that Tesla are in some unsurmountable lead when it comes to autonomy but they are actually playing catchup and are dead last because of the approach they took.

I'm sorry, but yeah, the results in that video are worse than what we had 14 years ago after the DARPA Urban Challenge. I'm actually astounded how bad it is. An insurmountable lead this is not, and points to how much of an advantage LIDAR is.

Also, while we all have our robot failure blooper reels, this is a system that's live and in production, and Tesla should be ashamed to release this even under the "Beta" label.


Correct me if I am wrong, but the approach used by most in the DARPA challenge was lidar with pre-mapped routes and heavy use of localization. If the route changed significantly, I think those cars would have had trouble.

I believe Tesla's approach might currently feel behind but has much more “headroom”. As Elon states, the team has gone down many paths that lead to a local maximum short of the actual end goal. They firmly believe that an end-to-end system has to be based on vision and has to be able to drive in any situation the very first time, without localizing itself to a preset path.


https://m.youtube.com/watch?v=0wJAANgG-Vg You mean this? It looks much less capable and a much more constrained environment than Tesla deals with.


MIT was in a collision and probably should have been disqualified, so not the best example. Look up Boss from CMU.

Moreover, what I said was after the DUC. The results you see at the DUC were the culmination of a little over a year’s worth of work. They were at best prototypes constructed under great pressure in a competition setting. A lot of what you see there was held together by duct tape and 80/20, and the code was not something you’d want to bet your life on.

But things did improve rapidly soon after, as results were disseminated and the community began to understand what worked and what didn’t in designing these systems.


The problem is that self-driving is almost entirely about edge-cases. And machine learning can optimistically only reach about 99% accuracy, so that last 1% of edge cases is where the real problem lies.
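
To make that "last 1%" concrete, here's a rough back-of-the-envelope sketch (all numbers invented purely for illustration): even a very high per-maneuver success rate compounds into a much lower chance of a mistake-free trip once a drive involves dozens of maneuvers.

    # Hypothetical numbers, purely illustrative
    per_maneuver_success = 0.99   # assume 99% of individual maneuvers handled correctly
    maneuvers_per_trip = 50       # turns, merges, pedestrian crossings, etc. on a city drive

    p_clean_trip = per_maneuver_success ** maneuvers_per_trip
    print(f"Trips with zero mistakes:  {p_clean_trip:.1%}")      # ~60.5%
    print(f"Trips with at least one:   {1 - p_clean_trip:.1%}")  # ~39.5%

Which is also why "99% accurate" means very different things depending on whether you count per maneuver, per mile, or per trip.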


Yeah, but the issues the Tesla seems to have in the video above (pedestrians crossing, rail tracks, bus lanes, parked UPS trucks) are hardly 1% edge cases, but common occurrences that are in every city.

And it wasn't even rush hour in some crowded European city like Rome, with people on scooters zooming in every direction, in poor weather at night; it was broad daylight in a city with barely any traffic.

If it can't handle basic city driving properly, then I'm sorry.


Agree with you, but that area in downtown San Jose is very weird to drive around given their light rail system.

Not defending Tesla for all of those mistakes, but as a human, it’s easy to end up going down the wrong lane.


I've personally seen cars in those San Jose train lanes enough times to wonder just how common it is.


I think there's disagreement about what 1% means - far less than 1% of my time spent operating a car involves interaction with a UPS truck, but more than 1% of my trips do.

Asking as someone with no first hand experience with these systems - do they complete 99% of trips with no disengagements or unexpected behavior, 99% of time driving, something else?


I'd be curious to know what definition of edge-case you are using. I think self-driving can't be entirely about edge-cases by definition (of edge-case).


Sure 99% of driving is pointing the car straight, keeping it between the lines and avoiding the car in front of you. That's easy and has been essentially solved by dozens of car makers.

The "edge" cases are pretty much everything else. There are a near infinite number of possibilities here and it is clear that no one is anywhere near close enough to covering them all. Stopping for a pedestrian when turning right is not some completely unforeseeable event.


I get it. However what matters the most in my view is the long term ratio of serious accidents in self-driving vs human driving. There is no need to cover all possibilities for that, right? The only problem with Tesla is IMHO that the name FSD is misleading.


> long term ratio of serious accidents in self-driving vs human driving

Human driver as in: average driver, or best driver?


You’re talking about the definition of a single edge case. You can argue about the exact proportion, but it’s entirely within the realm of possibility that real-world conditions are dominated by an amalgamation of many different edge cases that together add up to many percentage points.


Thank you for providing the video, the situation is appalling.


There is the argument that they are already safer than human drivers though and it'd be morally objectionable to stop. Not saying I'm completely on that side, but it is a valid utilitarian argument.


Have you watched some of these videos in question? These videos of autonomous city driving are so bad that this claim seems absolutely laughable to me.

The burden of proof is on the person making the claim of safety. For Autopilot, the argument was made by comparing known AP deaths per AP mile against deaths per mile for the entire US fleet, when the former almost certainly didn't include its full share of the deaths and was driven under less demanding conditions (highway miles).


This argument doesn't seem fully thought through. First, "already" makes it sound as if this property is monotonically increasing, i.e., that the cars will stay safer from now on even as more people use the feature. From what we see in the video, that cannot be concluded: if this technology is used in more vehicles, the rate of accidents may well increase, possibly making it less safe in total; we also don't know how many accidents were avoided only because other drivers prevented them. Second, the types of accidents will likely change too, in a way that could make the technology less safe overall. Third, if drivers are not fully alert all the time, ready to quickly take over when the technology fails in such colossal ways as we see in the video, does it still remain safer?


The detractors don't disagree with the argument, but instead opine that current evidence does not support the premise.


That argument is bullshit built on aggregate Tesla Autopilot miles driven, which are largely cherry-picked ideal condition miles vs. human miles driven in all conditions.


If this argument held consistently for every subset of cases, possibly. But for certain edge cases, humans still consistently outperform Tesla FSD (white trucks against the sun, for example) [citation needed], which means that in certain scenarios that cannot be foreseen, the risk to the non-consenting public is higher than with the average driver.

Ergo there's a predictable and preventable source of accidents that must be suppressed.


Distracted human drivers are terrible. People, in general, are easily and eagerly distracted; otherwise the distance driven per accident would be much, much higher. https://insideevs.com/news/542336/tesla-miles-driven-between...


There’s no such argument, just a dirty extrapolation and implication. They argue:

a) [Broadly speaking], cars [from traditional big autos] with ADAS features are objectively safer, and,

b) Tesla Autopilot is [in traditional sense] an ADAS system, which,

c) [implies this set of facts alone make Tesla cars and implementations statistically safer] is cool, huh? Would be silly if you didn’t agree with me.

I don’t call it clever.


They are safer than some human drivers in some conditions. They are not safer than every human driver in every condition though, so you'd need more to make a valid utilitarian argument.


The utilitarian argument doesn't require that they be better than every human driver in every condition -- just that they be better enough against the average situation that outcomes are net better.


I don't think that's right, because human drivers are exceedingly good at handling the average situation. If they weren't, you'd have accidents literally all the time. The average situation is accident-free. How much can you improve on that?

The utilitarian argument for driverless cars needs to be that driverless cars are better at handling the edge cases than humans, because that's when accidents happen.


I think you’re confusing “better” and merely “good,” as well as “average” and “mode.”

The average situation is maybe 0.99 accident-free (made up number for illustration). You improve on that by being 0.995 accident-free, which is better.

I agree that “safer than some human drivers in some conditions” is insufficient.


What I'm saying is that you can't design a car to handle the average driving situation and expect to really put a dent in the accident rate, because accidents are highly correlated to particular situations. For instance, if all of a town's accidents happen in a highly foggy area, but your driverless car cannot handle the fog because it's been designed to handle the average situation (not fog), then how will it reduce accidents? I would think that to reduce accidents it would have to be designed to work in the exceptional case (fog).

Maybe I'm confusing the ideas of average and mode, so if I am, an example of what you have in mind as an average situation would help.
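
To put rough numbers on the fog example (invented figures, just to illustrate the shape of the argument): if crashes are concentrated in a rare condition the system can't handle, improving only the common case hits a hard ceiling.

    # Invented numbers, purely illustrative
    fog_share_of_crashes = 0.5     # suppose half the town's crashes happen in the foggy area
    clear_share_of_crashes = 0.5

    # A car that is flawless in clear weather but no better than a human in fog:
    relative_crash_rate = clear_share_of_crashes * 0.0 + fog_share_of_crashes * 1.0
    print(f"Crashes relative to humans: {relative_crash_rate:.0%}")   # 50%, even in this best case

So the total crash rate can never drop below the share of crashes that occur in the conditions the system doesn't handle.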


The hard part is in the city. It is probably already a lot safer on the highways.


As someone who is actively participating in the beta, I can tell you that you couldn’t be more off base in your assertions. It sucks to see progress stifled by people who have no idea what they are talking about. With the thousands of people actively using the FSD Beta safely, show me significant evidence of an increased level of accidents that have occurred. You can’t, because the evidence does not exist.


Tesla doesn't publish data or share it with the NHTSA, so that's very difficult to do. The only """data""" we have are videos uploaded to YouTube by a select group of Tesla owners who have access to the beta. Even ignoring the bias in which videos the beta testers select for upload, every single city driving clip I've seen has several illegal maneuvers and many more disengagements.


That's more an indication of the clips you select to watch. I've seen half hour videos with no problems whatsoever.

We can't really draw any conclusions from self-selected video clips.

I do think the data should be openly available for all to study and analyze. If that helps competitors, then that's the price of testing in public.


There are thousands of people using the beta safely. Try actually delving into the numerous communities online discussing the beta rather than being swayed by influencers.


"the evidence does not exist" because it's not true or because Tesla is hiding it? We should know which, and these legislators asking questions seems to be a good start towards getting answers.


How could Tesla hide something that happens on public roads?

If a tesla is at-fault, there’s another non-tesla party in the accident and police and dmv usually involved.


They can hide it by discouraging anyone from having enough answers to draw conclusions. What's one cop going to do? It'll get written up sloppily in one report, _maybe_ if they think it's important, and that'll be it. There usually won't be enough details of any sort.


Because it's not true at all. Most major Tesla incidents of any kind make headlines, from fires to accidents on AP. You should do some basic research on the topic.

https://www.theverge.com/platform/amp/2021/10/21/22738834/te...

The FUD is _insane_ with Tesla.


If it's not true, Tesla should be overjoyed to share raw data (not processed through their marketing department) then.


I don’t think they even really have a traditional marketing department. The beta release notes are probably something you should check out.

Latest:

Improved object detection network architecture for non-VRUs (e.g. cars, trucks, buses). 7% higher recall. 16% lower depth error, and 21% lower velocity error for crossing vehicles.

New visibility network with 18.5% less mean relative error.

New general static object network with 17% precision improvements in high curvature and nighttime cases.

Improved stopping position at unprotected left turns while yielding to oncoming objects, using object predictions beyond the crossing point.

Allow more room for longitudinal alignment during merges by incorporating modeling of merge region end.

Improved comfort when offsetting for objects that are cutting out of your lane.


They _absolutely_ have a marketing department; PR, whatever you want to call it.

Those numbers don't look useful for judging safety. How would one compare those against anything else?


Percentages are quite useless without the absolute numbers as a reference...


So why doesn't Tesla share raw data with the public, or regulators? I cannot think of better PR for Tesla, and autonomous driving in general.


They do and it is easily googled.

https://www.tesla.com/VehicleSafetyReport


That's only Autopilot and only gives one top line number per quarter, and comes after so much marketing fluff that it's pretty clear what department is generating it.


Do your own research! :-D


Yes, let's wait for that evidence to manifest itself instead! Smashing idea!


[flagged]


"This is the same woman that stated "fuck elon musk" on twitter,"

You are wrong.

Lena Gonzalez (D-Long Beach) != Lorena Gonzalez (D-San Diego)


Good catch. Tricky because Lena Gonzalez (D-Long Beach) || Lorena Gonzalez (D-San Diego) == Lena Gonzalez (D-)


No wonder Tesla is leaving California; over and over again you see corruption and government control stifling innovation.


Asking why the DMV is allowing beta-quality autonomous driving systems on public roads is “corruption”? Lol.


Who determines "beta quality"? What you're complaining about here is something all developed countries already allow anyway... let the lobbyists bribe whoever they want.


Tesla does. The program is literally called “Full Self Drive (FSD) Beta”!


Shouldn't the government ask why a "beta" software that's being sold as "full self driving" is operating with seemingly no oversight?


While using it you are required to be fully attentive, keep your hands on the steering wheel, and be ready to take control at any time. What is not clear about that?


What's not clear is if that's good enough. I doubt it is.


I'm not willing to die for someone's "innovation."



