It wasn't just hiding the video. It was mainly the content of that video.
And then we had all the other leaks.
Like the one revealing that the cars couldn't reliably distinguish children from adults, making them (even by their own overly optimistic models) more dangerous to children than human drivers - which is why they started operating mostly at night, when there are few children out.
And then it was reported they had 1.5 employees per car in operations, with cars requiring support every 2-5 miles. Very surprising, given that they'd reported a disengagement every 95,000 miles last year.
Cruise was a company that could not withstand transparency.
> And then it was reported they had 1.5 employees per car in operations, with cars requiring support every 2-5 miles. Very surprising, given that they'd reported a disengagement every 95,000 miles last year.
If true that alone should have disqualified them from being street legal.
"Support every 2-5 miles" = having a human on standby to deal with a situation that might soon require human intervention. According to Kyle, speaking as himself on HN, most of these support events don't result in the remote operator doing _anything_.
> most of these support events don't result in the remote operator doing _anything_
That's an utterly ridiculous statistic. If you need a human monitoring in order to prevent a bad outcome, it doesn't matter how often the human actually has to intervene.
For example: With Tesla Autopilot, the driver doesn't have to do anything the vast majority of the time. Still, your life expectancy will be measured in days if you don't monitor it.
Even if the vehicle makes completely random decisions when faced with a binary choice, you could still say that 50% of the support events don't result in the remote operator doing anything.
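To make that concrete, here's a toy simulation (all numbers made up, purely illustrative): a system that flips a coin at every binary decision point will report that roughly half of its support events required no operator action, while every single event was one coin flip away from disaster.

```python
import random

random.seed(0)

events = 10_000
# Toy model (hypothetical numbers): at each flagged decision point the
# vehicle effectively flips a coin. Half the time it happens to pick the
# safe option, so the standby operator "does nothing" -- even though
# every event was one coin flip away from needing intervention.
no_action_needed = sum(random.random() < 0.5 for _ in range(events))

print(f"{no_action_needed / events:.0%} of support events needed no operator action")
```

The "operator did nothing" rate tells you nothing about how often doing nothing would have been catastrophic.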
Yeah, and unlike with humans who double park, people lose their minds at an AV, which is why the company increased the number of operators.
They're also overstaffed to deal with surges - I recommend you go read the (ex) CEO's response to these claims. As the number of cars on the road goes up, the number of employees required to handle surges grows more slowly. At some point the cars-per-operator ratio improves beyond the 1:20 it's already at.
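The sublinear-surge claim matches a standard queueing rule of thumb, square-root staffing: staff for the mean load plus a safety margin proportional to the square root of the load. A quick sketch (this is NOT Cruise's actual model; `events_per_car` and `safety` are invented parameters):

```python
import math

# Square-root staffing rule of thumb from queueing theory -- not Cruise's
# actual model, just an illustration of why surge staffing can grow more
# slowly than fleet size. Both parameters below are made-up assumptions.
def operators_needed(cars: int, events_per_car: float = 0.05, safety: float = 2.0) -> int:
    load = cars * events_per_car              # mean concurrent support events
    return math.ceil(load + safety * math.sqrt(load))

for fleet in (100, 1_000, 10_000):
    ops = operators_needed(fleet)
    print(f"{fleet:>6} cars -> {ops:>4} operators ({fleet / ops:.0f} cars per operator)")
```

Under these assumptions the cars-per-operator ratio keeps improving as the fleet grows, which is the shape of the argument the CEO was making, whether or not the real numbers bear it out.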
> Yeah, and unlike with humans who double park, people lose their minds at an AV, which is why the company increased the number of operators.
Looks like they haven't increased the number of operators enough.
> They're also overstaffed to deal with surges
Are they? How are they going to handle the next bay area earthquake? Will they have enough operators for that, or are they going to block first responders all over the city?
Fortunately, this is a moot point, because it is doubtful whether they are going to survive.
> I recommend you go read the (ex) CEO's response to these claims. As the number of cars on the road goes up, the number of employees required to handle surges grows more slowly. At some point the cars-per-operator ratio improves beyond the 1:20 it's already at.
Nice theory. Note that the number of remote operators shocked everyone who was following Cruise. They were constantly asked about the number of cars per operator, what exactly the duties of a remote operator were, etc. They always avoided these questions, and for good reason. It turns out the reality was far worse than anyone outside Cruise could have imagined.
Sam's push to commercialize and lockdown infrastructure is not in any way incompatible with the mission of OpenAI.
Just as no one is ready for marriage or kids until it happens, we aren't ready for AGI until it happens. It will never be safe enough.
Sam's strategy is to get it out as widely as possible, with incremental progress acclimating us to it. In exchange, OpenAI gets real data on safety over time.
The board is naive and idealistic. Ilya is a genius and his heart's in the right place, but unfortunately his brain isn't.
We know that transformers can generalize within the training set. We know that transformers can make connections between wildly different domains (at least when prompted).
Of course they can't generalize beyond their training data - why would they? But at the same time, there is probably a huge amount of value lurking INSIDE the training data that humans haven't unlocked yet.
> We know that transformers can generalize within the training set.
> Of course they can't generalize beyond their training data - why would they?
4 out of 5 people I discussed this subject with didn't know this, and even believed that current LLMs are not bound by their training set. They claimed that LLMs could synthesize data beyond their training set, and that the resulting answers would never be wrong.
There's a widespread misunderstanding about how these things work, and LLM developers don't put in the effort to correct it, since it helps raise the hype even further.
I suppose generalization outside the training set may occur by chance, should the outside data share enough of the same 'structure'. Basically, if you can find maps from the outside data onto your training set, then perhaps generalization may occur.
Remote operation of vehicles often makes a lot of sense economically, since you can effectively decouple drivers from vehicles/riders. As you pointed out, this means you can shift staffing to deal with peak loads and all of that - great.
Given everything you know now, was it wise to push for expansion over improvements to safety and reliability of the vehicles? On one hand, there is certainly value in expanding a bit to uncover edge-cases sooner. On the other hand, I'm not convinced it was worth expanding before getting the business sorted out.
My guess is that, given the relatively large fixed costs involved in operating an AV fleet, it makes some sense to expand at least up to that sort of 'break even' point. Do we know what that point is? Put differently, is there some natural "stopping point" of expansion where Cruise could hit break-even on its fixed costs and then shift focus towards reliability?
The first thing that came to my mind after reading “… makes a lot of sense” was the latency overhead incurred when RA is activated, and the association with drunk driving due to the increased response time.
Maybe the article answers the following, but I don't know, since I haven't read it yet:
- median, p95, p99 latencies for remote assistance
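For context, here's a sketch of how those percentiles would be computed from an RA latency log (the latency values below are invented for illustration; real figures would have to come from Cruise's telemetry):

```python
# Sketch: median/p95/p99 from a log of remote-assistance latencies.
# The values below are made up, in milliseconds, for illustration only.
from statistics import median, quantiles

latencies_ms = [120, 150, 90, 300, 180, 2500, 140, 110, 170, 95,
                160, 130, 220, 105, 400, 175, 145, 85, 190, 1200]

cuts = quantiles(latencies_ms, n=100)  # cut points p1 .. p99
print("median:", median(latencies_ms))
print("p95:", cuts[94])
print("p99:", cuts[98])
```

The tail percentiles are what matter for the drunk-driving comparison: a fine median is no comfort if p99 is measured in seconds.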
I think a lot of the confusion here is over what's meant by "RA". This isn't a remote driving situation. It's like Waymo, where the human can make suggestions that give the robot additional information about the environment.