We'll know Tesla is serious about robotaxis when it starts hiring remote operators.
> Remote intervention only works for things like post-crash moving the vehicle out of the way, or where the vehicle is stuck in some situation that it has no reference for and doesn't "know" what to do. It's not a way to avoid crashes and intervene - there's not enough time for that - even a human driver behind the wheel likely doesn't have enough time to intervene most times.

From an economics perspective this makes perfect sense: have the low-effort "mundane driving work" performed by computers. At the point where there's an indication of a risky situation (a low-single-digit percentage), either switch to a remote driver or have them supervise and intervene as necessary. Even if your labor cost is twice as expensive because folks need to be computer savvy, and there's significant fixed cost in terms of mandatory wireless and additional hardware, you still win compared to missing the edge case/causing a crash ... and you get more training data in the process.
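To make that back-of-envelope economics concrete, here's a toy calculation; every number in it (per-mile costs, the escalation rate, the crash cost) is a hypothetical assumption, not a figure from the article:

```python
# Back-of-envelope economics of remote supervision, per 100,000 miles.
# All numbers are hypothetical assumptions for illustration only.

MILES = 100_000

autonomy_cost_per_mile = 0.10      # compute, sensors, connectivity (assumed)
remote_driver_cost_per_mile = 1.00 # ~2x a regular driver's wage (assumed)
escalation_rate = 0.03             # low-single-digit share of risky miles

# Cost when risky situations are escalated to a remote human:
supervised = MILES * (
    autonomy_cost_per_mile
    + escalation_rate * remote_driver_cost_per_mile
)

# Cost when edge cases are missed instead: assume one extra crash
# per 100,000 miles at an average cost of $50,000 (assumed).
unsupervised = MILES * autonomy_cost_per_mile + 50_000

print(f"with remote escalation: ${supervised:,.0f}")   # $13,000
print(f"missing the edge cases: ${unsupervised:,.0f}") # $60,000
```

Even at double wages, paying a remote human for ~3% of miles is cheap next to a single avoided crash.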
I don’t see how Tesla’s “camera only” approach ever works. They need the additional sensors that Musk forced them to take out.
And “robotaxis” is not some liquid gold industry! It is a pretty niche business that “doesn’t scale” because the taxi business just isn’t that big.
> "This makes sense for other reasons, too. It would give Tesla time to introduce itself to local officials and offer training to local police and fire departments."

I don't think it's crazy to expect that new technology will sometimes require first responders to get training. For example, when EVs were new, firefighters needed training on how battery fires differ from the gasoline fires of ICE vehicles. Here, police officers need to know how to communicate with a vehicle that doesn't have a driver. Obviously, these companies should work hard to minimize the demands on first responders, but it's impossible to design an AV that works exactly like a human-driven car from the perspective of law enforcement.
I know this isn't a quote, and it's Timothy's words - but imagine the chutzpah needed to expect local law enforcement and first responders to adjust to your private company's playthings deployed on public roads (before they're ready), and not the other way around (i.e., your vehicles need to adjust to them).
> Remote intervention only works for things like post-crash moving the vehicle out of the way, or where the vehicle is stuck in some situation that it has no reference for and doesn't "know" what to do. It's not a way to avoid crashes and intervene - there's not enough time for that - even a human driver behind the wheel likely doesn't have enough time to intervene most times.

It depends how the self-driving system is designed. Waymo's approach is that any time the vehicle thinks there's a risk of a crash it will preemptively come to a stop and wait for remote guidance. If it's sufficiently good at predicting when a crash might happen (which is obviously not trivial) this should be pretty effective at preventing crashes.
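A minimal sketch of that "stop first, then phone home" policy; the class names, threshold, and API here are my invention for illustration, not Waymo's actual architecture:

```python
# Illustrative sketch of a "stop and wait for remote guidance" policy.
# Names and thresholds are invented; this is not Waymo's actual design.
from dataclasses import dataclass
from enum import Enum, auto

class Action(Enum):
    CONTINUE = auto()
    SAFE_STOP = auto()              # slow down and stop / pull over
    AWAIT_REMOTE_GUIDANCE = auto()  # ask a human to confirm a path

@dataclass
class Perception:
    crash_risk: float   # model's risk estimate, 0.0 - 1.0
    is_stopped: bool

RISK_THRESHOLD = 0.02   # hypothetical: escalate on low-single-digit risk

def decide(p: Perception) -> Action:
    if p.crash_risk < RISK_THRESHOLD:
        return Action.CONTINUE
    # Too risky to keep driving autonomously: stop first, then ask a
    # remote operator for guidance (the operator advises on a path;
    # they don't joystick the car in real time).
    return Action.AWAIT_REMOTE_GUIDANCE if p.is_stopped else Action.SAFE_STOP
```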
> I don't think it's crazy to expect that new technology will sometimes require first responders to get training. For example, when EVs were new, firefighters needed training on how battery fires differ from the gasoline fires of ICE vehicles. Here, police officers need to know how to communicate with a vehicle that doesn't have a driver. Obviously, these companies should work hard to minimize the demands on first responders, but it's impossible to design an AV that works exactly like a human-driven car from the perspective of law enforcement.

So, instead of these companies all jumping in feet first, perhaps there should be a standard developed, so first responders, law enforcement, and others don't need training from every Tom, Dick, and Harry who decides to release their vehicles on the roads?
> I get that Ars loves to crap on Musk (it gets pretty old), but an article like this not even mentioning that Waymo has been under an NHTSA investigation since a week or so ago is a little dishonest.

Tesla, Cruise, and Zoox are also under NHTSA investigation. My sense is they are investigating everyone to cover their bases. Why would this be relevant?
> That doesn't make sense - if it can predict a crash is coming, why can't it take steps to avoid the crash? And preemptively stopping may cause a crash (e.g., getting rear-ended), causing the very thing they're trying to prevent.
>
> Having said that, it's good that they're being cautious - but if they don't have enough confidence in their systems such that human intervention is required when it thinks a crash is imminent, they don't belong on public roads IMHO. And that's Waymo - who are, by all accounts, light-years ahead... I can only shudder when imagining a Tesla robotaxi.

It seems plausible to me that "am I at risk of a crash" would be an easier question to answer than "what's the best set of steps to take to avoid the crash." Anyway, I think their results speak for themselves. Waymos have gotten into crashes far less often than human-driven cars on the same roads.
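One way to see why that asymmetry is plausible: estimating risk can be a single learned score checked against a threshold, while planning an evasive maneuver is a search over action sequences. A toy sketch (all names and numbers hypothetical):

```python
# Illustrative: why "am I at risk?" can be cheaper to answer than
# "what exactly should I do about it?". Hypothetical toy code.

def crash_risk(scene_features: list[float], weights: list[float]) -> float:
    """Risk estimate: one learned score, compared to one threshold."""
    return sum(f * w for f, w in zip(scene_features, weights))

def candidate_plans(actions: list[str], horizon: int) -> int:
    """Planning must consider sequences of actions over a time
    horizon, so the search space grows exponentially."""
    return len(actions) ** horizon

# Risk check: linear in the number of scene features.
# Planning: 5 maneuvers over a 6-step horizon is 15,625 sequences
# to evaluate - before even modeling other drivers' reactions.
print(candidate_plans(["brake", "coast", "swerve_l", "swerve_r", "accelerate"], 6))
```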
Remote intervention only works for things like post-crash moving the vehicle out of the way, or where the vehicle is stuck in some situation that it has no reference for and doesn't "know" what to do. It's not a way to avoid crashes and intervene - there's not enough time for that - even a human driver behind the wheel likely doesn't have enough time to intervene most times.
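For a rough sense of why there's "not enough time," here's a quick latency-budget calculation; the network and reaction figures are ballpark assumptions, not measurements:

```python
# Rough latency budget for a remote operator trying to prevent a
# crash in real time. All figures are ballpark assumptions.

speed_mph = 45
speed_m_per_s = speed_mph * 0.447          # ~20 m/s

video_uplink_s = 0.3       # encode + cellular round trip (assumed)
human_reaction_s = 1.5     # perceive, decide, act (typical estimate)
command_downlink_s = 0.2   # command back to the vehicle (assumed)

total_delay_s = video_uplink_s + human_reaction_s + command_downlink_s
distance_m = speed_m_per_s * total_delay_s

print(f"delay: {total_delay_s:.1f} s -> {distance_m:.0f} m traveled")
# ~2 s and ~40 m at city-arterial speeds - most crash windows are
# shorter than that, which is the commenter's point.
```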
> It seems plausible to me that "am I at risk of a crash" would be an easier question to answer than "what's the best set of steps to take to avoid the crash." Anyway, I think their results speak for themselves. Waymos have gotten into crashes far less often than human-driven cars on the same roads.

Sure, I can see how that could be correct. It just seems like a strange way to go - this approach is probably never going to be suitable for higher-speed driving (like highways or higher-speed major urban routes), but I guess they have to deal with where the technology is at, and it IS good that they're being cautious and responsible, unlike others (Uber, Tesla, etc.).
That doesn't make sense - if it can predict a crash is coming, why can't it take steps to avoid the crash? And preemptively stopping may cause a crash (e.g., getting rear-ended), causing the very thing they're trying to prevent.
Having said that, it's good that they're being cautious - but if they don't have enough confidence in their systems such that human intervention is required when it thinks a crash is imminent, they don't belong on public roads IMHO. And that's Waymo - who are, by all accounts, light-years ahead... I can only shudder when imagining a Tesla robotaxi.
> This is not true. Cars behind will maintain an appropriate stopping distance, which allows them time to react, as mandated by the highway code.

Uh huh, and drivers always follow the code. They don't text behind the wheel, drive drunk or under the influence of drugs, or do any of the myriad other things that make humans so often suck at driving.
> There's a section in here about how Waymo has less data than Tesla, but it's higher quality because the Waymo safety driver/remote operator documents each disengagement.

Given the latest court cases in the news and here at Ars, it is just as likely crash data gets treated...
I agree with the conclusion, but not necessarily how it got there. Tim assumes Tesla is working with unlabeled disengagement data. I'd assume differently: that every time Tesla Autopilot disengages, the event gets sent up into the cloud and over to India to be labeled. So Tesla probably does have a whole lot more labeled data.
OK, but how good is Tesla's labeling? You'd basically have to be able to label each disengagement as accidental or unneeded. Slapping that label on is almost always quicker than actually analyzing a complex traffic scene, so the incentives pile up for workers to reinforce the AI system's confidence that it can handle situations it can't. The way you fight this is with very good QC. Remind me, does Tesla have a history of good QC?
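For what "very good QC" might look like in practice, one standard pattern is gold-standard auditing: seed the labeling queue with items whose true label is already known and score each worker on them. A hypothetical sketch (nothing here describes Tesla's actual pipeline):

```python
# One standard QC pattern for label farms: mix pre-labeled "gold"
# items invisibly into the work queue and score each worker on them.
# Hypothetical sketch; not Tesla's pipeline.
import random

def audit_worker(worker_labels: dict[str, str],
                 gold_labels: dict[str, str]) -> float:
    """Fraction of the gold-standard audit items the worker got right."""
    hits = sum(worker_labels[item] == truth
               for item, truth in gold_labels.items()
               if item in worker_labels)
    return hits / len(gold_labels)

def build_queue(real_items: list[str], gold_items: list[str],
                gold_fraction: float = 0.1) -> list[str]:
    """Mix roughly 10% audit items, unmarked, into the work queue."""
    n_gold = min(len(gold_items), max(1, int(len(real_items) * gold_fraction)))
    queue = real_items + random.sample(gold_items, n_gold)
    random.shuffle(queue)
    return queue

# Workers who rubber-stamp every disengagement as "unneeded" will
# fail the gold items and can be retrained or down-weighted.
```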
> I don't think it's crazy to expect that new technology will sometimes require first responders to get training. For example, when EVs were new, firefighters needed training on how battery fires differ from the gasoline fires of ICE vehicles. Here, police officers need to know how to communicate with a vehicle that doesn't have a driver. Obviously, these companies should work hard to minimize the demands on first responders, but it's impossible to design an AV that works exactly like a human-driven car from the perspective of law enforcement.

See, this is where I struggle. It's for the cars to manage the situation more than for first responders to learn how to co-exist with the cars. If an officer is rapidly closing off a road after a fatal collision that left serious road hazards, they need to stop cars, make space for incoming emergency vehicles, and do it quickly. For officers to what... learn Tesla Sign Language? If an AV detects emergency lighting on the road, then it knows something is out of the norm. To me it's totally on the Teslas and Waymos of the world to sort this out.
> Carry that anti-Elon garbage, Ars... what a joke this website has become. Political instead of technical... = garbage.

Such garbage, yet you've stuck around for thirteen years. TIL that facts = anti-Elon garbage.
> Such garbage, yet you've stuck around for thirteen years. TIL that facts = anti-Elon garbage.

Wait until you see how they handle even the slightest bit of criticism about it.