Self-driving cars are often marketed as safer than human drivers, but new data suggests that may not always be the case.
Citing data from the National Highway Traffic Safety Administration (NHTSA), Electrek reports that Tesla disclosed five new crashes involving its robotaxi fleet in Austin. The new data raises concerns about how safe Tesla’s systems really are compared to the average driver.
The incidents included a collision with a fixed object at 17 miles per hour, a crash with a bus while the Tesla vehicle was stopped, a crash with a truck at four miles per hour, and two cases where Tesla vehicles backed into fixed objects at low speeds.



It’s important to draw the line between what Tesla is trying to do and what Waymo is actually doing. Tesla has a crash rate roughly 4x higher than the average human driver, while Waymo has a lower one.
Not just lower, a tiny fraction of the human rate of accidents:
https://waymo.com/safety/impact/
Also, AFAIK this includes cases where the Waymo car isn’t even slightly at fault. Like, there have been 2 deaths involving a Waymo car. In one case a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the car really at fault was a Tesla being driven by a human who claims he experienced “sudden unintended acceleration”. The Tesla was doing 98 miles per hour in downtown SF and hit a bunch of cars stopped at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.
Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.
https://www.iihs.org/research-areas/fatality-statistics/detail/state-by-state
Well, no. Let’s talk fatality rate. According to the linked data, human drivers
Vs Waymo 2 deaths per 127 million miles :)
Well, Waymo’s really at 0 deaths per 127 million miles.
The 2 deaths happened near Waymo cars, in collisions involving the Waymo car. Not only did the Waymo not cause the accidents, it wasn’t even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had already hit the Waymo (and a bunch of other cars).
The IIHS number takes the total number of deaths in a year, and divides it by the total distance driven in that year. It includes all vehicles, and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths” because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved as part of the collision” would be of limited usefulness. If you’re after the safety of the passenger compartment you’d want “all deaths for occupants / drivers of a brand X vehicle” and if you were after the safety of the car to all road users you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
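To make that concrete, here is a rough sketch of those calculations in Python. The 2 deaths and 127 million miles come from this thread; the fleet-wide totals are ballpark placeholder numbers for illustration only, not quoted from IIHS:

    # Sketch of the rate calculations described above. The Waymo figures
    # (2 deaths, 127 million driverless miles) come from this thread; the
    # fleet-wide totals are rough illustrative ballparks, not IIHS data.
    total_deaths = 40_000          # assumed: roughly all US road deaths in a year
    total_miles  = 3.2e12          # assumed: roughly all US vehicle miles that year

    # IIHS-style fleet-wide rate: all deaths / all miles driven
    fleet_rate = total_deaths / total_miles * 100e6
    print(f"fleet-wide: {fleet_rate:.2f} deaths per 100M miles")

    # Brand-scoped rates only make sense if numerator and denominator match.
    brand_miles = 127e6                       # from the thread: Waymo driverless miles
    deaths_in_crashes_involving_brand = 2     # any fatality in a crash the car was part of
    deaths_caused_by_brand = 0                # fatalities where the car was at fault

    print(f"involved at all: {deaths_in_crashes_involving_brand / brand_miles * 100e6:.2f} per 100M miles")
    print(f"at fault:        {deaths_caused_by_brand / brand_miles * 100e6:.2f} per 100M miles")

Note that with only two events in the numerator, the “involved at all” figure is mostly noise, which is the small-sample point made further down the thread.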
The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:
https://www.iihs.org/ratings/driver-death-rates-by-make-and-model
Also, in a Waymo it would never be the driver who died; it would be other vehicle occupants, so I don’t know whether that data is tracked for other vehicle models.
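For what it’s worth, you could force a rough comparison by assuming an average annual mileage per registered vehicle, but that assumption does most of the work, which is exactly why the two metrics don’t line up. A back-of-the-envelope sketch, where both input numbers are assumptions rather than IIHS figures:

    # Converting "driver deaths per million registered vehicle years" into an
    # approximate per-mile rate. Both inputs are assumptions for illustration.
    deaths_per_million_vehicle_years = 30     # assumed IIHS-style figure, not a real one
    assumed_miles_per_vehicle_year = 12_000   # assumed average annual mileage per vehicle

    miles_per_million_vehicle_years = 1e6 * assumed_miles_per_vehicle_year
    rate = deaths_per_million_vehicle_years / miles_per_million_vehicle_years * 100e6
    print(f"~{rate:.2f} driver deaths per 100M miles (very rough)")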
When there are two deaths total, it’s pretty obvious that there just isn’t enough data yet to consider the fatal accident rate. Also, FWIW, as was said, neither of those was in any way the Waymo’s fault.
I immediately formed a conspiracy theory that Teslas automatically accelerate when they see Waymo cars
And it’s not out of aggression. It’s just that their image recognition algorithms are so terrible that they match the Waymo car with all its sensors to a time-traveling DeLorean and try to hit 88 mph… or something.
Isn’t Waymo’s rate better because they are very particular about where they operate? When they are asked to operate in slightly less than perfect conditions, it immediately goes downhill: https://www.researchgate.net/publication/385936888_Identifying_Research_Gaps_through_Self-Driving_Car_Data_Analysis (page 7, Uncertainty)
Edit: googled it a bit, and apparently Waymo mostly drives in
Teslas do not.
We are talking about Tesla robotaxis. They certainly do drive in very limited geofenced areas too. While Waymo now goes on freeways (only in the Bay Area, with the option offered to only some passengers), Tesla Robotaxis currently do not go on any freeways at all. In fact, they only have a handful of cars doing any unsupervised driving, and those are geofenced in Austin to a small area around a single stretch of road.
Tesla Robotaxis also currently cease operations in Austin when it rains, so Waymo is definitely the more flexible one when it comes to less-than-perfect conditions.
That is certainly true, but they are also better than humans in those specific areas. Tesla is (shockingly) stupid about where they choose to operate. Waymo understands its limitations and chooses to operate only where it can be better than humans. It is increasing its range, though, including driving on the 405 freeway in Los Angeles… which usually moves at less than 35 mph!!
Because Waymo uses more humans?
No, they don’t.
Because Waymo doesn’t try and do FSD with only cameras.
Are they doing FSD if there are human overseers? Surely that is not “fully”.
So human overseers and not only cameras.
All these services have the ability for a human to solve issues if the FSD disengages. That doesn’t mean they’re not driving on their own most of the time, including full journeys. The remote assistance team is just ready to jump in if something unusual causes the Waymo Driver to disengage, and even then they don’t usually directly control the car; they just give the driver instructions on how to resolve the situation.