I think the interventions here are more like: “that’s a trash can someone pushed onto the road - let me help you around it” rather than: “let me drive you all the way to your destination.”
It’s usually not the genuinely hard stuff that stumps AI drivers - it’s the really stupid, obvious things it simply never encountered in its training data before.
Saw this blog post recently about Waymo’s sim setup for generating synthetic data, and they really do seem to be generating pretty much everything in existence. Either the model’s level of generalization is shockingly low, or they abort immediately at the earliest sign of high perplexity.
I’m guessing it’s the latter; they need to keep accidents to a minimum if they’re ever going to get broad legislation to legalise them.
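If it really is a perplexity-gated abort, the control loop would look something like this - a minimal Python sketch, where the planner, the threshold, and the `request_remote_assist` hook are all hypothetical names made up for illustration:

```python
# Minimal sketch of an uncertainty-gated fallback: if the scene looks too
# unlike the training data (high perplexity), stop and call a human instead
# of improvising. All names and thresholds here are hypothetical.

from dataclasses import dataclass
import random

PERPLEXITY_ABORT_THRESHOLD = 5.0  # made-up value for illustration


@dataclass
class PlanResult:
    action: str
    perplexity: float  # stand-in for the model's own uncertainty score


def plan(observation: dict) -> PlanResult:
    # Placeholder planner: a real system would run a learned driving policy.
    return PlanResult(action="continue", perplexity=random.uniform(0.0, 10.0))


def safe_stop() -> None:
    print("pulling over / holding position")


def request_remote_assist(observation: dict) -> None:
    print(f"escalating to a remote operator with context: {observation}")


def step(observation: dict) -> str | None:
    result = plan(observation)
    if result.perplexity > PERPLEXITY_ABORT_THRESHOLD:
        # Out-of-distribution scene: abort early rather than guess.
        safe_stop()
        request_remote_assist(observation)
        return None
    return result.action


if __name__ == "__main__":
    print(step({"scene": "trash can in the road"}))
```

The point is just that aborting on out-of-distribution scenes trades availability for safety, which matches the “stuck in the road” failure mode people keep observing.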
Every single accident is analysed to death by the media and onlookers alike, with a large group of people wanting it to fail.
This is a prime example: we’ve known about the human intervention for a while now, but people still seem surprised that those people are in another country.
Feels like robot hoovers when they encounter an unexpected poo.
Ancient texts show that robot hoovers did not have a means of intervention.
Hm. Interesting. But that makes them look even more incapable than I feared.
Broadly speaking, an AI driver getting stumped means it’s stuck in the middle of the road - while a human driver getting stumped means plowing into a semi truck.
I’d rather be inconvenienced than killed. And from what I’ve seen, even our current AI drivers are already statistically safer than the average human driver - and they’re only going to keep getting better.
They’ll never be flawless though. Nothing is.
AI drivers have slowly run over and crushed people before too, though, because they didn’t register the person as an “obstacle” to be avoided, or because the person was on the ground and the car didn’t see them.
And they always will. You need to look at the big picture here, not individual cases. If we replaced every single car on US roads with one driven by AI - proven to be ten times safer than the average human driver - that would still mean 4,000 people killed by them each year. That, however, doesn’t mean we should go back to human drivers and 40,000 people killed annually.
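The back-of-envelope arithmetic behind those numbers, in case anyone wants to tweak the assumptions (the ~40,000/year US figure is the commonly cited one; the 10x factor is the hypothetical):

```python
# Back-of-envelope: hypothetical fatality count if AI drivers were 10x safer.
human_fatalities_per_year = 40_000  # approximate US road deaths per year
safety_improvement_factor = 10      # the hypothetical from the comment above

ai_fatalities_per_year = human_fatalities_per_year / safety_improvement_factor
print(ai_fatalities_per_year)  # 4000.0 - still thousands of deaths, yet far fewer
```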
By that logic…
We should really be investing in trains and buses, not cars of any type.
As long as they use Level 3 autonomous cars and cheat with remote operators instead of real Level 5 cars, such statistics remain quite meaningless.
They do, however, say something about the people who use them as arguments.
As the OP stated, the low-velocity cases aren’t the ones causing deadly accidents. And you can’t remote-drive at high speed (too much latency). So I doubt it’s affecting the stats in any meaningful way.
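Rough numbers on why latency rules out remote driving at speed (the 250 ms round trip is an assumed figure, not a measured one):

```python
# How far a car travels "blind" during one remote-control round trip.
speed_kmh = 110              # highway speed
round_trip_latency_s = 0.25  # assumed network + operator reaction overhead

speed_ms = speed_kmh * 1000 / 3600  # convert km/h to m/s
blind_distance_m = speed_ms * round_trip_latency_s
print(f"{blind_distance_m:.1f} m travelled before a remote input takes effect")
# ~7.6 m at 110 km/h - workable at parking-lot speeds, dangerous on a highway
```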
Honestly, I’d much rather they have a human as a backup than not.