Cruise robotaxi collides with fire truck in San Francisco, leaving one injured::A crash between a Cruise robotaxi and a San Francisco Fire Department truck occurred last night in the Tenderloin. The incident happened a week after the California Public Utilities Commission (CPUC) approved 24/7 autonomous taxi ride services.
I’ve spent my entire career working in industrial automation, and I see the value AI and automation bring to the world.
I do not see the value in allowing private companies to playtest autonomous driving with human lives as potential collateral.
The argument keeps getting made — “how many humans make that same mistake daily?” — and it’s a false equivalence; if autonomous vehicles cannot reach 100% safety and accuracy, they should not be allowed to risk human lives.
Don’t let perfection be the enemy of good. I’m not suggesting we shouldn’t have a really high bar, but 100% is just unreasonable.
You’re arguing that even if autonomous vehicles are safer drivers than humans, we should choose to make ourselves less safe by disallowing them? Fuck that. Nobody should have to die because AI makes you squeamish.
Unnecessarily hostile comment — too bad that attitude didn’t stay on Reddit.
AI doesn’t make me squeamish at all. Ignoring the context in which I stated my background in automation was a choice, but the rub is using the general public to beta test hazardous equipment. Humans make errors and can be held responsible; corporations putting people at risk while bearing no responsibility is reckless.
The difference being that autonomous vehicles could reach 100% safety by removing all non-autonomous vehicles from the road and imposing a communication standard between vehicles, so they all know what the other vehicles are doing at all times.
That only applies to regions of the world where there’s no snow, because autonomous driving in a snowstorm will probably never be solved.
I guarantee that it would still not be 100%. Maybe 99% or even 99.9%, but not 100%.