• einsteinx2@programming.dev · 3 years ago

    How do you feel about the self-driving car use case? Say, for example, a self-driving car has a 0.5% risk of an accident, and thus human harm, over its usage lifetime, but a human driver has a 5% risk of an accident (making numbers up for the sake of argument, but let's say the self-driving car has a 0.1% chance of harm or greater, though still much lower than a human's). Would you still be against the tech, even though disallowing it would statistically cause more harm?
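    The hypothetical above boils down to an expected-harm comparison. A minimal sketch, using the commenter's made-up rates and an assumed fleet size purely for illustration:

```python
# Hypothetical per-lifetime accident risks from the comment above
# (made-up numbers for the sake of argument, not real statistics).
human_risk = 0.05          # 5% chance of an accident over a vehicle's lifetime
self_driving_risk = 0.005  # 0.5% chance for the self-driving car

fleet_size = 1_000_000     # assumed fleet size, chosen for illustration

expected_human_accidents = human_risk * fleet_size
expected_sdc_accidents = self_driving_risk * fleet_size
accidents_avoided = expected_human_accidents - expected_sdc_accidents

print(f"Human-driven:  ~{expected_human_accidents:,.0f} expected accidents")
print(f"Self-driving:  ~{expected_sdc_accidents:,.0f} expected accidents")
print(f"Statistically avoided by switching: ~{accidents_avoided:,.0f}")
```

    On these (invented) numbers the self-driving fleet causes an order of magnitude fewer expected accidents, which is the statistical argument the comment is making.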

    • magic_lobster_party@kbin.social · 3 years ago

      If it can be proven that it causes fewer accidents, maybe.

      My fear is that the accidents can be systematically triggered. For example, one particular curve the AI has trouble understanding. Or a person standing in one particular corner causes the AI to completely misinterpret the scene. Or one particular car color confuses it.

    • mynameisbob@lemmy.ml · 8 days ago

      Well, those numbers you came up with are not real. Also, every time the self-driving cars screw up, the companies pull back their usage to the point where they can still milk investors while the accidents decline. I think as they are introduced at a more rapid pace, we will find that the issues multiply. But that's just my opinion. Also, life would be safer if I lived in a bubble, but living in a bubble is not what I need. I need trains and public transit. I think closely monitored AI on rail systems might, over time, become a great success.