If AI and deepfakes can take video or audio of a person and then convincingly reproduce that person, what does this mean for trials?

It used to be that audio or video recordings carried strong evidentiary weight, often more than witness testimony, but soon enough near-perfect forgeries could enter the courtroom just as they're already entering social media (where you're not sworn to tell the truth, though the consequences are still real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still being able to use genuine video or audio as evidence? Or are we just doomed?

  • AbouBenAdhem@lemmy.world · 17 days ago

    Maybe each camera could have a unique private key that it could use to watermark keyframes with a hash of the frames themselves.
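
    A rough sketch of what that could look like, assuming an Ed25519 key pair provisioned into the camera at manufacture; the names and the use of the Python `cryptography` package here are illustrative, not an existing camera standard:

    ```python
    import hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Hypothetical per-device signing key, provisioned at manufacture and never exported.
    device_key = Ed25519PrivateKey.generate()

    def sign_keyframe(frame_bytes: bytes) -> bytes:
        """Hash the raw keyframe and sign the digest with the device's private key."""
        digest = hashlib.sha256(frame_bytes).digest()
        return device_key.sign(digest)

    # The signature would travel with the file (e.g. as metadata or an embedded watermark),
    # so anyone holding the camera's public key can later check the frame is unmodified.
    ```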

    • MoonManKipper@lemmy.world · 17 days ago

      I think that’s exactly how it’s going to work: you can’t force all ‘fake’ sources to carry signatures, since it’s too easy to produce an unsigned fake for malicious reasons. Instead you have to create trusted sources of real images. Much easier and more secure.
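
      Concretely, the verification side could then just be a lookup against a registry of public keys published by trusted camera makers. A hedged sketch with made-up names (`TRUSTED_DEVICE_KEYS`, `is_authentic`), not a real registry, matching the signing sketch above:

      ```python
      import hashlib
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

      # Hypothetical registry mapping device IDs to public keys from trusted manufacturers.
      TRUSTED_DEVICE_KEYS: dict[str, Ed25519PublicKey] = {}

      def is_authentic(device_id: str, frame_bytes: bytes, signature: bytes) -> bool:
          """Accept a keyframe only if it carries a valid signature from a registered device."""
          public_key = TRUSTED_DEVICE_KEYS.get(device_id)
          if public_key is None:
              return False  # unknown device: unverified, though not necessarily fake
          try:
              public_key.verify(signature, hashlib.sha256(frame_bytes).digest())
              return True
          except InvalidSignature:
              return False
      ```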

    • OsrsNeedsF2P@lemmy.ml · 17 days ago

      Usually when I see non-technical people throw out ideas like this they’re stupid, but I’ve been thinking about this one for a few minutes and it’s actually kinda smart.