Microsoft CEO calls for tech industry to ‘act’ after AI photos of Taylor Swift circulate on X

Satya Nadella spoke to Lester Holt about artificial intelligence and its ability to create deepfake images of others. After the pictures of Taylor Swift circulated, he called for action.

  • EfreetSK@lemmy.world · +126 / -9 · 10 months ago (edited)

    AI generation can be used for disinformation that could destabilize, or outright end, the world as we know it.

    But fake Taylor Swift pictures, this is where we draw the line …

  • bloopernova@programming.dev · +65 / -8 · 10 months ago

    "TAYLOR SWIFT WAS A LINE THAT SHOULD NOT HAVE BEEN CROSSED.

    PREPARE TO REAP THE WHIRLWIND!"

    – The White House, apparently.

    • xor@infosec.pub · +47 / -7 · 10 months ago

      i will never understand how taylor swift became this super duper billionaire royalty who i have to hear about every day now…

        • xor@infosec.pub · +35 / -1 · 10 months ago

          i understand why people like her, but not the other part… as in, the thing i was actually talking about

    • deweydecibel@lemmy.world · +30 / -16 · 10 months ago (edited)

      So we’re just going to pretend this is only about Taylor Swift, are we? Makes the jokes easier, I guess.

      The subject being Taylor Swift just made the issue more visible than normal. It’s not specifically about being upset it happened to her.

      The press secretary literally said it was about women in general being the targets of abuse. All that happened here was that this got the attention of more people than usual, so the White House used that opportunity to make a statement on it.

      • SkyNTP@lemmy.ml · +26 / -2 · 10 months ago

        The fact that a celebrity was the line being crossed is a symptom of a major societal sickness.

      • SkyezOpen@lemmy.world · +17 / -1 · 10 months ago

        There was an incident where a high school girl had AI porn of her passed around the school. It made national news. That should have been reason enough, but no, Taylor Swift is when we take a stand.

      • shalafi@lemmy.world · +5 / -13 · 10 months ago

        Imagine how many people are not know-it-all lemmings. “I could have done this in Photoshop 10 years ago! Nothing new!” Well, it’s new to a shitload of people today.

        There’s already a bunch of idiot comments about revenge porn, which this is not, but let’s muddy the waters, shall we? Get people talking about revenge porn and they’ll assume Swift took nudes that got loose. And to many of those people, that would be her fault.

        Our reaction should be, “Great news! Now this sort of thing has reached a mainstream audience and hit so hard the White House is having a say!”

        Nope, not around here, not good enough. See, we gotta feel technically superior, because this isn’t news to us. We gotta feel morally superior, uh, because… What exactly were all you loudmouths doing about this before yesterday?

        Thanks for the sane take.

  • Cosmicomical@lemmy.world · +32 · 10 months ago (edited)

    Isn’t this something that could have been done with Photoshop in 30 minutes? What’s the difference, when an almost-perfect result could already be made just as easily?

    PS: I haven’t seen the images being discussed, and that’s even more alarming given that legislation could be passed based on images you’re not even morally allowed to review. It could all be fictional and I would never know.

    • beebarfbadger@lemmy.world · +5 / -2 · 10 months ago

      WON’T SOMEBODY THINK OF THE CHILDREN!!! I am clearly loudly screaming about the children, so anybody who is against me with their logic or reasoning or well-founded objections must then obviously be AGAINST the children because these are the only two (2) options that exist. So you better shut up and take whatever abuse I can dish out because you don’t wanna look like a terrorist! Or a pedophile? I forget what boogeyman we’re currently using as an excuse to get people to agree to taking away their rights.

  • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · +34 / -4 · 10 months ago (edited)

    Didn’t we all see this coming? Porn deepfakes were already a thing, and even before generative AI we had people photoshopping women into explicit situations.

    I’d even say that right now we have much better tools to deal with the fakes than before AI, and all that is required is legislative action.

    The tech is already capable of doing automatic facial recognition at scale and we could give victims the tools to automatically send take-down notices and have them enforced.
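
    A minimal sketch of what that could look like, assuming the third-party face_recognition Python library; the file paths and the "queue take-down notice" step are placeholders, not a real notice service:

    ```python
    # Hypothetical sketch: flag uploads that appear to contain a protected
    # person's face so a take-down notice could be drafted for each hit.
    # Assumes `pip install face_recognition`; all paths are placeholders.
    import face_recognition
    from pathlib import Path

    # Reference photo supplied by the person requesting protection.
    reference = face_recognition.load_image_file("reference/protected_person.jpg")
    known_encodings = face_recognition.face_encodings(reference)

    def scan_uploads(upload_dir: str, tolerance: float = 0.6) -> list[str]:
        """Return paths of uploaded images whose faces match the reference."""
        hits = []
        for path in Path(upload_dir).glob("*.jpg"):
            image = face_recognition.load_image_file(str(path))
            for candidate in face_recognition.face_encodings(image):
                if any(face_recognition.compare_faces(known_encodings, candidate,
                                                      tolerance=tolerance)):
                    hits.append(str(path))
                    break
        return hits

    if __name__ == "__main__":
        for match in scan_uploads("flagged_uploads"):
            print(f"possible match, queue take-down notice for: {match}")
    ```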

    • shalafi@lemmy.world · +11 / -3 · 10 months ago

      We saw it coming, we technorati. The average citizen knows exactly enough tech to do their job, Facebook, and email; nothing more.

      You and I are on Lemmy. My page is flooded with Linux posts and memes. The vast majority of people have, at best, a nebulous understanding of what Linux is. I’ll even back up and say the majority have heard of it but can’t identify it as an operating system. Hell, I’ll back up further: most people can’t give an ELI5 of what an OS is, let alone name an example.

      This is news to most. And because it hit a wildly popular star, one who has at least shown herself to seem like a great person, no drama, no immorality, no bullshit, this thing feels so much more unfair and worthy of attention.

      I’m not sure how we legislate this sort of thing. Sounds like you’ve got ideas?

  • Buttons@programming.dev · +30 · 10 months ago (edited)

    “Allowing entities other than us to control AI is dangerous. We must act!”

    – Microsoft, probably

    I have no problem using the law to stop abusive deepfakes, but I do have a problem using the law to take AI away from regular people. Regular people need to be able to run their own AIs. All the worst outcomes involve taking AI away from regular people first.

  • Grimy@lemmy.world · +27 / -4 · 10 months ago (edited)

    I think this kind of stuff should be treated as revenge porn, and Twitter should absolutely get sued for letting it go on.

    Another article mentioned it was up for 17 hours and had 45 million views. I find it hard to believe Twitter didn’t know about it within 15 minutes of it being posted. We also have the tech to know when an image is NSFW and when it includes certain celebrities (a rough sketch of that kind of screening is at the end of this comment).

    That being said, it shouldn’t matter if it was generated, photoshopped or drawn. This is going to get muddied and Microsoft absolutely has a horse in the race.

    Distribution must be legislated, not how it’s created. Microsoft and company want to cut our access to free AI and replace it with a subscription model. They are banking on an emotional response.
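
    A rough sketch of that screening step, assuming the Hugging Face transformers library; the model name used here is one publicly available NSFW-detection checkpoint (an assumption, not a recommendation) and the filename is a placeholder. The celebrity-matching half could be handled like the face-matching sketch further up the thread.

    ```python
    # Hypothetical sketch: score new uploads with an off-the-shelf NSFW image
    # classifier so obvious cases surface in minutes rather than hours.
    # Assumes `pip install transformers pillow torch`; the model name is an assumption.
    from transformers import pipeline

    classifier = pipeline("image-classification",
                          model="Falconsai/nsfw_image_detection")

    def needs_review(image_path: str, threshold: float = 0.8) -> bool:
        """Return True when the classifier is confident the image is NSFW."""
        for result in classifier(image_path):  # list of {"label", "score"} dicts
            if result["label"].lower() == "nsfw" and result["score"] >= threshold:
                return True
        return False

    if __name__ == "__main__":
        print(needs_review("upload_0001.jpg"))  # placeholder filename
    ```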

  • deweydecibel@lemmy.world · +17 / -1 · 10 months ago

    Well of course he did. The way to fight this will involve letting tech companies implement more invasive data harvesti- sorry, “verifying methods” on users.

  • bean@lemmy.world · +15 · 10 months ago

    And of course they only do something because someone rich and famous had it happen to them, not when we’ve been warning for literally years that deepfakes are a danger.

  • Waldowal@lemmy.world · +23 / -10 · 10 months ago (edited)

    Absolutely disgusting! Where would you even find that sort of filth? Be specific.

  • Daxtron2@startrek.website · +12 / -3 · 10 months ago

    It’s quite literally impossible to stop the creation of these at this point, but that doesn’t mean you can’t criminalize their distribution under revenge porn laws.

    • Cosmicomical@lemmy.world · +5 / -4 · 10 months ago

      How is that a solution given you can just as easily circulate a text prompt to generate them directly?

      • Daxtron2@startrek.website · +7 / -1 · 10 months ago

        That wouldn’t be illegal, though; you’d have to criminalize any brief description that includes nudity and a real person, which is a clear violation of free speech.

    • CaptainSpaceman@lemmy.world · +15 / -1 · 10 months ago

      Except that ban would only affect citizens who could use such technology for profit; big companies will absolutely still be allowed to use it.

    • xor@infosec.pub · +19 / -16 · 10 months ago

      yeah! and let’s ban all feelings of sadness too!
      (banning types of ai won’t do much)

        • xor@infosec.pub · +9 / -2 · 10 months ago

          ok:
          attempting to ban either of them would be about equally effective.

      • XEAL@lemm.ee · +5 / -4 · 10 months ago

        Your comment has been raided by the Swifties crew. Have a nice day.

  • romamix@lemmy.ml · +2 · 10 months ago

    I guess nobody learned from the Russian troll factories: spam everyone with too many fakes, and when something real leaks, nobody believes it. I mean, a 2025 iCloud leak has no chance of landing, because the celebs can always say it’s all AI-generated images, and there will be less effort put into distributing them.