Nucleo’s investigation identified accounts with thousands of followers engaging in illegal behavior that Meta’s security systems failed to detect; after being contacted, the company acknowledged the problem and removed the accounts

    • AutomaticButt@lemm.ee · 11 days ago

      The most compelling argument against AI-generated child porn I have heard is that it normalizes it and makes it harder for people to tell whether an image is real or AI. That allows actual children to get hurt when abuse goes unreported or gets skimmed over because someone thought it was AI.

      • yyprum@lemmy.dbzer0.com · 11 days ago

        As a counterpoint, the fact that it is so easy and simple to get those AI images, compared to the risk and extra effort of producing the real thing, could make actual child abuse less common and less profitable for mafias and assholes in general. It’s a really complex topic with no simple, straight answer.

        Normalising it would be horrible and should be avoided, but there will always be some amount of people looking for that content. I’d rather have them using AI to create it than going out searching for the real thing. Prosecuting the AI content is not only very inefficient, it might also be harmful: the only content left would be the real material, whose producers are much harder to catch.

      • Cryophilia@lemmy.world · 11 days ago

        With a set of all images on the internet. Why do you people always think this is a “gotcha”?

          • surewhynotlem@lemmy.world · 11 days ago

            Hey.

            I’ve been in tech for 20 years. I know Python, Java, and C#. I’ve worked with TensorFlow and language models. I understand this stuff.

            You absolutely could train an AI on safe material to do what you’re saying.

            Stable Diffusion and OpenAI have not guaranteed that they trained their AI on safe material.

            It’s like going to buy a burger, and the restaurant says “We can’t guarantee there’s no human meat in here”. At best it’s lazy. At worst it’s abusive.

            • Captain Aggravated@sh.itjust.works · 11 days ago

              I mean, there is no photograph of a school bus with Pegasus wings diving to the Titanic, but I bet one of these AIs can crank out that picture. If it can do that…?

            • Cryophilia@lemmy.world · 11 days ago

              Ok, but by that definition Google should be banned, because their crawler isn’t guaranteed not to pick up CP.

              In my opinion, if the technology involves casting a huge net, and then creating an abstracted product from what is caught in the net, with no steps in between seen by a human, then is it really causing any sort of actual harm?