doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?

This article even explicitly says as much.

My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.

  • BradleyUffner@lemmy.world · 4 days ago

    The AI can generate a picture of cows dancing with roombas on the moon. Do you think it was trained on images of cows dancing with roombas on the moon?

  • frightful_hobgoblin@lemmy.ml · 5 days ago

    a GPT can produce things it’s never seen.

    It can produce a galaxy made out of dog food; doesn’t mean it was trained on pictures of galaxies made out of dog food.

  • justOnePersistentKbinPlease@fedia.io · 5 days ago

    A fun anecdote: my friends and I once tried the then brand-new MS image gen AI built into Bing (for the purpose of a fake Tinder profile, long story).

    The generator kept hitting walls because it had been fed so much porn that the model averaged women to be nude by default. You had to specify what clothes a woman was wearing. Even just “clothed” wasn’t enough; then it defaulted to lingerie or bikinis.

    Not men, though. Men defaulted to being clothed.

  • Ragdoll X@lemmy.world · 4 days ago

    doesn’t it follow that AI-generated CSAM can only be generated if the AI has been trained on CSAM?

    Not quite, since the whole thing with image generators is that they’re able to combine different concepts to create new images. That’s why DALL-E 2 was able to create images of an astronaut riding a horse on the moon, even though it never saw such images, and probably never even saw astronauts and horses in the same image. So in theory these models can combine the concept of porn and children even if they never actually saw any CSAM during training, though I’m not gonna thoroughly test this possibility myself.
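
    Just to make the “combining concepts” point concrete, here’s a minimal sketch using the OpenAI Python client. The model name, prompt and size are placeholders I picked for illustration; nothing here is from the article or specific to any particular training set:

    ```python
    # Hypothetical sketch: prompting an image model for a combination of
    # concepts it almost certainly never saw together during training.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.images.generate(
        model="dall-e-2",  # illustrative; any text-to-image model would do
        prompt="an astronaut riding a horse on the moon",
        n=1,
        size="512x512",
    )
    print(response.data[0].url)  # URL of the generated image
    ```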

    Still, as the article says, since Stable Diffusion is publicly available someone can train it on CSAM images on their own computer specifically to make the model better at generating them. Based on my limited understanding of the litigations that Stability AI is currently dealing with (1, 2), whether they can be sued for how users employ their models will depend on how exactly these cases play out, and if the plaintiffs do win, whether their arguments can be applied outside of copyright law to include harmful content generated with SD.

    My question is: why aren’t OpenAI, Google, Microsoft, Anthropic… sued for possession of CSAM? It’s clearly in their training datasets.

    Well they don’t own the LAION dataset, which is what their image generators are trained on. And to sue either LAION or the companies that use their datasets you’d probably have to clear a very high bar of proving that they have CSAM images downloaded, know that they are there and have not removed them. It’s similar to how social media companies can’t be held liable for users posting CSAM to their website if they can show that they’re actually trying to remove these images. Some things will slip through the cracks, but if you show that you’re actually trying to deal with the problem you won’t get sued.

    LAION actually doesn’t even provide the images themselves, only links to images on the internet, and they do a lot of screening to remove potentially illegal content. As they mention in this article, there was a report showing that 3,226 suspected CSAM images were linked in the dataset, of which 1,008 were confirmed by the Canadian Centre for Child Protection to be known instances of CSAM, while others were potential matches based on further analysis by the report’s authors. As they point out, there are valid arguments that this 3.2K number could be either an overestimate or an underestimate of the true number of CSAM images in the dataset.

    The question then is if any image generators were trained on these CSAM images before they were taken down from the internet, or if there is unidentified CSAM in the datasets that these models are being trained on. The truth is that we’ll likely never know for sure unless the aforementioned trials reveal some email where someone at Stability AI admitted that they didn’t filter potentially unsafe images, knew about CSAM in the data and refused to remove it, though for obvious reasons that’s unlikely to happen. Still, since the LAION dataset has billions of images, even if they are as thorough as possible in filtering CSAM chances are that at least something slipped through the cracks, so I wouldn’t bet my money on them actually being able to infallibly remove 100% of CSAM. Whether some of these AI models were trained on these images then depends on how they filtered potentially harmful content, or if they filtered adult content in general.

  • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org · 5 days ago

    If AI spits out stuff it’s been trained on

    For Stable Diffusion, it really doesn’t just spit out what it’s trained on. Very loosely, it starts from pure noise and then repeatedly denoises it, guided by your prompt (some samplers also add a bit of noise back in at each step), until it converges to a representation of your prompt.
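
    If you’re curious what that loop looks like in code, here’s a heavily stripped-down conceptual sketch built from Hugging Face diffusers components. The model ID, prompt and step count are just illustrative, and classifier-free guidance and other sampler details are omitted, so treat it as a sketch of the idea rather than the actual pipeline:

    ```python
    # Conceptual sketch of the reverse-diffusion loop behind Stable Diffusion.
    # Model ID, prompt and step count are illustrative placeholders.
    import torch
    from transformers import CLIPTextModel, CLIPTokenizer
    from diffusers import AutoencoderKL, UNet2DConditionModel, DDIMScheduler

    model_id = "runwayml/stable-diffusion-v1-5"  # illustrative SD checkpoint
    device = "cuda" if torch.cuda.is_available() else "cpu"

    tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
    text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder").to(device)
    unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet").to(device)
    vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae").to(device)
    scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")

    # Encode the prompt into the embedding that steers every denoising step.
    tokens = tokenizer(["a galaxy made out of dog food"], padding="max_length",
                       max_length=tokenizer.model_max_length, return_tensors="pt")
    with torch.no_grad():
        prompt_embeds = text_encoder(tokens.input_ids.to(device))[0]

    # Start from pure Gaussian noise in the model's latent space.
    latents = torch.randn((1, unet.config.in_channels, 64, 64), device=device)
    scheduler.set_timesteps(30)
    latents = latents * scheduler.init_noise_sigma

    # Walk from high noise to low noise: predict the noise, remove a bit of it,
    # and repeat until the latent converges toward an image matching the prompt.
    for t in scheduler.timesteps:
        latent_input = scheduler.scale_model_input(latents, t)
        with torch.no_grad():
            noise_pred = unet(latent_input, t, encoder_hidden_states=prompt_embeds).sample
        latents = scheduler.step(noise_pred, t, latents).prev_sample

    # Decode the final latent into pixels.
    with torch.no_grad():
        image = vae.decode(latents / vae.config.scaling_factor).sample
    ```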

    IMO your premise is closer to true in practice, but still not strictly true, about large language models.

    • notfromhere@lemmy.ml · 4 days ago

      It’s akin to starting with a virtual block of marble and removing every part (pixel) that isn’t the resulting image. Crazy how it works.

  • Rikudou_Sage@lemmings.world · 5 days ago

    The article is bullshit that wants to stir shit up for more clicks.

    You don’t need a single CSAM image to train AI to make fake CSAM. In fact, if you used the images from the database of known CSAM, you’d get very shit results because most of them are very old and thus the quality most likely sucks.

    Additionally, in another comment you mention that it’s users training their models locally, so that answers your 2nd question of why companies are not sued: they don’t have CSAM in their training dataset.

  • LovableSidekick@lemmy.world · 3 days ago

    My question is why this logic doesn’t apply to anybody who learns anything and goes on to use that knowledge in their work without explicit permission. For example, authors generally learn to be good authors by reading the work of other good authors. Do they morally owe all past authors a share of whatever money they make?

    • ByteJunk@lemmy.world · 3 days ago

      End stage capitalism of the brain, all your ideas are ours and you owe us money for thinking them.

      What a great idea there bud

      • LovableSidekick@lemmy.world · 3 days ago

        “All your ideas are ours and you owe us money for thinking them” is actually a good summary of anti-AI sentiment.

        • Dkarma@lemmy.world · 2 days ago

          “pay me cuz your robot looked at the content I put on the internet and made free to view!!!”

  • Free_Opinions@feddit.uk · 5 days ago

    First of all, it’s by definition not CSAM if it’s AI generated. It’s simulated CSAM - no people were harmed doing it. That happened when the training data was created.

    However it’s not necessary that such content even exists in the training data. Just like ChatGPT can generate sentences it has never seen before, image generators can also generate pictures they have never seen before. Of course the results will be more accurate if that’s what the model has been trained on, but it’s not strictly necessary. It just takes a skilled person to write the prompt.

    My understanding is that the simulated CSAM content you’re talking about has been made by people running their software locally and having provided the training data themselves.

    • Buffalox@lemmy.world · 5 days ago

      First of all, it’s by definition not CSAM if it’s AI generated. It’s simulated CSAM

      This is blatantly false. It’s also illegal, and you can go to prison for owning, selling or making child Lolita dolls.

      I don’t know why this is the legal position in most places, because, as you mention, no one is harmed.

        • Buffalox@lemmy.world · 5 days ago

          CSAM = Child sexual abuse material
          Even virtual material is still legally considered CSAM in most places. Although no children were hurt, it’s a depiction of it, and that’s enough.

          • Free_Opinions@feddit.uk · 5 days ago

            Being legally considered CSAM and actually being CSAM are two different things. I stand behind what I said, which wasn’t legal advice. By definition it’s not abuse material, because nobody has been abused.

            • Buffalox@lemmy.world · 5 days ago

              There’s a reason it’s legally considered CSAM: as I explained, it is material that depicts it.
              You can’t have your own facts, especially not contrary to what’s legally determined, because that means your definition or understanding is actually ILLEGAL if you act based on it.

              • Free_Opinions@feddit.uk · 5 days ago

                I already told you that I’m not speaking from a legal point of view. CSAM means a specific thing, and AI-generated content doesn’t fit under that definition. The only way to generate CSAM is by abusing children and taking pictures/videos of it. AI content doesn’t count any more than stick-figure drawings do. The justice system may not differentiate the two, but that is not what I’m talking about.

                • Buffalox@lemmy.world · 5 days ago

                  The only way to generate CSAM is by abusing children and taking pictures/videos of it.

                  Society has decided otherwise; as I wrote, you can’t have your own facts or definitions. You might as well claim that in traffic red means go, because you have your own interpretation of how traffic lights should work.
                  Red is legally decided to mean stop, so that’s how it is; that’s how our society works, by definition.

    • ExtremeDullard@lemmy.sdf.org (OP) · 5 days ago

      it’s by definition not CSAM if it’s AI generated

      Tell that to the judge. People caught with machine-made imagery go to the slammer just as much as those caught with the real McCoy.

  • ChaoticNeutralCzech@feddit.org · 5 days ago

    It probably won’t yield good results for the literal query “child porn” because such content on the open web is censored, but I’m pretty sure degenerates know workarounds such as “young, short, naked, flat chested, no pubic hair”, all of which exist plentifully in isolation. Just my guess, I haven’t tried of course.

  • YungOnions@lemmy.world · 5 days ago

    Sexton says criminals are using older versions of AI models and fine-tuning them to create illegal material of children. This involves feeding a model existing abuse images or photos of people’s faces, allowing the AI to create images of specific individuals. “We’re seeing fine-tuned models which create new imagery of existing victims,” Sexton says. Perpetrators are “exchanging hundreds of new images of existing victims” and making requests about individuals, he says. Some threads on dark web forums share sets of faces of victims, the research says, and one thread was called: “Photo Resources for AI and Deepfaking Specific Girls.”

    The model hasn’t necessarily been trained on CSAM; rather, you can create things called LoRAs, which help influence the image output of a model so that it’s better at producing very specific content that it may have struggled with before. For example, I downloaded some recently that help Stable Diffusion create better images of Battleships from Warhammer 40k. My guess is that criminals are creating their own versions for kiddy porn etc.
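
    For what it’s worth, applying a LoRA on top of a base model is only a couple of lines with the diffusers library. Here’s a rough sketch; the base model ID, adapter repo and file name are made-up placeholders standing in for something like the Warhammer adapter described above:

    ```python
    # Hypothetical sketch: loading a LoRA adapter onto a Stable Diffusion
    # pipeline so it gets better at one narrow subject. All IDs below are
    # placeholders, not real repositories. Assumes a CUDA GPU.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative base model
        torch_dtype=torch.float16,
    ).to("cuda")

    # A LoRA only stores small low-rank weight updates, so it's a few MB to a
    # few hundred MB instead of a multi-gigabyte retrained model.
    pipe.load_lora_weights(
        "someuser/wh40k-battleship-lora",            # placeholder repo ID
        weight_name="wh40k_battleship.safetensors",  # placeholder file name
    )

    image = pipe("a Warhammer 40k battleship in orbit, highly detailed").images[0]
    image.save("battleship.png")
    ```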

  • southsamurai@sh.itjust.works · 5 days ago

    I think you misunderstand what’s happening.

    It isn’t that, as an example to represent the idea, OpenAI is training their models on kiddie porn.

    It’s that people are taking AI software and then training it on their existing material. The Wired article even specifically says they’re using older versions of the software to bypass safeguards that are in place to prevent it now.

    This isn’t to say that any of the companies involved in offering generative software don’t have such imagery in the data used to train their models. But they wouldn’t have to possess it for it to be in there. Most of those assholes just grabbed giant datasets and plugged them in. They even used scrapers for some of it. So all it would take is them accessing some of it unintentionally for their software to end up able to generate new material. They don’t need to store anything once the software is trained.

    Currently, all of them have some degree of prevention in their products to stop them being used for that. How good those protections are, I have zero clue. But they’ve all made noises about it.

    But don’t forget, one of the earlier iterations of software designed to identify kiddie porn was trained on seized materials. The point of that is that there are exceptions to possession. The various agencies that investigate sexual abuse of minors tend to keep materials because they need it to track down victims, have as evidence, etc. It’s that body of data that made detection something that can be automated. While I have no idea if it happened, it wouldn’t be surprising if some company or another did scrape that data at some point. That’s just a tangent rather than part of your question.

    So, the reason that they haven’t been “sued” is that they likely don’t have any materials to be “sued” for in the first place.

    Besides, not all generated materials are made based on existing supplies. Some of it is made akin to a deepfake, where someone’s face is pasted onto a different body. So, they can take materials of perfectly legal adults that look young, slap real or fictional children’s faces onto them, and have new stuff to spread around. That doesn’t require any original material at all. You could, as I understand it, train a generative model on that and it would turn out realistic, fully generative materials. All of that is still illegal, but it’s created differently.

  • DragonsInARoom@lemmy.world · 5 days ago

    I would imagine that AI-generated CSAM can be “had” in big-tech AI in two ways: contamination, and training from an analogue. Contamination would be the AI’s training passes using data that has been introduced into an otherwise uncontaminated training pool (not introducing raw CSAM material directly). Training from analogous data is what the name states: get as close to the CSAM material as possible without raising eyebrows. Or criminals could train off of “fresh” CSAM unknown to law enforcement.

  • Battle Masker@lemmy.world · 5 days ago

    Those are big companies. They have more legal protection than anyone in the world, and money, if judges/law enforcement still consider moving a case forward.

  • bokherif@lemmy.world · 5 days ago

    Grok literally says it would protect 1 Jewish person’s life over 1 million non-Jewish people. Wonder what they are training that shit on lol.