A Florida man is facing 20 counts of obscenity for allegedly creating and distributing AI-generated child pornography, a case that highlights how readily generative AI can be put to nefarious use.

Phillip Michael McCorkle was arrested last week while working at a movie theater in Vero Beach, Florida, according to TV station CBS 12 News. A crew from the station captured the arrest, producing dramatic footage of officers leading McCorkle, still in his work uniform, out of the theater in handcuffs.

  • Nollij@sopuli.xyz · +77/-5 · 4 months ago

    This creates a significant legal issue: AI-generated images have no age, nor is there consent.

    The difference in appearance between age 16 and 18 is minimal, but the legal difference is immense. This is based entirely on a concept that cannot apply.

    How do you define what depicts a fictional child, especially without sweeping in real adults? I’ve met people who believe that preferring a shaved pubic area is pedophilia, even though the vast majority of adult women shave. On the flip side, teenagers from the 70s and 80s would be mistaken for 40+ today.

    Even the extremes aren’t clear. Adult star “Little Lupe”, who was 18+ in every single appearance, lacked most secondary sex characteristics. Experts testified in court that she could not possibly be an adult. Except she was, and there’s full documentation to prove it. Would AI trained exclusively on her work be producing CSAM?

    • CeruleanRuin@lemmings.world · +6/-31 · 4 months ago

      To paraphrase someone smarter than me, “I’ll know it when I see it.”

      But naturally I don’t want to see it. One of the things I miss the least about reddit is the constant image posts of anime characters, who may be whatever age the canon claims but are clearly drawn as very young girls with big tiddies bolted on. It’s gross, but it’s also a problem that’s more widespread and nebulous than most people are willing to admit.

      • Xatolos@reddthat.com · +88/-1 · 4 months ago

        “I’ll know it when I see it.”

        I can’t think of anything scarier than that when dealing with the legality of anything.

        • lightnsfw@reddthat.com · +23 · 4 months ago

          I’m nearly 40 and still regularly get carded while other people out with me do not, so it’s not just “we card everyone”. People are bad at judging age.

          • LustyArgonian@lemmy.world · +2 · 4 months ago

            Are you in an age gap relationship? I find that my ID gets checked more if I’m in any kind of age gap, I assume due to curiosity. Do you get carded the same amount if you are alone?

            • lightnsfw@reddthat.com · +2 · 4 months ago

              Nope, I’m not in a relationship at all. This is while out with friends, but I’m usually the first one there, or with one other dude who is about the same age as me, so nothing to speak of there. It happens regardless of whether I’m alone or not. I got carded at the grocery store checkout last week and she seemed genuinely shocked at my age.

      • Nollij@sopuli.xyz · +15/-1 · 4 months ago

        Just when trying to guess someone’s age (we’ll assume completely family-friendly and above board), think back to high school. How old did you and your peers look? Now go take a look at high schoolers today. They probably seem a lot younger than you did. The longer it’s been (i.e. the older you are), the younger they look. Which means, “when I see it” depends entirely on the age of the viewer.

        This isn’t even just about perception and memory: modern style is based on and heavily influenced by youth, and it keeps moving in that direction. This is why actors in their 30s, with carefully managed hair, skin, makeup, and wardrobe, have been able to convincingly portray high schoolers. So it’s not just you: teens really are looking younger each year. But they’re still the same age.

        • LustyArgonian@lemmy.world · +4/-1 · 4 months ago

          Wtf. Style is what makes kids look young or old to us, because we’ve been heavily marketed to and follow trends. That’s why when the mullet/porn-stache style came back, those Dahmer kids looked like they were in their 40s.

          You’re getting older each year, so teens look younger to you.

          Name even one actor in their thirties who convincingly played a high schooler. Literally who?

  • macniel@feddit.org · +86/-24 · 4 months ago

    I don’t see how children were abused in this case? It’s just AI imagery.

    It’s the same as saying that people get killed when you play first person shooter games.

    Or that you commit crimes when you play GTA.

        • timestatic@feddit.org · +16/-1 · 4 months ago

          But this is the US… and it’s kind of a double standard if you’re not arrested for drawing it but are for generating it.

            • ContrarianTrail@lemm.ee · +2/-1 · 4 months ago

              The core reason CSAM is illegal is not because we don’t want people to watch it but because we don’t want them to create it, which is synonymous with child abuse. Jailing someone for drawing a picture like that is absurd. While it might be in bad taste, there is no victim there. No one was harmed. Using generative AI is the same thing. No matter how much simulated CSAM you create with it, not a single child is harmed in doing so. Jailing people for that is the very definition of a moral panic.

              Now, if actual CSAM was used in the training of that AI, then it’s a more complex question. However, it is a fact that such content doesn’t need to be in the training data for the model to create simulated CSAM, and as long as that is the case, it is immoral to punish people for creating something that only looks like it but isn’t.

                • ContrarianTrail@lemm.ee · +1 · 4 months ago

                  Sure, but the same argument could be made about violent movies/games/books… It’s a rather slippery slope, and as far as I know there doesn’t seem to be a correlation between violent games and real-life violence; in fact, I believe the correlation is negative.

    • CeruleanRuin@lemmings.world · +5/-14 · 4 months ago

      Not a great comparison, because unlike with violent games or movies, you can’t say that there is no danger to anyone in allowing these images to be created or distributed. If they are indistinguishable from the real thing, it becomes impossible to identify actual human victims.

      There’s also a strong argument that the availability of imagery like this only encourages behavioral escalation in people who suffer from the affliction of being a sick fucking pervert pedophile. It’s not methadone for them, as some would argue. It’s just fueling their addiction, not replacing it.

    • TallonMetroid@lemmy.world · +11/-46 · 4 months ago

      Well, the image generator had to be trained on something first in order to spit out child porn. While it may be that the training set was solely drawn/rendered images, we don’t know that, and even if the output were in that style, it might very well be photorealistic images generated from real child porn and run through a filter.

        • Saledovil@sh.itjust.works · +5 · 4 months ago

          Wild corn dogs are an outright plague where I live. When I was younger, me and my buddies would lay snares to catch corn dogs. When we caught one, we’d roast it over a fire to make popcorn. Corn dog cutlets served with popcorn from the same corn dog is a popular meal, especially among the less fortunate, even though some of the affluent consider it the equivalent of eating rat meat. When me pa got me first rifle when I turned 14, I spent a few days just shooting corn dogs.

        • emmy67@lemmy.world · +1/-2 · 4 months ago

          It didn’t generate what we expect and know a corn dog is.

          Hence it missed, because it doesn’t know what a “corn dog” is.

          You have proven the point that it couldn’t generate CSAM without some being present in the training data.

          • ContrarianTrail@lemm.ee · +3/-2 · 4 months ago

            I hope you didn’t seriously think the prompt for that image was “corn dog” because if your understanding of generative AI is on that level you probably should refrain from commenting on it.

            Prompt: Photograph of a hybrid creature that is a cross between corn and a dog
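
            For anyone curious, here is a minimal sketch of that kind of compositional prompting, using the open-source diffusers library; the checkpoint and settings are illustrative assumptions, not whatever actually produced that image:

                # Minimal sketch: compositional prompting with a text-to-image model
                # via Hugging Face diffusers. The checkpoint choice is an assumption.
                import torch
                from diffusers import StableDiffusionPipeline

                pipe = StableDiffusionPipeline.from_pretrained(
                    "runwayml/stable-diffusion-v1-5",
                    torch_dtype=torch.float16,
                ).to("cuda")

                # The prompt combines two concepts the model learned separately
                # ("corn", "dog"); no photo of the hybrid needs to exist anywhere
                # in the training data.
                prompt = ("Photograph of a hybrid creature that is a cross "
                          "between corn and a dog")
                image = pipe(prompt, num_inference_steps=30).images[0]
                image.save("corn_dog.png")

            The point is composition: the model blends concepts it already knows rather than retrieving some stored training image.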

            • emmy67@lemmy.world · +1/-2 · 4 months ago

              Then if your question is “how many photographs of a hybrid creature that is a cross between corn and a dog were in the training data?”

              I’d honestly say: I don’t know.

              And if you’re honest, you’ll say the same.

              • ContrarianTrail@lemm.ee · +3/-1 · 4 months ago

                But you do know, because corn dogs as depicted in the picture do not exist, so there couldn’t have been photos of them in the training data. Yet it was still able to create one when asked.

                This is because it doesn’t need to have seen one before. It knows what corn looks like and it knows what a dog looks like, so when you ask it to combine the two, it will gladly do so.

                • emmy67@lemmy.world · +1/-1 · 4 months ago

                  But you do know, because corn dogs as depicted in the picture do not exist, so there couldn’t have been photos of them in the training data. Yet it was still able to create one when asked.

                  Yeah, except photoshop and artists exist. And a quick google image search will find them. 🙄

      • MagicShel@programming.dev · +41/-3 · 4 months ago

        An AI that is trained on children and nude adults can infer what a nude child looks like without ever being trained specifically with those images.

              • Cryophilia@lemmy.world · +3/-1 · 4 months ago

                No, I’m admitting they’re stupid for even bringing it up.

                Unless their argument is that all AI should be illegal, in which case they’re stupid in a different way.

                • LustyArgonian@lemmy.world · +1/-2 · 4 months ago

                  Do you think regular child porn should be illegal? If so, why?

                  Generally it’s because kids were harmed in the making of those images. Since we know that AI is using images of children being harmed to make these images, as the other posters have repeatedly sourced (but also, if you’ve looked up deepfakes, most deepfakes are an existing porn video with a new face swapped over the top; they do this with CP as well and must use CP videos to seed it, because the adult model would be too large)… why does AI get a pass for using children’s bodies in this way? Why isn’t it immoral when AI is used as a middleman to abuse kids?

          • LustyArgonian@lemmy.world · +1/-3 · 4 months ago

            Yes, exactly. People who excuse this with “well, it was trained on all public images” are just admitting you’re right and that there is a level of harm here, since real materials are used. Even if they weren’t being used, or if it was just a cartoon, the morality is still shaky because of the role porn plays in advertising. We already have laws about advertising because it’s so effective, including around cigarettes and prescriptions. Most porn, ESPECIALLY FREE PORN, is an ad to get you to buy other services. CP is not excluded from this rule; no one gets a free lunch, so to speak. These materials are made and hosted for a reason.

            The role that CP plays in most countries is difficult. It is used for blackmail. It is also used to generate money for countries (intelligence groups around the world host illegal porn ostensibly “to catch a predator,” but then why is it morally okay for them to distribute these images but no one else?). And it’s used as advertising for actual human trafficking organizations. And similar organizations exist for snuff and gore btw. And ofc animals. And any combination of those 3. Or did you all forget about those monkey torture videos, or the orangutan who was being sex trafficked? Or Daisy’s Destruction and Peter Scully?

            So it’s important to not allow these advertisers to combine their most famous monkey torture video with enough AI that they can say it’s AI generated, but it’s really just an ad for their monkey torture productions. And even if NONE of the footage was from illegal or similar events and was 100% thought of by AI - it can still be used as an ad for these groups if they host it. Cartoons can be ads ofc.

      • lunarul@lemmy.world · +15/-1 · 4 months ago

        “we don’t know that”

        “might”

        Unless you’re operating under “guilty until proven innocent”, those are not reasons to accuse someone.

    • KillerTofu@lemmy.world · +11/-50 · 4 months ago

      How was the model trained? Probably using existing CSAM images. Those children are victims. Making derivative images of “imaginary” children doesn’t negate the exploitation of children all the way down.

      So no, you are making a false equivalence with your video game metaphors.

      • fernlike3923@sh.itjust.works · +47/-1 · 4 months ago

        A generative AI model doesn’t require the exact thing it creates to be in its dataset. It most likely just combined regular nudity with a picture of a child.

        • finley@lemm.ee · +6/-21 · 4 months ago

          In that case, the images of children were still used without their permission to create the child porn in question

          • MagicShel@programming.dev · +27/-4 · 4 months ago

            That’s not really a nuanced take on what is going on. A bunch of images of children are studied so that the AI can learn how to draw children in general. The more children in the dataset, the less any one of them influences or resembles the output.

            Ironically, you might have to train an AI specifically on CSAM in order for it to identify the kinds of images it should not produce.
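
            As an aside, that is roughly what existing open-source pipelines already do: a separately trained classifier screens every generated image. A minimal sketch with the diffusers library, where the checkpoint is an illustrative assumption, not what any particular service runs:

                # Sketch: the pipeline loads a learned safety classifier alongside
                # the image generator and runs it over each output image.
                from diffusers import StableDiffusionPipeline

                pipe = StableDiffusionPipeline.from_pretrained(
                    "runwayml/stable-diffusion-v1-5"
                )  # the safety_checker component is included and enabled by default

                result = pipe("a photo of a dog on a beach")
                # One boolean per image; flagged images are returned blacked out.
                print(result.nsfw_content_detected)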

            • finley@lemm.ee · +6/-19 · 4 months ago

              Why does it need to be “nuanced” to be valid or correct?

              • TheRealKuni@lemmy.world · +28/-2 · 4 months ago

                Because the world we live in is complex, and rejecting complexity for a simple view of the world is dangerous.

                See You Can’t Get Snakes from Chicken Eggs from the Alt-Right Playbook.

                (Note I’m not accusing you of being alt-right. I’m saying we cannot ignore nuance in the world because the world is nuanced.)

                • finley@lemm.ee · +2/-16 · 4 months ago

                  We’re not talking about snakes or chicken eggs, but thanks for the strawman

          • fernlike3923@sh.itjust.works · +6/-2 · 4 months ago

            That’s a whole other issue than the AI model being trained on CSAM. I’m currently neutral on this topic, so I’d recommend replying to the main thread.

              • fernlike3923@sh.itjust.works · +12/-2 · 4 months ago

                It’s not CSAM in the training dataset, it’s just pictures of children/people that are already publicly available. That goes to the copyright side of AI issues rather than to illegal training material.

                • finley@lemm.ee · +2/-29 · 4 months ago

                  It’s images of children being used to make CSAM. No amount of mental gymnastics can change that, nor the fact that those children’s consent was not obtained.

                  Why are you trying so hard to rationalize the creation of CSAM? Do you actually believe there is a context in which CSAM is OK? Are you that sick and perverted?

                  Because it really sounds like that’s what you’re trying to say, using copyright law as an excuse.

          • CeruleanRuin@lemmings.world · +7/-5 · 4 months ago

            Good luck convincing the AI advocates of this. They have already decided that all imagery everywhere is theirs to use however they like.

      • macniel@feddit.org · +21/-1 · 4 months ago

        Can you or anyone verify that the model was trained on CSAM?

        Besides, an LLM doesn’t need explicit content to derive from in order to create a naked child.

        • KillerTofu@lemmy.world · +4/-28 · 4 months ago

          You’re defending the generation of CSAM pretty hard here, with some vague “no child we know of was involved” as a defense.

          • macniel@feddit.org · +12/-2 · 4 months ago

            I just hope that the models aren’t trained on CSAM. That would make the stuff they generate to fap to ““ethically reasonable””, as no children would be involved. And I hope that those who have those tendencies can be helped one way or another, in a way that doesn’t involve chemical castration or incarceration.

      • Diplomjodler@lemmy.world · +8 · 4 months ago

      While I wouldn’t put it past Meta & Co. to explicitly seek out CSAM to train their models on, I don’t think that is how this stuff works.

      • grue@lemmy.world · +3/-8 · 4 months ago

        But the AI companies insist the outputs of these models aren’t derivative works in any other circumstances!

  • JaggedRobotPubes@lemmy.world · +68/-11 · 4 months ago

    Do we know that AI child porn is bad? I could believe it would get them in the mood for the real thing and make them do it more, and I could believe it would make them go “ok, itch scratched”, and tank the demand for the real stuff.

    Depending on which way it goes, it could be massively helpful for protecting kids. I just don’t have a sense for what the effect would be, and I’ve never seen any experts weigh in.

    • ObjectivityIncarnate@lemmy.world · +34/-2 · 4 months ago

      Do we know that AI child porn is bad? I could believe it would get them in the mood for the real thing and make them do it more, and I could believe it would make them go “ok, itch scratched”, and tank the demand for the real stuff.

      From bits/articles I’ve seen here and there over the years about other things that are kind of in the same category (porn comics with child characters in them, child-shaped sex dolls), the latter seems to be more the case.

      I’m reminded of when people were arguing that when Internet porn became widespread, the incidence of rape would go through the roof. And then literally the opposite happened. So…that pushes me toward hypothesizing that the latter is more likely to be the case, as well.

    • PhilMcGraw@lemmy.world · +18 · 4 months ago

      In Australia, cartoon child porn is prosecuted in the same way as actual child porn. Not that it answers your question, but it’s interesting.

      I’d imagine for your question “it depends”: some people who would have acted on their urges may get their jollies from AI child porn; others who had never considered themselves pedophiles might find AI child porn (assuming it were legal) and realise it’s something they’re into.

      I guess it may lower the production of real child porn, which feels like a good thing. I’d hazard a guess that there are way more child porn viewers than child abusers.

      • redfellow@sopuli.xyz · +8 · 4 months ago

        In Australia a 30-year-old woman cannot be in the porn industry if she has small breasts. That and the cartoon ban both seem like overcompensating.

    • Thespiralsong@lemmy.world · +15 · 4 months ago

      I seem to remember Sweden did a study on this, but I don’t really want to google around to find it for you. Good luck!

    • Cryophilia@lemmy.world · +14/-3 · 4 months ago

      Real question: “do we care if AI child porn is bad?” Based on most countries’ laws, no.

    • barsquid@lemmy.world · +16/-6 · 4 months ago

      I’d like to know what psychologists think about it. My assumption is the former: it escalates their fantasizing about it and makes them more likely to attack a child.

      There seems to be no way to conduct that experiment ethically, though.

    • ZILtoid1991@lemmy.world · +9 · 4 months ago

      There’s like a lot of layers to it.

      • For some, it might actually work in the opposite direction, especially if paired with the wrong kind of community around it. I used to moderate anime communities, and the number of loli fans wanting to lower the age of consent to 12 or even lower was way too high, yet they called people opposed to loli the “real predators”, because they liked their middle-school-tier arguments (which just further polarized the fandom when the culture wars started).
      • Even worse, the more realistic depictions might actually work against that goal, while with (most) loli stuff, at least it’s obvious it’s drawn.
      • An often overlooked issue is data laundering: just call your real CP AI-generated, or add some GAI artifacts to your collection. Hungary bans overly realistic drawings and paintings of that kind, because people did exactly that with traditional means, creating tracings as realistic as possible (calling CP “artistic nudes” didn’t work out here, at least).
    • mckean@programming.dev · +4 · 4 months ago

      There definitely is opportunity in controlled treatment. But outside of that, I believe there are too many unknowns.

    • Maggoty@lemmy.world · +13/-10 · 4 months ago

      You’re missing the point. They don’t care what’s more or less effective for helping kids. They want to punish people who are different. In this case nobody is really going to step up to defend the guy for obvious reasons. But the motivating concept is the same for conservatives.

    • 31337@sh.itjust.works · +3 · 4 months ago

      Wikipedia seems to suggest research is inconclusive whether consuming CSAM increases the likelihood of committing abuse.

    • Pankkake@lemmy.world · +5/-25 · 4 months ago

      Depending on which way it goes, it could be massively helpful for protecting kids

      Weeeelll, only until the AI model needs more training material…

      • Saledovil@sh.itjust.works · +5/-1 · 4 months ago

        You need more training material to train a new AI. Once the AI is there, it can produce as many pictures as you want. And you can get good results even with models that can be run locally on a regular computer.

      • JaggedRobotPubes@lemmy.world · +3/-6 · 4 months ago

        I’m not sure that’s how it would work, but this is exactly the kind of thinking we need. Effects: intended plus unintended equals ???

  • BonesOfTheMoon@lemmy.world · +49/-2 · 4 months ago

    Could this be considered a harm reduction strategy?

    Not that I think CSAM is good in any way, but if it saves a child would it be worthwhile? Like if these pedos were to use AI images instead of actual CSAM would that be any better?

    I’ve read that CSAM sites on the dark web number into the hundreds of thousands. I just wonder if it would be a less harmful thing since it’s such a problem.

    • RandomlyNice@lemmy.world · +38/-3 · 4 months ago

      Many years ago (about 25) I read an article in a newspaper (idk the name, but it may have been The Computer Paper, which is archived online someplace). This article noted that a study had been commissioned to show that CP access increases child abuse. The study seemed to show the opposite.

      Here’s the problem with even AI-generated CP: it might lower abuse in the beginning, but with increased access it would “normalise” the perception of such conduct. This would likely increase abuse over time, even involving persons who may not have been so inclined otherwise.

      This is all very complex. A solution isn’t simple. Shunning things in any way won’t help though, and that seems to be the currently most popular way to deal with the issue.

      • Facebones@reddthat.com · +23/-2 · 4 months ago

        Actual pedophiles (a lot of CSA is abuse of power, not pedophilia, though to be clear, fuck abusers either way) have a high rate of suicidal ideation because they think it’s as fucked up as everyone else does. Of course we can’t just say “sure, AI material is legal now”, but I could imagine a regulated system accessed via doctors, akin to how controlled substances work.

        People take this firm “kill em all” stance, but these people just feel the way they do, same as I do towards women or a gay man does toward men. It just is what it is: we all generally agree gay isn’t a choice, and this is no different. As long as they don’t act on it, I think we should be sympathetic and be open to helping them live a less tortured life.

        I’m not 100% saying this is how we do it, but we should be open to exploring the issue instead of full stop demonization.

        • HonorableScythe@lemm.ee · +11/-1 · 4 months ago

          Dan Savage coined the term “gold star pedophile” in a column years ago, referring to people who acknowledge their attraction to children but never act on it by harming a child or accessing CSAM. I do feel bad for these people because there are no resources to help them. The only way they can access actual therapeutic resources for their condition is by offending and going to jail. If the pedophile goes to a therapist and confesses attraction to children, therapists are mandated reporters and will assume they’re going to act on it. An article I read a few years back interviewed members of an online community of non-offending pedophiles who essentially made their own support group since no one else will help them, and nearly all research on them is from a forensic (criminal) context.

          There’s a pretty good article by James Cantor talking about dealing with pedophiles in a therapeutic context here.

          Don’t get me wrong - I think offenders need to be punished for what they do. I unfortunately have a former best friend who has offended. He’s no longer in my life and never will be again. But I think we could prevent offenders from reaching that point and hurting someone if we did more research and found ways to stop them before it happened.

      • Cryophilia@lemmy.world · +18 · 4 months ago

        “Normalized” violent media doesn’t seem to have increased the prevalence of real world violence.

        • Spacehooks@reddthat.com · +4 · 4 months ago

          I actually think video games reduce crime in general. Bad kids are now indoors getting their thrills.

    • pregnantwithrage@lemmy.world · +12/-1 · 4 months ago

      You would think so, but you’re basically making a patchwork version of the actual illicit media, so it’s a dark, dark gray area for sure.

    • JovialMicrobial@lemm.ee · +6 · 4 months ago

      I guess my question is does access to regular porn make people not want to have real sex with another person? Does it ‘scratch the itch’ so to speak? Could they go the rest of their life with only porn to satisfy them?

      It depends on the person. I feel like most people would be unsatisfied with only porn, but that’s just anecdotal.

      I honestly think AI-generated CSAM isn’t something the world needs produced. It’s not contributing to society in any meaningful way, and pedophiles who don’t offend or hurt children need therapy, while the ones who do need jail time (and therapy, but I’m in the US so that’s a whole other thing). They don’t ‘need’ porn.

      My own personal take is that giving pedophiles csam that’s AI generated is like showing alcohol ads to alcoholics. Or going to the strip club if you’re a sex addict. It’s probably not going to lead to good outcomes.

    • xta@lemmy.world · +2/-3 · 4 months ago

      By the same metric, I wonder why we don’t let convicted murderers and psychopaths work at slaughterhouses.

    • Phoenixz@lemmy.ca · +1 · 4 months ago

      If from now on all child porn would be created artificially instead of by abusing children, wouldn’t that maybe be a good thing?

      Not trying to defend anything here, but where there is a want in the world, there is a market; you can’t stop that. If artificial material means even one less child is abused, I think it’s worth having a discussion at least.

  • hexdream@lemmy.world · +22 · 4 months ago

    If this thread (and others like it) has taught me anything, it’s that facts be damned: people are opinionated either way. Nuance means nothing, and it’s basically impossible to have a proper discussion when it comes to wedge issues or anything that can be used to divide people. Even if every study said, 100%, that AI-generated CSAM always led to a reduction in actual child harm, reduced recidivism, and never needed any real children as training material, the comments would still pretty much look the same. If the studies showed the exact opposite, the comments would also be the same. Welcome to the internet. I hope you brought aspirin.

    • Eezyville@sh.itjust.works · +8 · 4 months ago

      My man. Go touch some grass. This place is no good. Not trying to insult you but it’s for your mental health. These Redditors aren’t worth it.

      • SynopsisTantilize@lemm.ee · +2 · 4 months ago

        A lot of the places where I’ve tried to start conversations have been hostile and painful. If there is one thing that stands out that’s holding Lemmy back, it’s the shitty culture this place can breed.

        • NauticalNoodle@lemmy.ml · +2 · 4 months ago

          I’m convinced that a lot can be inferred from the type of reactions and the level of hostility one receives when trying to present a calm and nuanced argument on a wedge topic, even if it’s not always enjoyable. At the very least it also shows others that they may not be interacting with rational actors when one gets their opponents to go full mask-off.

          • SynopsisTantilize@lemm.ee · +2 · 4 months ago

            Agreed. And I’ve had my share of “being a dick” on the Internet here. But by the end of the interaction I try to at least jest. Or find a middle ground…I commented on a Hexbear instance by accident once…

          • SynopsisTantilize@lemm.ee · +1 · 4 months ago

            I accidentally went to Hexbear the other day… But yea I guess. Just wish there was more participation and less negativity

    • NauticalNoodle@lemmy.ml · +3 · 4 months ago

      I was hoping to comment on this post multiple times today after I initially lost track of it, and now I see you’ve covered about 75% of what I wanted to say. I’ll post the rest elsewhere out of politeness. Thank you.

  • recapitated@lemmy.world · +25/-6 · 4 months ago

    To be clear, I am happy to see a pedo contained and isolated from society.

    At the same time, this direction of law is something that I don’t feel I have the sophistication to truly weigh in on, even though it invokes so many thoughts for me.

    I hope we as a society get this one right.

  • hightrix@lemmy.world · +16/-3 · 4 months ago

    He wasn’t arrested for creating it, but for distribution.

    If dude just made it and kept it privately, he’d be fine.

    I’m not defending child porn with this comment.

  • NauticalNoodle@lemmy.ml · +9/-1 · 4 months ago

    Show me multiple (let’s say 3+) small-scale independent academic studies, or 1-2 comprehensive and large academic studies, that support one side or the other, and I may be swayed. Otherwise, I think all that is being accomplished is that one guy’s life is getting completely ruined, for now and potentially forever, over some fabrications; as a result he may or may not get help, but I doubt he’ll be better off.

    My understanding was that CSAM has its legal status specifically because there are victims who are hurt by these crimes, and possession supports a broader market that facilitates said harm to those victims. It’s not as easy to make a morality argument (especially a good one) for laws that affect everybody when there are no known victims.

    • emmy67@lemmy.world · +5/-7 · 4 months ago

      Are you stupid? Something has to be in the training data for any generation to be possible. This is just a new way to revictimise kids.

      • NauticalNoodle@lemmy.ml · +3/-1 · 4 months ago

        So are you suggesting they can get an unaltered facial ID of the kids in the images? Because that makes it regular CSAM with a specific victim (as mentioned), not an AI-generated illustration.

        • emmy67@lemmy.world · +1/-6 · 4 months ago

          No, I am telling you CSAM images can’t be generated by an algorithm that hasn’t been trained on CSAM.

          • NauticalNoodle@lemmy.ml · +6/-2 · 4 months ago

            That’s patently false.

            I’m not going to keep entertaining this discussion; instead, I’ll direct you to the multiple other people who have already effectively disproven this argument, and similar arguments, elsewhere in this post’s discussion. Enjoy.

      • ameancow@lemmy.world · +2 · 4 months ago

        Not necessarily, AI can do wild things with combined attributes.

        That said, I do feel very uncomfortable with the amount of defense of this guy: he was distributing this to people. If he were just generating fake images of fake people using legal training data in his own house for his own viewing, that would be a different story. The number of people jumping in front of the bullet for this guy when we don’t really know the details is the larger problem.

  • spicystraw@lemmy.world · +22/-15 · 4 months ago

    I must admit, the number of comments defending AI images as not being child porn is truly shocking.

    In my book, sexual images of children are not okay, AI generated or otherwise. Pedophiles need help, counseling and therapy. Not images that enable something I think is not acceptable in society.

    I truly do believe that AI images should be subject to same standards as regular images in what content we deem appropriate or not.

    Yes, this can be used to wrongfully prosecute innocent people, but it does not mean that we should freely allow AI-CP.

    • 31337@sh.itjust.works · +11 · 4 months ago

      I generally think if something is not causing harm to others, it shouldn’t be illegal. I don’t know if “generated” CSAM causes harm to others though. I looked it up and it appears the research on whether CSAM consumption increases the likelihood of a person committing child abuse is inconclusive.

    • filcuk@lemmy.zip · +9/-3 · 4 months ago

      Agreed, especially considering it will eventually become indistinguishable.

    • OutsizedWalrus@lemmy.world · +5/-4 · 4 months ago

      You’re not kidding.

      The only possible way I could see a defense is if it were something like “AI CSAM results in a proven reduction of actual CSAM”.

      But. The defenses aren’t even that!

      They’re literally saying that CSAM is okay. I’m guessing a lot of these same comments would argue that deepfakes are okay as well. Just a completely fucked up perspective.

    • Landless2029@lemmy.world · +5/-6 · 4 months ago

      Can’t speak for others, but I agree that AI-CP should be illegal.

      The question is how do we define the crime with our current laws? It does seem like we need a new law to address AI images. Both for things like AI-CP, revenge porn, and slanderous/misleading photos. (The Communist Harris and Trump with black people photos)

      Where do we draw the line?
      How do we regulate it?
      Forced watermarks/labels on all tools?
      Jail time? Fines?
      Forced correction notices? (Doesn’t work for the news!)

      This is all a slippery slope, but what I can say is I hope this goes to court. He loses. Appeals. Then it goes all the way up to the federal level, so we can have a standard to point to.

      This shit’s wrong.
      Step one in fixing shit.

    • WormFood@lemmy.world · +7/-12 · 4 months ago

      The number of people willing to go to bat for this on Lemmy is truly disturbing. What do they think these AI models are trained on?

      • ZeroHora@lemmy.ml · +11/-1 · 4 months ago

        Not necessarily trained on CP; it could be trained on images of children (already fucked up: who gave them that permission?) plus pornography.

        • kaffiene@lemmy.world · +5/-2 · 4 months ago

          The article pointed out that Stable Diffusion was trained using a dataset containing CSAM.

  • BilboBargains@lemmy.world · +14/-9 · 4 months ago

    If no children were involved in the production of porn, how is it pedophilic? That’s like claiming a picture of water has the same properties as water.

    • derpgon@programming.dev · +7/-1 · 4 months ago

      However, a picture of water makes me thirsty. But then again, there is no substitute for water.

      I am not defending pedos, or defending Florida for doing something like that.

      • Sarmyth@lemmy.world · +3 · 4 months ago

        That might be a you thing. Pictures of water don’t make me thirsty. I get the metaphor you’re attempting to make, though.

    • Revan343@lemmy.ca · +2/-1 · 4 months ago

      It’s pedophilic because it’s sexual images of children; fake or not doesn’t change that. Drawn images of child pornography are still pedophilic.

      The more important question is: is it CSAM? Whether drawn images that represent no real child qualify depends on the legal jurisdiction. Should drawn and AI-generated child porn be legal or banned? I think the actual answer would require significant research into whether its existence acts as an outlet that prevents pedophiles from harming actual children, or whether it encourages their proclivities and makes them more likely to hurt actual children.

      Preventing harm to children should be the goal, but actual research on the effects of simulated child porn vis-a-vis pedophiles harming actual children is as unlikely to happen as any other research into pedophilia.