• helpImTrappedOnline@lemmy.world · 1 year ago

    The headline/title needs to be extended to include the rest of the sentence

    “and then sent them to a minor”

    Yes, this sicko needs to be punished. Any attempt to make him the victim of “the big bad government” is manipulative at best.

    Edit: made the quote bigger for better visibility.

    • cley_faye@lemmy.world · 1 year ago

      That’s a very important distinction. While the first part is, to put it lightly, bad, I don’t really care what people do on their own. Getting real people involved, and a minor at that? Big no-no.

    • DarkThoughts@fedia.io · 1 year ago

      All LLM headlines are like this to fuel the ongoing hysteria about the tech. It’s really annoying.

      • helpImTrappedOnline@lemmy.world · 1 year ago

        Sure is. I report the ones I come across as clickbait or a misleading title, explaining the parts left out…such as this one, where those 7 words change the story completely.

        Whoever made that headline should feel ashamed for victimizing a groomer.

    • MeanEYE@lemmy.world · 1 year ago

      I’d be torn on the idea of AI generating CP, if it were only that. On one hand, if it helps them calm the urges while no one is getting hurt, all the better. But on the other hand it might cause them not to seek help, though the problem is already stigmatized severely enough that they are most likely not seeking help anyway.

      But sending that stuff to a minor. Big problem.

      • Madison420@lemmy.world · 1 year ago

        It won’t. They’ll get them for the actual crime, not the thought crime that’s been nerfed to oblivion.

    • Ricky Rigatoni@lemm.ee · 1 year ago

      You can get away with a lot of heinous crimes by simply not telling people and not sharing the results.

      • quindraco@lemm.ee · 1 year ago

        You consider it a heinous crime to draw a picture and keep it to yourself?

    • Frozengyro@lemmy.world · 1 year ago

      It’s sickening to know there are bastards out there who will get away with it since they are only creating it.

      • NeoNachtwaechter@lemmy.world · 1 year ago

        I’m not sure. Let us assume that you generate it on your own PC at home (not using a public service) and don’t brag about it and never give it to anybody - what harm is done?

        • GBU_28@lemm.ee · 1 year ago

          Society is not ok with the idea of someone cranking to CSAM, then just walking around town. It gives people wolf-in-sheep’s-clothing vibes.

          So the notion of there being “ok” CSAM-style AI content is a non-starter for a huge fraction of people, because it still suggests appeasing a predator.

          I’m definitely one of those people that simply can’t accept any version of it.

        • Frozengyro@lemmy.world · 1 year ago

          Even if the AI didn’t train itself on actual CSAM, that is something that feels inherently wrong. Your mind is not right to think that’s acceptable, IMO.

          • DarkThoughts@fedia.io · 1 year ago

            Laws shouldn’t be about feelings though, and we shouldn’t prosecute people for victimless thought crimes. How often did you think something violent when someone really pissed you off? Should you have been prosecuted for that thought too?

              • DarkThoughts@fedia.io · 1 year ago

                Who are the victims of someone generating such images privately then? It’s on the same level as all the various fan fiction shit that was created manually over all the past decades.

                And do we apply this to other depictions of criminalized things too? Would we ban the depiction of violence & sexual violence on TV, in books, and in video games too?

  • SeattleRain@lemmy.world · 1 year ago

    America has some of the most militantly anti-pedophile culture in the world, yet it has far and away the highest rates of child sexual assault.

    I think AI is going to reveal just how deeply hypocritical Americans are on this issue. You have gigantic institutions like churches committing industrial-scale victimization, yet you won’t find a tenth of the righteous indignation against those organized religions, where there is just as much evidence it is happening, as you will against one person producing images that don’t actually hurt anyone.

    It’s pretty clear from the staggering rate of child abuse that occurs in the States that Americans are just using child victims for weaponized politicization (it’s next to impossible to convincingly fight off pedo accusations if you’re being mobbed) and aren’t actually interested in fighting pedophilia.

    • Guy Ingonito@reddthat.com · 1 year ago

      Most states will let grown men marry children as young as 14. There is a special carve-out for Christian pedophiles.

      • ricecake@sh.itjust.works · 1 year ago

        Fortunately most instances are in the category of a 17-year-old marrying an 18-year-old, and require parental consent and some manner of judicial approval, but the rates of “not that” are still much higher than one would want.
        ~300k in a 20-year window total, with 74% of the older partners being 20 or younger, 95% of the younger partners being 16 or 17, and only 14% of cases having both partners under 18.

        There’s still no reason for it in any case, and I’m glad to live in one of the states that said “nah, never needed.”

  • UnpluggedFridge@lemmy.world · 1 year ago

    These cases are interesting tests of our first amendment rights. “Real” CP requires abuse of a minor, and I think we can all agree that it should be illegal. But it gets pretty messy when we are talking about depictions of abuse.

    Currently, we do not outlaw written depictions or drawings of child sexual abuse. In my opinion, we do not ban these things partly because they are obvious fictions. But I also think we recognize that we should not be in the business of criminalizing expression, regardless of how disgusting it is. I can imagine instances where these fictional depictions could be used in a way that is criminal, such as using them to blackmail someone. In the absence of any harm, it is difficult to justify criminalizing fictional depictions of child abuse.

    So how are AI-generated depictions different? First, they are not obvious fictions. Is this enough to cross the line into criminal behavior? I think reasonable minds could disagree. Second, is there harm from these depictions? If the AI models were trained on abusive content, then yes, there is harm directly tied to the generation of these images. But what if the training data did not include any abusive content, and these images really are purely depictions of imagination? Then the discussion of harms becomes pretty vague and indirect. Will these images embolden child abusers or increase demand for “real” images of abuse? Is that enough to criminalize them, or should they be treated like other fictional depictions?

    We will have some very interesting case law around AI generated content and the limits of free speech. One could argue that the AI is not a person and has no right of free speech, so any content generated by AI could be regulated in any manner. But this argument fails to acknowledge that AI is a tool for expression, similar to pen and paper.

    A big problem with AI content is that we have become accustomed to viewing photos and videos as trusted forms of truth. As we re-learn what forms of media can be trusted as “real,” we will likely change our opinions about fringe forms of AI-generated content and where it is appropriate to regulate them.

    • Corkyskog@sh.itjust.works · 1 year ago

      It comes back to distribution for me. If they are generating the stuff for themselves, gross, but I don’t see how it can really be illegal. But if you’re distributing them, how do we know they’re not real? The amount of investigative resources that would need to be dumped into that, and the impact on those investigators’ mental health… I don’t know. I really don’t have an answer, and I don’t know how they make it illegal, but it really feels like distribution should be.

    • TheHarpyEagle@lemmy.world · 1 year ago

      It feels incredibly gross to just say “generated CSAM is a-ok, grab your hog and go nuts”, but I can’t really say that it should be illegal if no child was harmed in the training of the model. The idea that it could be a gateway to real abuse comes to mind, but that’s a slippery slope that leads to “video games cause school shootings” type of logic.

      I don’t know, it’s a very tough thing to untangle. I guess I’d just want to know if someone was doing that so I could stay far, far away from them.

    • yamanii@lemmy.world · 1 year ago

      partly because they are obvious fictions

      That’s it, actually. All sites that allow it, like danbooru, gelbooru, pixiv, etc., have a clause against photorealistic content and will remove it.

    • nucleative@lemmy.world · 1 year ago

      Well thought-out and articulated opinion, thanks for sharing.

      If even the most skilled hyper-realistic painters were out there painting depictions of CSAM, we’d probably still label it as free speech because we “know” it to be fiction.

      When a computer rolls the dice against a model and imagines a novel composition of children’s images combined with what it knows about adult material, it does seem more difficult to label it as entirely fictional. That may be partly because the source material may have actually been real, even if the final composition is imagined. I don’t intend to suggest models trained on CSAM either, I’m thinking of models trained to know what both mature and immature body shapes look like, as well as adult content, and letting the algorithm figure out the rest.

      Nevertheless, as you brought up, nobody is harmed in this scenario, even though many people in our culture and society find this behavior and content to be repulsive.

      To a high degree, I think we can still label an individual who consumes this type of AI content a pedophile, and although being a pedophile is not in and of itself illegal, it comes with societal consequences. Additionally, pedophilia is a DSM-5 psychiatric disorder, which could be a pathway to some sort of consequences for those who partake.

      • KillingTimeItself@lemmy.dbzer0.com · 1 year ago

        For some reason the US seems to hold a weird position on this one. I don’t really understand it.

        It’s written to be illegal, but if you look at prosecution cases, I think there have been only a handful of charges, and the prominent ones also involved relevant previous offenses, or worse.

        It’s also interesting when you consider that there are almost certainly large image boards hosted in the US that host what could be construed as “cartoon CSAM”, notably e621. I’d have to verify their hosting location, but I believe they’re in the US, and so far I don’t believe they’ve ever had any issues with it. I’m sure there are other good examples as well.

        I suppose you could argue they’re exempt under the publisher rules. But these sites don’t moderate against these images, generally, and I feel like this would be the rare exception where that wouldn’t be applicable.

        The law is fucking weird, dude. There is a massive disconnect between what we should be seeing, and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, being a proxy offense.

        • PirateJesus@lemmy.today · 1 year ago

          It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.

          There is a massive disconnect between what we should be seeing, and what we are seeing. I assume because the authorities who moderate this shit almost exclusively go after real CSAM, on account of it actually being a literal offense, as opposed to drawn CSAM, being a proxy offense.

          This can be attributed to no proper funding of CSAM enforcement. Pedos get picked up if they become an active embarrassment like the article dude. Otherwise all the money is just spent on the database getting bigger and keeping the lights on. Which works for Congress. A public pedo gets nailed to the wall because of the database, the spooky spectre of the pedo out for your kids remains, vote for me please…

          • KillingTimeItself@lemmy.dbzer0.com · 1 year ago

            It seems to me to be a lesser charge. A net that catches a larger population and they can then go fishing for bigger fish to make the prosecutor look good. Or as I’ve heard from others, it is used to simplify prosecution. PedoAnon can’t argue “it’s a deepfake, not a real kid” to the SWAT team.

            Ah, that could be a possibility as well. Just ensuring reasonable flexibility in prosecution so you can be sure of what you get.

  • Kedly@lemm.ee · 1 year ago

    Ah yes, more bait articles rising to the top of Lemmy. The guy was arrested for grooming; he was sending these images to a minor. Outside of Digg, does anyone have any suggestions for an alternative to Lemmy and Reddit? Lemmy’s moderation quality is shit, and I think I’m starting to figure out where I land on the success of my experimental stay with Lemmy.

    Edit: Oh god, I actually checked Digg out after posting this, and the site design makes it look like you’re actually scrolling through all of the ads at the bottom of a bullshit clickbait article.

  • Greg Clarke@lemmy.ca · 1 year ago

    This is tough; the goal should be to reduce child abuse. It’s unknown if AI-generated CP will increase or reduce child abuse. It will likely encourage some individuals to abuse actual children, while for others it may satisfy their urges so they don’t abuse children. Like everything else AI, we won’t know the real impact for many years.

  • TheObviousSolution@lemm.ee · 1 year ago

    He then allegedly communicated with a 15-year-old boy, describing his process for creating the images, and sent him several of the AI generated images of minors through Instagram direct messages. In some of the messages, Anderegg told Instagram users that he uses Telegram to distribute AI-generated CSAM. “He actively cultivated an online community of like-minded offenders—through Instagram and Telegram—in which he could show off his obscene depictions of minors and discuss with these other offenders their shared sexual interest in children,” the court records allege. “Put differently, he used these GenAI images to attract other offenders who could normalize and validate his sexual interest in children while simultaneously fueling these offenders’ interest—and his own—in seeing minors being sexually abused.”

    I think the fact that he was promoting child sexual abuse and was communicating with children and creating communities with them to distribute the content is the most damning thing, regardless of people’s take on the matter.

    Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

    • Maggoty@lemmy.world · 1 year ago

      Wait do you think all Hentai is CSAM?

      And sending the images to a 15 year old crosses the line no matter how he got the images.

    • Saledovil@sh.itjust.works · 1 year ago

      Umm … That AI generated hentai on the page of the same article, though … Do the editors have any self-awareness? Reminds me of the time an admin decided the best course of action to call out CSAM was to directly link to the source.

      The image depicts mature women, not children.

      • BangCrash@lemmy.world · 1 year ago

        Correct. And OP’s not saying it is.

        But to place that sort of image on an article about CSAM is very poorly thought out.

  • badbytes@lemmy.world · 1 year ago

    Breaking news: Paint made illegal because some moron painted something stupid.

    • cley_faye@lemmy.world · 1 year ago

      I’d usually agree with you, but it seems he sent them to an actual minor for “reasons”.

    • catloaf@lemm.ee · 1 year ago

      Some places do lock up spray paint due to its use in graffiti, so that’s not without precedent.

      • Soggy@lemmy.world · 1 year ago

        They lock it up because it’s frequently stolen. (Because of its use in graffiti, but still.)

  • Nora@lemmy.ml · 1 year ago

    I had an idea when these AI image generators first started gaining traction: flood the CSAM market with AI-generated images (good enough that you can’t tell them apart). In theory this would put the actual creators of CSAM out of business, thus saving a lot of children from the trauma.

    Most people downvote the idea on their gut reaction, tho.

    Looks like they might do it on their own.

    • DarkThoughts@fedia.io · 1 year ago

      It’s such an emotional topic that people lose all rationality. I remember the Reddit arguments in the comment sections about pedos, already equating the term with actual child rapists, while others would argue to differentiate, because the former didn’t do anything wrong and shouldn’t be stigmatized for what’s going on in their heads but rather offered help to cope with it. The replies are typically accusations of those people making excuses for actual sexual abusers.

      I always had the standpoint that I do not really care about people’s fictional content. Be it lolis, torture, gore, or whatever other weird shit. If people are busy getting their kicks from fictional stuff, then I see that as better than using actual real-life material, or even getting some hands-on experience, all of which would involve actual real victims.

      And I think that should generally be the goal here, no? Be it pedos, sadists, sociopaths, whatever. In the end it should not be about them, but about saving potential victims. But people would rather throw around accusations and become all hysterical to paint themselves as sitting on their moral high horse (ironically, typically also calling for things like executions or castrations).

    • Itwasthegoat@lemmy.world · 1 year ago

      My concern is: why would it put them out of business? If we just look at legal porn, there are already huge amounts of it, and the market for new content to be created constantly is still there. AI porn hasn’t noticeably decreased the amount produced.

      Really, flooding the market with CSAM makes it easier to consume and may end up INCREASING the number of people trying to get CSAM. That could end up encouraging more to be produced.

      • Nora@lemmy.ml · 1 year ago

        The market is slightly different tho. Most CSAM is images; with porn there’s a lot of video as well as images.

    • jaschen@lemm.ee · 1 year ago

      It’s also a victimless crime. Just like flooding the market with fake rhino horns and dropping the market price to the point that it isn’t worth it.

    • deathbird@mander.xyz · 1 year ago

      It would not need to be trained on CP. It would just need to know what human bodies can look like and what sex is.

      AIs usually try not to allow certain content to be produced, but it seems people are always finding ways to work around those safeguards.

    • ZILtoid1991@lemmy.world · 1 year ago

      Likely yes, and even commercial models have an issue with CSAM leaking into their datasets. The scummiest of all of them likely get an offline model, then add their collection of CSAM to it.

  • Ibaudia@lemmy.world · 1 year ago

    Isn’t there evidence that as artificial CSAM is made more available, the actual amount of abuse is reduced? I would research this but I’m at work.

  • StaySquared@lemmy.world · 1 year ago

    I wonder if cartoonized animals in a CSAM theme are also illegal… guess I can contact my local FBI office and provide them the web addresses of such content. Let them decide what is best.

  • PirateJesus@lemmy.today · 1 year ago

    OMG. Every other post is saying they’re disgusted by the images part but that it’s a grey area, though he’s definitely in trouble for contacting a minor.

    Cartoon CSAM is illegal in the United States. AI images of CSAM fall into that category. It was illegal for him to make the images in the first place BEFORE he started sending them to a minor.

    https://www.thefederalcriminalattorneys.com/possession-of-lolicon

    https://en.wikipedia.org/wiki/PROTECT_Act_of_2003

    • Madison420@lemmy.world · 1 year ago

      Yeah, that’s toothless. They decided there is no particular way to age a cartoon; they could be from another planet and simply seem younger while in actuality being older.

      It’s bunk. Let them draw or generate whatever they want; totally fictional events and people are fair game, and quite honestly I’d rather they stay active doing that than get active actually abusing children.

      Outlaw shibari and I guarantee you’d have multiple serial killers btk-ing some unlucky souls.

        • RGB3x3@lemmy.world · 1 year ago

          The problem with AI CSAM generation is that the AI has to be trained on something first. It has to somehow know what a naked minor looks like. And to do that, well… You need to feed it CSAM.

          So is it right to be using images of real children to train these AI? You’d be hard-pressed to find someone who thinks that’s okay.

            • PotatoKat@lemmy.world · 1 year ago

              The images were created using photos of real children, even if said photos weren’t CSAM (which can’t be guaranteed). So the victims are the children whose photos were used to generate the CSAM.

              • sugar_in_your_tea@sh.itjust.works · 1 year ago

                Let’s do a thought experiment, and I’d look to you to tell me at what point a victim was introduced:

                1. I legally acquire pictures of a child, fully clothed and everything
                2. I draw a picture based on those legal pictures, but the subject is nude or doing sexually explicit things
                3. I keep the picture for my own personal use and don’t distribute it

                Or with AI:

                1. I legally acquire pictures of children, fully clothed and everything
                2. I legally acquire pictures of nude adults, some doing sexually explicit things
                3. I train an AI on a mix of 1&2
                4. I generate images of nude children, some of them doing sexually explicit things
                5. I keep the pictures for my own personal use and don’t distribute any of them
                6. I distribute my model, using the right to distribute from the legal acquisition of those images

                At what point did my actions victimize someone?

                If I distributed those images and those images resemble a real person, then that real person is potentially a victim.

                I will say someone who does this is creepy and I don’t want them anywhere near children (especially mine, and yes, I have kids), but I don’t think it should be illegal, provided the source material is legal. But as soon as I distribute it, there absolutely could be a victim. Being creepy shouldn’t be a crime.

                • PotatoKat@lemmy.world · 1 year ago

                  I think it should be illegal to make porn of a person without their permission, regardless of whether it is shared or not. Imagine the person it is based on finds out someone is doing that. That causes mental strain on the person, just like how revenge porn doesn’t actively harm a person but causes mental strife (both the initial upload and the continued use of it). For scenario 1 it would be at step 2, when the porn of the person is made. For scenario 2 it would be a mix between steps 3 and 4.

          • Eezyville@sh.itjust.works · 1 year ago

            You make the assumption that the person generating the images also trained the AI model. You also make assumptions about how the AI was trained without knowing anything about the model.

            • RGB3x3@lemmy.world · 1 year ago

              Are there any guarantees that harmful images weren’t used in these AI models? Based on how image generation works now, it’s very likely that harmful images were used to train them.

              And if a person is using a model based on harmful training data, they should be held responsible.

              However, the AI owner/trainer has even more responsibility in perpetuating harm to children and should be prosecuted appropriately.

              • Eezyville@sh.itjust.works · 1 year ago

                And if a person is using a model based on harmful training data, they should be held responsible.

                I will have to disagree with you for several reasons.

                • You are still making assumptions about a system you know absolutely nothing about.
                • By your logic, the users of anything born from something that caused others suffering (in this example, AI trained on CSAM) should be held responsible for the crime committed to create that product.
                  • Does that apply to every product/result created from human suffering or just the things you don’t like?
                  • Will you apply that logic to the prosperity of Western nations built on the suffering of indigenous and enslaved people? Should everyone who benefits from Western prosperity be held responsible for the crimes committed against those people?
                  • What about medicine? Two examples are The Tuskegee Syphilis Study and the cancer cells of Henrietta Lacks. Medicine benefited greatly from these two examples but crimes were committed against the people involved. Should every patient from a cancer program that benefited from Ms. Lacks’ cancer cells also be subject to pay compensation to her family? The doctors that used her cells without permission didn’t.
                  • Should we also talk about the advances in medicine found by Nazis who experimented on Jews and others during WW2? We used that data in our manned space program paving the way to all the benefits we get from space technology.
      • ZILtoid1991@lemmy.world · 1 year ago

        My main issue with generation is the ability of making it close enough to reality. Even with the more realistic art stuff, some outright referenced or even traced CSAM. The other issue is the lack of easy differentiation between reality and fiction, and it muddies the water. “I swear officer, I thought it was AI” would become the new “I swear officer, she said she was 18”.

        • Madison420@lemmy.world · 1 year ago

          That is not an end-user issue, that’s a dev issue. You can’t train on CSAM if it isn’t available, and doing so is a tacit admission of actual possession.

      • MDKAOD@lemmy.ml · 1 year ago

        I think the challenge with Generative AI CSAM is the question of where did training data originate? There has to be some questionable data there.

        • erwan@lemmy.ml · 1 year ago

          There is also the issue of determining whether a given image is real or AI. If AI were legal, prosecution would need to prove images are real and not AI, with the risk of letting real offenders go.

          The need to ban AI CSAM is even clearer than for cartoon CSAM.

          • Madison420@lemmy.world · 1 year ago

            And in the process you force non-abusers to seek their thrill with actual abuse. Good job; I’m sure the next generation of children will appreciate your prudish, factually inept effort. We’ve tried this with so much shit: prohibition doesn’t stop anything, it just creates a black market and an abusive power system to go with it.

  • Deceptichum@sh.itjust.works · 1 year ago

    What an oddly written article.

    Additional evidence from the laptop indicates that he used extremely specific and explicit prompts to create these images. He likewise used specific ‘negative’ prompts—that is, prompts that direct the GenAI model on what not to include in generated content—to avoid creating images that depict adults.”

    They make it sound like the prompts are important and/or more important than the 13,000 images…

    • ricecake@sh.itjust.works · 1 year ago

      In many ways they are. The image generated from a prompt isn’t unique, and is actually semi-random; it’s not entirely in the user’s control. The person could argue “I described what I like but I wasn’t asking it for children, and I didn’t think they were fake images of children,” and based purely on the image it could be difficult to argue that the image is not only “child-like” but actually depicts a child.

      The prompt, however, very directly shows what the user was asking for in unambiguous terms, and the negative prompt removes any doubt that they thought they were getting depictions of adults.