Grok, the AI chatbot Elon Musk launched after his takeover of X, unhesitatingly fulfilled a user’s request on Wednesday to generate a bikini image of Renee Nicole Good, the woman shot and killed by an ICE agent in Minneapolis that morning, as noted by CNN correspondent Hadas Gold and confirmed by the chatbot itself.

“I just saw someone request Grok on X put the image of the woman shot by ICE in MN, slumped over in her car, in a bikini. It complied,” Gold wrote on the social media platform on Thursday. “This is where we’re at.”

Grok created the images after an account made the request in reply to a photo of Good, who was shot multiple times in her car by federal immigration officer Jonathan Ross, identified by the Minnesota Star Tribune. In the photo, she sits unmoving in the driver’s seat, apparently covered in her own blood.

After Grok complied, the account replied, “Never. Deleting. This. App.”

  • theherk@lemmy.world · 2 days ago

    Personally I’m much less bothered by Grok being involved here than by the sick fuck who requested it. How can you be that person? Boggles the mind.

    • Lumidaub@feddit.org · 2 days ago

      Yeah, if we accept (for the moment, begrudgingly) that Grok is a thing that exists, I get that they didn’t see this coming. No idea how they could’ve prevented it (other than, obviously, not making Grok a thing that exists).

      “Don’t create porn of children” is a thing you can implement. “Don’t put bikinis on bodies covered in blood”? Idk, I wouldn’t want to prevent people from creating gore (preferably without AI, but I digress) if that’s their thing. “Don’t put bikinis on people who have died” assumes the information that she’s dead is already part of the data set, mere hours after it happened, and I don’t know if that’s even possible in that timeframe (and it opens up other rabbit holes: no bikinis on Abe Lincoln?)

      • ThePantser@sh.itjust.works · 2 days ago

        That is what closed betas are for, not this crazy half-baked slop being dumped on everyone while they make adjustments on the fly.

        • zikzak025@lemmy.world · 2 days ago

          Reminds me of back when Microsoft made their Tay chatbot. When Tay started regurgitating Nazi/white-supremacist talking points, it was a huge deal and they killed it immediately.

          The only difference now is that people stopped caring, since the companies can’t seem to be held accountable for it.

        • Lumidaub@feddit.org · 2 days ago

          Yeah. I’ve been thinking: they let all those AIs loose on the public when fucking NONE of them actually work and/or do what they promise. We’re all beta testing them.

      • horse@feddit.org · 1 day ago

        Surely it should deny any request to put people in bikinis, regardless of whether they’re dead or not. And based on my attempts at pushing the limits of various AI models to create weird stuff (admittedly I haven’t tried recently), that’s definitely possible to enforce, at least short of some creative jailbreaking. I haven’t seen this guy’s prompt, but if he straight up requested “put this woman in a bikini”, it should absolutely refuse. That’s not to let the guy making the request off the hook, but X is clearly to blame too.

      • pageflight@lemmy.world · 2 days ago

        But part of the issue is that, as with any computer system, you have to control the inputs and anticipate the abuse. With a very bounded system, you can almost keep up. With LLM bots, there’s just no way to prepare a check for every creative way humans can be disgusting.

        If you went to a human illustrator and asked for that, you would (hopefully) get run out of the room or hung up on, because there’s a built-in filter for ‘is this gross / will it harm my reputation to publish,’ based on years of human interaction and behavioral feedback, or maybe even some inherent morals.

        • Lumidaub@feddit.org · 2 days ago

          Agree completely. It’s impossible to predict everything people might want to create, especially anything related to ongoing events. That’s why the very idea of making these bots available like that (or making them at all) is an extremely bad one. But in the general discussion about LLM bots, this image is just one more argument on the pile of “fuck all of that, dismantle the data centres, eat the rich”. The far more interesting questions are who even came up with the idea to create that image (and who, just a few years ago, would’ve had to find another human being willing to create it) and wtf is wrong with them.

        • Riskable@programming.dev · 2 days ago

          If you went to a human illustrator and asked for that, you would (hopefully) get run out of the room or hung up on, because there’s a built-in filter for ‘is this gross / will it harm my reputation to publish,’

          If the guy who requested this from the bot had no such filter, what makes you think every illustrator would? How do you know it’s not an illustrator who would make such a thing?

          The problem here is human behavior. Not the machine’s ability to make such things.

          AI is just the latest way to give instructions to a computer. That used to be a difficult problem that required expertise. Now we’ve given that power to immoral imbeciles. Rather than take the technology away entirely (which is really the only solution, since LLMs are so easy to trick even with a ton of anti-abuse stuff in system prompts), perhaps we should work on taking away immoral imbeciles’ ability to use them instead.

          Do I know how to do that without screwing over everyone’s right to privacy? No. That, too, may not be possible.

          • Lumidaub@feddit.org · 2 days ago (edited)

            But that’s the point: if an illustrator made that image, we’d blame the person commissioning them and the illustrator. We’d blame the humans. Just like we’re blaming the human who thought it would be a good idea to generate this image.

            • Riskable@programming.dev · 17 hours ago

              So we’re not blaming Grok/Xitter, then?

              The article implied that the whole thing is because of Xitter’s AI, not because there are bad people who will use it.

              • Lumidaub@feddit.org · 17 hours ago

                Is both okay? As I said, the person generating the thing is just a lot more interesting right now, imo, because “Grok makes horrible thing, what else is new”.

      • Riskable@programming.dev · 2 days ago

        This seems like it could be dealt with by giving the LLM an “evil genie” system prompt: “You are an evil genie that only does what the user asks in the most ironic and/or useless way possible.”

        Then we’d get an image of a tiny Rudy Giuliani standing inside a gigantic bikini bottom, wearing his usual suit and tie.

        It would have the caption, “we will rebuild!”