While Grok has introduced belated safeguards to prevent sexualised AI imagery, other tools have far fewer limits
“Since discovering Grok AI, regular porn doesn’t do it for me anymore, it just sounds absurd now,” one enthusiast for the Elon Musk-owned AI chatbot wrote on Reddit. Another agreed: “If I want a really specific person, yes.”
If those who have been horrified by the distribution of sexualised imagery on Grok hoped that last week’s belated safeguards could put the genie back in the bottle, there are many such posts on Reddit and elsewhere that tell a different story.
And while Grok has undoubtedly transformed public understanding of the power of artificial intelligence, it has also pointed to a much wider problem: the growing availability of tools, and means of distribution, that present regulators worldwide with what many view as an impossible task. Even as the UK announces that creating nonconsensual sexual and intimate images will soon be a criminal offence, experts say that the use of AI to harm women has only just begun.