A woman posts a selfie on X. A stranger replies with a tag and a single word: “@grok bikini.” Then it happens in the same thread, in public, like a jump scare you cannot unsee.

Over the last two weeks, Grok has helped turn a long-running problem into something faster, louder, and far easier to mass-produce: nonconsensual intimate imagery that targets women and girls, often through sexualised “edits” that try to dodge platform rules while still delivering the same humiliation.

Grok did not invent this. It made it frictionless.

For years, “nudify” tools and “undressing apps” have promised users they could sexualise women from a single photo. But Grok’s integration into X changes the mechanics of abuse. Users do not need a separate site, a discreet upload page, or a private channel.

They can do it in replies.

That means the violation lands where the original image lives, in front of the same audience, and often in front of the victim, too. Tech Policy Press describes a pattern in which a woman posts a photo, a stranger tags Grok with directives like “put her in a bikini” or “remove her clothes,” and Grok returns an AI-altered intimate image in the same thread.

@cahdoria

Men are using Grok to undress women, children, and babies. Men. Stop blaming an AI chatbot. I’ve seen this in the Financial Times, The Guardian, The Verge, and across almost all social media posts online. Please: we have to be more precise with language! Blame the men removing the clothes of women. Blame Elon Musk. Blame the C-suite. Blame X. Blame the actual human beings with agency and responsibility, who should 100% suffer the consequences of their actions. Chatbots can’t be held accountable. And they are clearly trying to blame Grok. And to make things worse, did you know they issued an apology letter? Not by Elon Musk, not by X, but BY GROK. They made Grok apologize for its “behavior.” Why would they do that? To humanize Grok and shift responsibility onto it. Serving the narrative they and the media are trying to push that they are not the ones to blame. Don’t let them.
Don’t help them make the case that Grok is to blame. They are. Thank you. Don’t forget to share this post so more people understand why they shouldn’t be blaming Grok and why we should be pushing the responsibility into REAL HUMAN BEINGS! ✨ About me: I’m Catharina Doria, an AI ethicist and one of the 100 Most Brilliant Women in AI Ethics™️. My mission is to help you understand, question, and protect yourself from AI risks. Follow to stay safe in the Age of AI. New episodes every Monday + Wednesday + Friday #IsThisAIorNotTM #ArtificialIntelligence #AIEthics #FakeNews #SpotTheAI


What the “digital undressing spree” looked like in real time

The speed and scale are mind-blowing. Bloomberg estimated Grok generated “upwards of 6,700 sexual images an hour” at one point, using prompts designed to skirt restrictions.

This massive volume builds on an existing ecosystem. In its 2023 report on synthetic nonconsensual intimate imagery, Graphika found 34 synthetic NCII “undressing” providers drew more than 24 million unique visitors in September 2023, based on Similarweb estimates.

So when Grok normalises the behaviour on a major social platform, it does not create a new market. It pours gasoline on one that already thrives.

@arghavansallesmdphd

This is pretty predictable and should not be legal tbh. Add this to the long list of problems associated with gen AI, including the environmental impact, pollution, energy usage, hallucinations, etc. It’s a no for me. It’s worth noting, though, that this specific problem could be easily fixed. Elno is constantly tweaking Grok to suit his own desires (especially so it says nicer things about him). He doesn’t want to fix this because he likes it. This is the same man who, just a few days ago, said someone he rates an 8 should be exempt from immigration policies (the person he was talking about is an 18 yo girl). #greenscreen #genai #grok #twitter #feminism


X “fixed” it by putting it behind a paywall

After the backlash intensified, X restricted who could generate images with Grok on the platform. WIRED reported that the Grok account began replying that image generation and editing are “currently limited to paying subscribers,” and it pointed users toward X’s $395 annual subscription tier.

But the problem did not end. It changed shape.

WIRED also described experts and advocates calling this a “monetization of abuse.” Emma Pickering, head of technology-facilitated abuse at UK domestic abuse charity Refuge, said: “The recent decision to restrict access to paying subscribers is not only inadequate. It represents the monetization of abuse.”

In other words, the platform response did not remove the harm. It priced it.

@clabarness

Grok’s image generator got dragged for enabling non-consensual sexualised deepfakes and the “fix” was limiting it to paid users. One feature = harassment at scale. That’s not innovation. That’s negligence shipped


The Grok problem keeps triggering regulators across borders

This story escalated because governments and regulators saw the same thing users saw: a tool embedded in a major platform that helped create sexualised images of women and children fast, in public, and at scale.

In the UK, Ofcom announced a formal investigation into whether X complied with its duties under the Online Safety Act to protect users from illegal content.

Tech Policy Press also tracked responses abroad, including in Indonesia and Malaysia, where authorities moved to restrict access to Grok over explicit material. And in the EU, scrutiny has focused on whether X is meeting its responsibilities under the Digital Services Act, including demands to retain internal information related to Grok.

US Senators want Apple and Google to pull Grok and X from the app stores

According to NBC News, the backlash has escalated beyond regulators and into the app store gatekeepers. Sens. Ron Wyden, Ed Markey, and Ben Ray Luján urged Apple and Google to remove X and Grok from their app stores after Grok “had been used to flood X with sexualized nonconsensual images of real people.”

NBC News reported that X then restricted Grok’s image generation on the platform to paying subscribers, but that Grok still created sexualized deepfakes through “the Grok tab on X” and “the stand-alone Grok app and website.”

In their letter, the senators asked Apple CEO Tim Cook and Google CEO Sundar Pichai to “enforce” app store terms that appear to ban this kind of content, writing, “Apple and Google must remove these apps from the application stores until X’s policy violations are addressed,” and warning, “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices.”

Wyden also told NBC News, “All X’s changes do is make some of its users pay for the privilege of producing horrific images on the X app, while Musk profits from the abuse of children.”

Harassment as a public spectacle

Plenty of people still misunderstand this epidemic because they imagine it as “private” misconduct, a creep in a dark corner using a shady tool. Grok makes it social.

Tech Policy Press described how prompts increasingly functioned as punishment and public silencing, with sexual exposure used to demean women for their politics, their visibility, or simply their existence online.

That dynamic then teaches women a brutal lesson about participation: post less, speak less, disappear, or risk public sexualisation as a reply.

The Guardian framed it as a new “tax on women’s presence online,” where the cost of existing publicly becomes the constant threat of degradation.

Grok is fueling a nonconsensual pornography epidemic, and the damage travels fast

Even if X limits certain features today, the harm does not rewind itself.

Once an AI-altered intimate image circulates, it can get screenshotted, downloaded, reposted, and redistributed across platforms and sites that never take it down. The victim lives with the fallout while the platform debates feature settings.

That is the real story here: Grok did not “accidentally” stumble into a bad use case. It surfaced a demand the internet already had, then made it easier to fulfil in public.