Grok Put “Undressing” Behind a Paywall and Called It a Fix. Here’s What We Know About X’s Deepfake Porn Pipeline
A woman posts a selfie on X. A stranger replies with a tag and a single word: “@grok bikini.” And then it happens in the same thread, in public, like a jump scare you cannot unsee.
Over the last two weeks, Grok has helped turn a long-running problem into something faster, louder, and far easier to mass-produce: nonconsensual intimate imagery that targets women and girls, often through sexualised “edits” that try to dodge platform rules while still delivering the same humiliation.
Grok did not invent this. It made it frictionless.
For years, “nudify” tools and “undressing apps” have promised users they could sexualise women from a single photo. But Grok’s integration into X changes the mechanics of abuse. Users do not need a separate site, a discreet upload page, or a private channel.
They can do it in replies.
That means the violation lands where the original image lives, in front of the same audience, and often in front of the victim, too. Tech Policy Press describes a pattern in which a woman posts a photo, a stranger tags Grok with directives like “put her in a bikini” or “remove her clothes,” and Grok returns an AI-altered intimate image in the same thread.
What the “digital undressing spree” looked like in real time
The speed and scale are staggering. Bloomberg estimated Grok generated “upwards of 6,700 sexual images an hour” at one point, using prompts designed to skirt restrictions.
This massive volume builds on an existing ecosystem. In its 2023 report on synthetic nonconsensual intimate imagery, Graphika found 34 synthetic NCII “undressing” providers drew more than 24 million unique visitors in September 2023, based on Similarweb estimates.
So when Grok normalises the behaviour on a major social platform, it does not create a new market. It pours gasoline on one that already thrives.
X “fixed” it by putting it behind a paywall
After the backlash intensified, X restricted who could generate images with Grok on the platform. WIRED reported that the Grok account began replying that image generation and editing are “currently limited to paying subscribers,” and it pointed users toward X’s $395 annual subscription tier.
But the problem did not end. It changed shape.
WIRED also described experts and advocates calling this a “monetization of abuse.” Emma Pickering, head of technology-facilitated abuse at UK domestic abuse charity Refuge, said: “The recent decision to restrict access to paying subscribers is not only inadequate. It represents the monetization of abuse.”
In other words, the platform response did not remove the harm. It priced it.
The Grok problem keeps drawing regulatory scrutiny across borders
This story escalated because governments and regulators saw the same thing users saw: a tool embedded in a major platform that helped create sexualised images of women and children fast, in public, and at scale.
In the UK, Ofcom announced a formal investigation into whether X complied with duties to protect users from illegal content.
Tech Policy Press also tracked responses abroad, including in Indonesia and Malaysia, where authorities moved to restrict access to Grok over explicit material. And in the EU, scrutiny has focused on whether X is meeting its responsibilities under the Digital Services Act, including demands to retain internal information related to Grok.
US Senators want Apple and Google to pull Grok and X from the app stores
According to NBC News, the backlash has escalated beyond regulators and into the app store gatekeepers. Sens. Ron Wyden, Ed Markey, and Ben Ray Luján urged Apple and Google to remove X and Grok from their app stores after Grok “had been used to flood X with sexualized nonconsensual images of real people.”
NBC News reported that X then restricted Grok’s image generation on X to paying subscribers, but Grok still created sexualized deepfakes through “the Grok tab on X” and “the stand-alone Grok app and website.”
In their letter, the senators asked Apple CEO Tim Cook and Google CEO Sundar Pichai to “enforce” app store terms that appear to ban this kind of content, writing, “Apple and Google must remove these apps from the application stores until X’s policy violations are addressed,” and warning, “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices.”
Wyden also told NBC News, “All X’s changes do is make some of its users pay for the privilege of producing horrific images on the X app, while Musk profits from the abuse of children.”
Harassment as a public spectacle
Plenty of people still misunderstand this epidemic because they imagine it as “private” misconduct, a creep in a dark corner using a shady tool. Grok makes it social.
Tech Policy Press described how prompts increasingly functioned as punishment and public silencing, with sexual exposure used to demean women for their politics, their visibility, or simply their existence online.
That dynamic then teaches women a brutal lesson about participation: post less, speak less, disappear, or risk public sexualisation as a reply.
The Guardian framed it as a new “tax on women’s presence online,” where the cost of existing publicly becomes the constant threat of degradation.
Grok is fuelling a nonconsensual pornography epidemic, and the damage travels fast
Even if X limits certain features today, the harm does not rewind itself.
Once an AI-altered intimate image circulates, it can get screenshotted, downloaded, reposted, and redistributed across platforms and sites that never take it down. The victim lives with the fallout while the platform debates feature settings.
That is the real story here: Grok did not “accidentally” stumble into a bad use case. It surfaced a demand the internet already had, then made it easier to fulfil in public.



