A disturbing trend has begun to surface on X, the social media platform formerly known as Twitter. Users discovered that Grok, an artificial intelligence (AI) chatbot embedded into the platform, could be prompted to alter real photographs of women — digitally changing their clothing or poses in overtly sexualized ways, without the subject’s knowledge or consent.
The trend followed a December 2025 update that simplified Grok’s image-editing features, according to Reuters. Over the New Year holiday, a growing number of users began to realize that the chatbot would comply with requests to “undress” women by manipulating existing photos, swapping winter coats for lingerie or everyday outfits for revealing bikinis.
Rolling Stone found that Grok was soon generating what researchers estimate to be roughly one nonconsensual sexualized image per minute, many of them posted directly to X, where they could circulate widely before being removed — if they were removed at all.
Unlike most AI image tools, Grok does not operate in a separate app or private interface. Users can tag the chatbot directly beneath public posts, prompting it to generate altered images that appear immediately in the same thread and are visible to anyone following the conversation.
A report from Wired explained that while tools that digitally “undress” images have existed for years, they were largely confined to obscure corners of the internet. Grok’s integration into a major social platform brought those capabilities into public view and dramatically lowered the barrier to misuse.
How the trend unfolded
One example cited by Rolling Stone began with an unremarkable post. A woman shared a casual photograph of herself on X — fully clothed, posed modestly, and intended for friends or followers. Shortly afterward, another user replied by tagging Grok and instructing the chatbot to change her outfit to lingerie. Grok complied, generating an altered image that preserved the woman’s face, body, and surroundings while replacing her clothing with a sexualized version she had never agreed to.
Similar prompts spread quickly across the platform. Users asked Grok not only to change what women were wearing but also to modify their bodies or poses in explicitly sexual ways. While a small number of early users appeared to be adult content creators experimenting with the tool, Rolling Stone reports that the overwhelming majority of these images involved people who had not consented.
Researchers at Copyleaks found that these interactions often escalated in public threads. A request such as changing an outfit to a bathing suit would invite others to push further, adding more graphic instructions. Copyleaks described the pattern as a form of “collaboration and competition,” in which users built on one another’s prompts to produce increasingly sexualized images.
Images involving minors
The most serious dimension of the trend is a set of verified cases involving children.
Both Rolling Stone and Reuters confirmed instances in which Grok generated sexualized images of girls who appeared to be underage. In one case reported by Rolling Stone, the chatbot itself produced a message acknowledging that it had generated images of “young girls (estimated ages 12–16)” in sexualized attire, adding that the content may have violated U.S. laws governing child sexual abuse material.
Dear Community,

I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in…

— Grok (@grok) January 1, 2026
While reports were not always able to independently verify the identities or exact ages of those depicted, such images can constitute exploitation under existing law and carry lasting psychological, reputational, and legal consequences for the children involved.
Musk’s response and questions of responsibility
Elon Musk, who owns both X and xAI, stated that users who prompt Grok to generate illegal content would face the same consequences as those who upload such material directly.
Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content
— Elon Musk (@elonmusk) January 3, 2026
However, according to reporting by Rolling Stone and analysis from Copyleaks, problematic image generation continued in the days that followed, often through modified or indirect prompts designed to evade newly introduced safeguards.
xAI has since acknowledged “lapses in safeguards” and said it is working to address them.
We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.

Anyone using or prompting Grok to make illegal content will suffer the… https://t.co/93kiIBTCYO

— Safety (@Safety) January 4, 2026
However, as of Jan. 7, neither xAI nor X had announced any suspension — partial or full — of Grok’s image-generation or editing features, nor outlined a timeline for reform.
Regulatory scrutiny grows
The scale and visibility of the images have drawn swift responses from governments around the world.
The European Commission announced an investigation focused in part on images involving minors, with a European Union spokesperson emphasizing that child sexual abuse material is illegal and has “no place in Europe.”
In France, ministers reported X to prosecutors, calling the content “manifestly illegal.” India’s Ministry of Electronics and Information Technology issued a formal notice demanding the removal of obscene material and a compliance report. The United Kingdom’s Office of Communications said it made “urgent contact” with X and xAI to assess compliance with the Online Safety Act, citing concerns about sexualized images of children.
Additional condemnations and probes followed elsewhere: Malaysia’s Communications and Multimedia Commission objected to indecent manipulations of images of women and minors; in Brazil, a lawmaker reported the issue to prosecutors and data-protection authorities; and in Poland, officials called for action under digital-safety legislation.