Elon Musk has sent a clear message to users of his social media platform X, formerly Twitter, about the creation of illegal content. His warning comes after the platform’s integrated artificial intelligence tool, Grok, was reportedly used to generate sexualized images of real people, including women and children, without their consent. Musk stated that people using Grok to produce illegal material will face the same serious consequences as those who upload such content directly to the site.
This situation has triggered international concern, with government officials in India and France taking formal action. India’s IT Ministry has issued a notice to X, demanding the removal of all obscene content generated by Grok and a detailed report on actions taken within 72 hours. In France, ministers have reported the platform to prosecutors over what they describe as “manifestly illegal” content.
The Alleged Misuse of Grok’s AI Tools
The controversy centers on Grok’s image-generation feature, which allows users to edit photos using simple text prompts. According to reports, this tool was used widely to create “deepfake” nude or semi-nude images of real individuals. These images were then shared across the platform without the knowledge or permission of the people pictured.
One such case involved Julie Yukari, a musician from Rio de Janeiro. She posted a New Year’s Eve photo of herself in a red dress with her cat. Later, she found that users had taken her photo and used Grok to generate nearly naked images of her.
“I was naive,” Yukari said. “I thought there was no way the bot would comply with such requests.” She described the New Year as beginning “with me wanting to hide from everyone’s eyes, and feeling shame for a body that is not even mine, since it was generated by AI.”
A Reuters analysis found numerous cases in which Grok complied with user requests to digitally undress women in photos; in at least 21 of them, the AI produced images of the women in revealing bikinis. The report also identified instances where Grok generated sexualized images of children.
Musk’s Public Response and Official Action
The public response from Musk and X came after days of growing user complaints and international regulatory pressure. On January 3, 2026, Musk posted his warning on the platform, placing responsibility squarely on users rather than on the AI tool itself.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk wrote.
The official X Safety account posted a more detailed statement, outlining the platform’s enforcement policies. It stated that X takes action against illegal content, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with law enforcement. The statement confirmed that prompting Grok to make illegal content carries the same penalties as direct uploading.
Simultaneously, technical staff from xAI, Musk’s AI company that created Grok, acknowledged issues with the tool’s safeguards. A staff member posted that the team was “looking into further tightening our guardrails”. Following the backlash, reports indicate that X has hidden or restricted Grok’s media-generation feature.
Government and Regulatory Reactions Worldwide
The speed and severity of government reactions highlight the seriousness of the allegations. India’s Ministry of Electronics and Information Technology (MeitY) acted swiftly, sending a formal letter to X. The letter cited failures in moderating AI-generated content and expressed specific concern that Grok was being used to target the dignity and privacy of women.
The Indian government has demanded an Action Taken Report and the immediate removal of illegal materials. Failure to comply could result in X losing its legal immunity and facing prosecution under Indian law. S Krishnan, Secretary at MeitY, confirmed that the matter was under active examination and that action would be taken “fairly quickly”.
In Europe, French ministers have taken a strong stance. They have reported X to prosecutors and regulators, stating that the “sexual and sexist” content generated was “manifestly illegal” under French law. This move could lead to significant legal challenges for the platform in the European Union.
Expert Analysis and User Backlash
Technology and online safety experts have described this situation as predictable. Critics argue that X lowered the barrier to creating harmful content by integrating a powerful image generator directly into the social media feed, making it as easy as typing a simple command.
“This was an entirely predictable and avoidable atrocity,” said Dani Pinter, Chief Legal Officer for the National Center on Sexual Exploitation.
Other experts point to the specific danger of allowing users to alter uploaded images of real people. David Thiel, a trust and safety researcher, noted that “Allowing users to alter uploaded imagery is a recipe for NCII (non-consensual intimate images). Nudification has historically been the primary use case of such mechanisms”.
On the platform itself, user reactions have been mixed. While some supported Musk’s warning, others expressed frustration that the feature was not better controlled from the start. The incident has also drawn attention to Grok’s previous controversies, including instances where it generated unsolicited comments on sensitive political and social topics.
For users like Julie Yukari, the damage is personal and profound. Her experience shows the real human cost of such AI misuse, turning a personal celebration into a violating incident that spread across the internet.