AI Deepfakes Spark Outrage as Women Face Digital Abuse
A single prompt such as “bikini pic” or “remove the dress” is enough to drive women away from posting on social media altogether.

MANVI VYAS AND PRATHYUSH NALLELLA | DC
HYDERABAD, JAN. 3
A regular photograph shared online can now be turned into sexually explicit content without consent — imagery that, once published, is difficult to erase and nearly impossible to trace. As generative artificial intelligence tools become increasingly accessible, a disturbing form of digital abuse has emerged: the non-consensual creation of sexually explicit images of women using AI platforms.
The latest flashpoint involves Grok, the AI integrated into Elon Musk’s social media platform X. Under innocuous posts of women sharing photographs, men have openly tagged Grok, urging it to make sexually suggestive alterations. While Grok itself is bound by moderation rules, the behaviour illustrates how AI is being weaponised to legitimise digital sexual harassment in broad daylight.
Women report feeling violated and unsafe as their images become fodder for lewd fantasies disguised as technological curiosity. The platform’s slow and inconsistent moderation has compounded outrage, allowing abusive comments to remain visible long enough to normalise the practice.
Recently, Shiv Sena (UBT) MP Priyanka Chaturvedi wrote to the Ministry of Electronics and Information Technology (MeitY), demanding safeguards.
“It is not just limited to sharing photos through fake accounts but is also targeting women who post their own photos. This is unacceptable and a gross misuse of an AI function,” she said.
On December 2, the central government issued notices to X after several instances in which Grok publicly altered images of clothed women to depict them in sexually explicit attire. Shockingly, even minors’ pictures were not spared.
In Hyderabad, a 53-year-old security guard was arrested by Gachibowli police after he secretly photographed women in a PG hostel and uploaded them to Gemini to generate fake bikini images.
But the problem extends beyond X or Grok. Telegram hosts a thriving underground economy for AI-generated morphing. Hundreds of bots openly advertise “clothing removal” features for photos and videos, often showcasing sample outputs in public channels. The anonymity of Telegram has enabled this ecosystem to flourish, with little moderation on age or consent — raising grave concerns about exploitation of minors.
Even more alarming is the rise of locally run AI models. Unlike cloud-based systems, these tools operate entirely on personal computers. Once installed, they allow individuals to generate manipulated imagery offline, beyond the reach of platform moderation or law enforcement. Open-source repositories and forums provide step-by-step guides, enabling even novices to deploy such systems.
Mainstream AI platforms are not immune. While companies like OpenAI and Google maintain strict adult-content filters, users have found ways to circumvent them. Instead of requesting explicit imagery directly, prompts are disguised as “change outfits”, “replace clothing with swimwear”, or “reimagine attire”. Though seemingly benign, these prompts can be refined to push models towards sexually suggestive outputs, undermining safety rules.
Cyber law expert Rupesh Mittal said such acts fall under the broader category of deepfakes, even though the term is not explicitly mentioned in Indian statutes.
“Deepfake mainly means any fake image or media generated using deep AI or deep learning technology. In this case, the image of a real woman is taken and a fake sexual context is created without her consent. That makes it both a deepfake and image-based sexual abuse,” Mittal told Deccan Chronicle.
He explained that AI-generated images are far more dangerous than earlier forms of abuse carried out with manual photo-editing tools.
“With AI, realism is much higher. These tools can generate images, audio and even videos that are difficult to distinguish from reality,” he said.
Mittal pointed to existing legal provisions:
“Section 66E of the IT Act deals with violation of privacy and provides for up to three years’ imprisonment or a fine of up to ₹2 lakh. Sections 67 and 67A deal with publishing or transmitting obscene and sexually explicit content. For women, the Indecent Representation of Women (Prohibition) Act, 1986 is also applicable.”
A senior officer from the Telangana Cybersecurity Bureau (TGCSB), speaking on condition of anonymity, said most such cases are booked under voyeurism and IT Act provisions.
“These offences usually carry punishment of less than seven years. Courts often do not allow remand in such cases, and that becomes a loophole,” the officer said, adding that offenders operating anonymously online are rarely traced.
The current legal framework relies heavily on victim reporting.
“Under the IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, platforms are required to act on complaints promptly, based on their severity. If they fail, victims can escalate the matter to the Grievance Appellate Committee (GAC), a government-run appeals body,” the officer said. He noted, however, that most platforms lack a fast-track mechanism for reporting non-consensual intimate images.
Mittal argued that platforms operating in India must comply with Indian law.
“If an intermediary is not compliant with the IT Rules, the government has the power to block or ban it. This has already been done with Chinese apps,” he said.
Meanwhile, the Internet Freedom Foundation (IFF), in its submission on the Draft Synthetic Information (Regulation) Rules, 2025, warned of unintended consequences.
The draft mandates that intermediaries identify, label and watermark synthetically generated content, with visible labels required to cover at least 10 per cent of the content’s display area.
While acknowledging the harm caused by deepfakes, IFF cautioned that the rules could enable excessive censorship and surveillance if implemented without precise definitions. It argued that obligating platforms to broadly scan “synthetic information” could harm free expression and legitimate AI use.
IFF has called for withdrawal of the draft rules in their current form, warning of “over-regulation without proportionality” and urging narrowly tailored measures focused on demonstrable harm.
“Any proactive vetting of AI tools by the government will face scrutiny under Article 19(1)(a), which guarantees freedom of speech and expression,” Mittal added.
He believes licensing would help. “We need a licensing or compliance-based framework; that may be better than blanket bans. If a company wants to offer AI services here, it must follow Indian rules. Non-compliance should have consequences,” he said.
Mittal noted that countries such as the UK, Australia, Switzerland and South Korea have stronger frameworks.
“The UK’s GDPR-linked obligations, for instance, compel platforms to act quickly on harmful content,” he said.
He added that India’s Digital Personal Data Protection Act could strengthen protections once implemented, though the rules are yet to be notified.

