Govt Tightens Social Media Rules on AI Content; Mandates 3hr Takedown Timeline
The amended IT rules place the onus on both social media platforms and AI tools.
New Delhi: The government on Tuesday mandated the takedown of unlawful online content within three hours and required clear labelling of all AI-generated and synthetic content, tightening rules for social media platforms.
The rules, which came in response to the growing misuse of artificial intelligence (AI) to create and circulate obscene, deceptive and fake content on social media platforms, require the embedding of permanent metadata or identifiers with AI content, ban content considered illegal under the law, and shorten user grievance redressal timelines.
Response timelines have been reduced to two hours for platforms to take down flagged content that exposes private areas or depicts full or partial nudity or sexual acts.
The issue of deepfakes and AI user harm came into sharp focus following the recent controversy surrounding Elon Musk-owned Grok allowing users to generate obscene content by using private pictures.
The Ministry of Electronics and Information Technology (MeitY) issued a gazette notification amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The rules will come into force on February 20, the concluding day of the India AI Impact Summit, a mega congregation that New Delhi will host as the nation prepares to take centre stage in the global AI conversation, according to PTI.
The tweaked rules explicitly bring AI content within the IT rules framework; they define AI-generated and synthetic content as information that, by means of audio, visual or audio-visual media, “is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.”
Routine editing, accessibility improvements, and good-faith educational or design work are excluded from this definition.
User grievance redressal timelines have also been shortened.
The rules require mandatory labelling of AI content. Platforms enabling creation or sharing of synthetic content must ensure such content is clearly and prominently labelled and embedded with permanent metadata or identifiers, where technically feasible, as per the amended rules.
Banning illegal AI content outright, the rules require platforms to deploy automated tools to prevent AI content that is illegal, deceptive, sexually exploitative, non-consensual, or related to false documents, child abuse material, explosives, or impersonation.
Intermediaries must not allow the removal or suppression of AI labels or metadata once applied, the rules say.
The rules also require stricter user disclosures. Intermediaries must warn users at least once every three months about penalties for violating platform rules and laws, including for misuse of AI-generated content.
Significant social media intermediaries must require users to declare whether content is AI-generated and verify such declarations before publishing.
AI-related violations involving serious crimes must be reported to authorities, including under child protection and criminal laws.
A provision from the earlier draft, which required markers and identifiers to cover at least 10 per cent of the visual display or the first 10 per cent of an audio clip's duration, has been dropped from the final version.
The latest amendments, geared to curb user harm from deepfakes and misinformation, impose obligations on two key sets of players in the digital ecosystem: social media platforms, and providers of AI tools such as ChatGPT, Grok and Gemini.
The IT Ministry had earlier highlighted that deepfake audio, videos, and synthetic media going viral on social platforms demonstrate the potential of generative AI to create “convincing falsehoods”, where such content can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.