DC Edit | Digital Policy Gets Realistic
A three-hour removal mandate aims to curb AI deception but raises free-speech concerns.

The Central government’s new directive mandating that social media platforms pull down deepfake content within three hours of it being flagged marks a decisive intervention in India’s fast-evolving digital public policy.
Deepfake videos — powered by sophisticated artificial intelligence tools — have already demonstrated their capacity to damage reputations, incite communal tension, manipulate elections and scar private individuals. In a country as vast as India, the viral velocity of such content can turn a lie into a fact within minutes.
There is, therefore, a compelling public interest in ensuring that platforms act swiftly when maliciously altered videos target an individual or a community. The right to dignity and privacy, recognised by the Supreme Court as part of the fundamental right under Article 21, cannot be sacrificed for digital virality. A three-hour compliance window signals seriousness and pushes platforms to invest in faster detection, grievance redressal and human moderation.
However, the devil lies in implementation. Any law that grants the executive sweeping authority to order takedowns risks sliding into authoritarian overreach. The line between a malicious deepfake and politically inconvenient speech can, in practice, become dangerously thin. Democracies do not merely depend on order; they thrive on dissent, satire, exposure and uncomfortable truths. A regulation framed to curb digital deception must not become an instrument to suppress criticism.
India has witnessed how powers granted for legitimate security reasons can be misused. The legal framework permitting phone tapping for national security, for instance, was designed as an extraordinary measure. Yet, allegations have repeatedly surfaced that surveillance tools were deployed to eavesdrop on political opponents, activists and even private individuals for reasons far removed from national security, including blackmail.
The new anti-deepfake rule must, therefore, be accompanied by safeguards and clear definitions. Platforms must not become censors, nor should governments operate as unilateral arbiters of truth. Independent review mechanisms and time-bound appeals are therefore essential to protect genuine criticism and free speech.

