Spectacle of AI Tests Boundaries of Truth
Hyderabad: As Guy Debord once wrote, the Spectacle is not a collection of images but a social relation among people mediated by images. Now that mediation has learned to speak, sing and even paint in our voices. The line between what is seen and what is real has thinned into static. This is the world of artificial intelligence (AI) and the rise of AI-generated content.
This week, the Indian government proposed new rules requiring every post, picture, song or video created with AI to be clearly labelled. The idea sounds simple enough, but can authenticity be legislated?
The push for disclosure follows a run of digital hoaxes that felt almost theatrical. A deepfake of actor Rashmika Mandanna went viral on social media, and this month, artist Abhay Sehgal was accused by peers of passing off AI collages as oil paintings. “He has been stealing other artists’ work and calling it original,” one user wrote. The artists’ community alleged that Sehgal was not only generating his works with AI but also lifting other artists’ pixels and selling the results as “originals,” some of them to celebrities including Ranbir Kapoor. Several singers on social media have also been called out for uncanny renditions that relied on cloned voices of Arijit Singh and Sonu Nigam.
Each of these episodes is alarming because it raises the same question: what counts as truth? The Delhi and Bombay High Courts have described deepfakes as a menace “virtually impossible to discern.” A few platforms, such as Instagram, have begun labelling AI content, but a Stanford study this year found that even when users saw a label saying “AI-generated,” most still believed what they saw. If seeing once meant believing, that bond has frayed.
Under the draft rules, social-media companies and creators must disclose and tag synthetic material. Images must carry a label covering at least ten per cent of their area; audio and video must carry one for at least ten per cent of their duration. Platforms with more than five million users will have to collect declarations from uploaders and verify them. A false claim could invite takedowns and loss of legal protection.
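To make the thresholds concrete, here is a minimal sketch in Python of the arithmetic a platform might apply under the draft’s ten-per-cent figures. The image dimensions and clip length below are hypothetical examples, not values from the rules themselves.

```python
# Illustrative arithmetic only: minimum label coverage under the draft's
# ten-per-cent thresholds. The dimensions and durations below are made up.

def min_label_area(width_px: int, height_px: int) -> float:
    """Minimum label area for an image: 10% of its total pixel area."""
    return 0.10 * width_px * height_px

def min_label_duration(clip_seconds: float) -> float:
    """Minimum label time for audio or video: 10% of the clip's length."""
    return 0.10 * clip_seconds

print(min_label_area(1080, 1350))    # a 1080x1350 post needs about 145,800 px of label
print(min_label_duration(60.0))      # a 60-second clip needs a 6-second label
```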
Lawyer Aditya Kashyap calls the step overdue but unwieldy. He notes that India still prosecutes deepfakes under scattered laws on obscenity and fraud, while copyright itself predates the algorithmic era. “You can’t match the speed of generative tools with provisions written for film reels,” he says. “We need penalties that recognise scale and intent, and a body that understands both.”
Technologists echo the doubt. “Detection is also AI,” says Rajat C, an engineer at a global tech firm. “If it’s easy today, tomorrow another model comes out. It’s always a catch-up game.” He describes experiments where devices could sign data at the moment of creation so viewers can verify a source, but even that, he warns, would blur fast. “Every photo already has some level of AI. Those filters on Instagram or that excellent phone camera are because of AI, not because the camera is high quality. So what’s real?”
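The signing idea Rajat describes can be sketched in a few lines. This is only an illustration of the general technique, in which a device signs the captured bytes with a private key and a viewer checks them against a published public key; it is not any specific standard or product, and the Ed25519 keypair, the in-memory image bytes and the use of Python’s cryptography package are assumptions made for the example.

```python
# Minimal sketch of sign-at-capture provenance, assuming an Ed25519 keypair
# held by the capturing device. Requires the third-party "cryptography" package.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# Device side: generate (or load) a key and sign the raw bytes at capture.
device_key = ed25519.Ed25519PrivateKey.generate()
public_key = device_key.public_key()

image_bytes = b"\xff\xd8...raw sensor output..."   # stand-in for a real photo
signature = device_key.sign(image_bytes)

# Viewer side: verify the bytes against the maker's published public key.
try:
    public_key.verify(signature, image_bytes)
    print("Signature valid: these bytes are unchanged since capture.")
except InvalidSignature:
    print("Signature invalid: the file was altered or did not come from this device.")
```

Any later edit, including the routine AI filtering Rajat mentions, changes the bytes and breaks the signature, which is exactly the blurring he warns about.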
Artists find the debate both comic and cruel. Designer Aatmashri Sanyal recalls her reaction to the Sehgal scandal. “I made a reel joking that I felt left out because they didn’t steal my work,” she says. “Not being plagiarised almost felt like I hadn’t made it.” She supports labelling but wants nuance. “During my internship days, we used AI to replace stock photos, not real art. That feels ethical. What worries me is when people just generate entire posters and call it design.” She suggests a consent-and-royalty model in which artists choose whether their work can be used to train algorithms and are paid each time it is.
“Every AI output should disclose its source and modification history,” says Kashyap. “Intermediaries must maintain audit trails, respond quickly to takedown requests, and submit transparency reports.” He argues that India should form a national mission to track synthetic media. “AI should stay a tool for creativity, research and governance. The problem begins when deception becomes the product.”
While the rules are a step forward, they leave many questions unanswered, and the spectacle continues. A world built on imitation is learning to impersonate itself, while the law chases it to ask what truth looks like this time.
Gfx:
1. Deepfakes are now “virtually impossible to discern,” according to the Delhi and Bombay High Courts.
2. The spread of synthetic media has raised serious concerns about what counts as truth online.
3. A Stanford study showed that even when content was labelled “AI-generated,” most users still believed it.
4. India’s new draft rules require all AI content to be clearly tagged — 10 per cent label area for images and 10 per cent duration for audio or video.
5. Social-media platforms with over five million users must collect and verify declarations from uploaders, with penalties for false claims including takedowns and loss of legal protection.
6. Lawyer Aditya Kashyap says current laws on obscenity and fraud are outdated for the AI era and urges new penalties that reflect scale and intent.