AI Videos Blur The Line Between Real And Fake, Help Fraudsters
AI-generated videos are now so polished that it is nearly impossible to tell a fake from the original.

Hyderabad: Recently, a fraudster allegedly used an AI-generated video of Union finance minister Nirmala Sitharaman in an advertisement to lure potential investors into his fraudulent investment schemes. The result: a 70-year-old doctor lost ₹20 lakh.
Welcome to the new world of Artificial Intelligence, where the real and the virtual merge seamlessly into a reality that ordinary people, especially those who belong to the old world, find hard to distinguish.
Earlier clues, such as a lack of blinking or poor lip sync, no longer apply. AI-generated videos are now simply too polished to tell a fake from the original.
Experts and police officials said users must stay cautious and rely on genuine, verified websites if they want to avoid being defrauded.
Speaking to Deccan Chronicle, cybercrime investigator and trainer Sandeep, founder and CEO of Sytech Labs, said, “The artificially generated faces either already exist on the Internet or are created entirely from scratch, and these are being misused by fraudsters. On the flipside, pictures can also be created using multiple applications and platforms that generate a face from physical attributes like race, nationality, complexion, height and facial features. The people who create these videos are animators and graphic designers.”
Sandeep added that the videos are programming-based and designed by experts. He said, “The creators are usually animators. They already have the knowledge of design and graphics, apply programming to it and, voila!”
Russians, Israelis and Americans have already mastered AI platforms, and creating AI-generated videos is a cakewalk for them; Indians, on the other hand, are catching up, opined Sandeep. “Technology is being misused. In the near future, the chances of differentiating between an original and a fake will be remote.”
A year ago, a deepfake video of actress Rashmika Mandanna was widely circulated; the Delhi police arrested a BTech graduate in that case.
According to D. Kavitha, DCP, Cybercrime, Hyderabad commissionerate, “Videos of high-profile persons like ministers, billionaires and top officials are also being circulated. A moment’s thought can help people avoid falling prey to such cyber offences. No minister will release such videos or promote any schemes promising profits. We have also issued an advisory with clues to spot the differences.”
Nagalaxmi, DCP, Cybercrime, Rachakonda commissionerate, said one should visit only verified applications, platforms or links, avoid fake websites and refrain from downloading anything suspicious to escape becoming a victim of such cyber offences. “A minute’s thought before acting will also help,” said Pakeezha, admin SI.
How to spot AI fakes
1. Visual and audio clues
Face and body inconsistencies
Unnatural blinking or eye movement
Misaligned facial expressions with emotion or voice
Skin texture or lighting inconsistencies
Weird head movements or awkward body motion
Hands or fingers might look distorted or unnatural
Lip sync issues
Mouth movements may not match speech precisely
Artifacts and glitches
Blurring, pixelation, or weird edges, especially around faces
Flickering in the background or inconsistent lighting
Audio quality
AI voices may sound slightly robotic or too perfect.
Ambient sounds might not match the setting (e.g., no echo in a large room).
2. Use AI-Detection tools
Here are some tools that can help detect deep fakes or AI-generated videos:
Deepware Scanner: Scans videos for deep fake content
Hive Moderation: Detects manipulated media
Reality Defender: Real-time media verification
Microsoft Video Authenticator: Analyses videos for signs of manipulation
InVID (browser plugin): For verifying video sources
3. Metadata Analysis
Use tools like ExifTool to analyze video metadata
AI-generated videos might have missing, manipulated, or inconsistent metadata (like camera model, time/date).
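As an illustrative sketch of this metadata check (the field names, sample tag dictionaries and helper function below are hypothetical examples, not part of the police advisory), one could first extract a video's tags with ExifTool (e.g. `exiftool -json video.mp4`) and then flag clips missing the fields a real camera or phone normally writes:

```python
# Illustrative sketch: flag videos whose metadata lacks fields that a
# genuine camera or phone recording usually carries. The field names and
# the sample dictionaries are assumed examples, not real ExifTool output.

EXPECTED_FIELDS = ["Make", "Model", "CreateDate", "GPSLatitude"]

def suspicious_metadata(metadata):
    """Return the expected fields that are missing or empty."""
    return [field for field in EXPECTED_FIELDS if not metadata.get(field)]

# A clip saved straight from a phone camera typically carries these tags:
phone_clip = {"Make": "Samsung", "Model": "SM-G991B",
              "CreateDate": "2024:11:02 10:15:33", "GPSLatitude": "17.385"}

# An AI-generated clip often carries none of them, only encoder info:
generated_clip = {"Encoder": "Lavf58.76.100"}

print(suspicious_metadata(phone_clip))      # → []
print(suspicious_metadata(generated_clip))  # → ['Make', 'Model', 'CreateDate', 'GPSLatitude']
```

Missing tags alone do not prove a fake, since re-encoding or social-media uploads also strip metadata, so this check is best used alongside the other steps in the box.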
4. Cross-Verification
Reverse search the video using Google Lens, TinEye, or InVID.
Check if the video exists on reliable news sites or social platforms.
Look for the original source; suspicious videos often lack credible origins
5. Behavioral and contextual red flags
Sensational or highly emotional content that lacks source verification
Videos shared with vague or no background information
Lack of shadows, poor object interaction, or strange reflections