
Artificial Intelligence: Double Trouble

AI has made it disturbingly easy for bad actors to manipulate images

The perils of posting your children’s pictures on social media are well known… and things are getting worse.

Diffusion models, a new class of AI tools, are the reason. Tech-savvy predators are taking real-life pictures from the Internet, including shots featured on social media sites and personal blogs, and generating images of children engaged in sexual acts. It is almost impossible to determine whether an image is real or AI-generated.
Most worrying of all, these images have the potential to disrupt the tracking systems that block child sexual abuse material on the web.

How it works

A diffusion model generates unique images by learning how to de-noise, or reconstruct, pictures. Depending on the prompt, it can create wildly imaginative images based on the statistical properties of its training data within seconds, and a single command can produce several images at once.

The tool does not require much technical sophistication, unlike earlier deepfake methods, which superimposed children’s faces onto adult bodies.

Alarm bells in USA

In the United States, thousands of AI-generated child-sex images have been found on forums across the dark web, a layer of the Internet visible only with special browsers. Shockingly, some participants even shared detailed guides for other paedophiles to make their own creations using the technology.

‘Floodgates have opened’

The floodgates have already opened, says Srijan Kumar, a computer scientist and an assistant professor at the Georgia Institute of Technology. “AI digital tools allow manipulation at a scale that was not possible before. Fake images have been around for a while, but now they can directly target an incredibly high number of potential victims. It hardly takes any time,” says Srijan, who was named to the Forbes ‘30 Under 30’ list for his work on social media safety and integrity.

So do we have detection and mitigation solutions? “There are some, but they are preliminary and do not capture the different varieties of generated content. One main challenge is that as soon as new detectors are created, the generative technology evolves to incorporate ways to avoid detection from those detectors,” explains the computer scientist, who describes it as an ever-evolving arms race of sorts.

Srijan says generative AI models have reduced the cost of creating not only high-quality legitimate content but also harmful content. “Now bad actors can easily create realistic-looking, but entirely fabricated, misinformative content,” he says.

Emerging technology

Stable Diffusion is the most flexible AI image generator and is entirely open source. Authorities in the West believe that Stable Diffusion, which can be run in an unrestricted manner, is what most bad actors rely on to generate exactly the kind of images they want. The deep generative artificial neural network is said to be a significant advancement in image generation, unlike earlier text-to-image models such as DALL-E and Midjourney, which are accessible only through cloud services.

Sharenting perils

Nirali Bhatia, a cyber psychologist and psychotherapist, feels ‘sharenting’ (the act of parents sharing information, photos, and videos of their children on social media platforms) has become increasingly prevalent with the growing social media culture.

“Parents enjoy documenting their children’s lives and feeling proud when others react positively to their posts,” she says.

There is also a commercial aspect, with some parents aspiring to become influencers or brand ambassadors, leading them to share even more of their child's life online.

While sharenting has its benefits, like sharing joyous moments and feeling a sense of belonging, it also raises concerns about privacy, consent, and security risks.

“It can expose personal details and create a long-lasting digital footprint with potential consequences in the future. The growing use of AI-powered tech and algorithms can victimise your child with a deepfake image or video and even track and analyse online behaviour to identify potential targets for predatory behaviour,” explains Nirali, founder of CyberBAAP, an anti-cyberbullying organisation.

She says it is extremely important for parents to be cautious about what they share. “Limiting the audience and avoiding sharing sensitive information can help protect children from potential risks,” she feels.

Schools do their bit

“More than the parents, it’s the students who need to be educated on digital hygiene,” feels Skand Bali, Principal of the Hyderabad Public School, Begumpet. He says HPS has a Tech Club that is responsible for initiating open and honest dialogue about Internet safety, digital ethics, and the responsible use of technology. “Through special monthly sessions, quizzes and regular engagement in classrooms, we encourage students to share any concerns they might have encountered while using online platforms,” says Bali.
