Why Women Are Slower to Swoon Over AI Despite Its Boom

India has had its fair share of AI controversies, one of which was the deepfake video of actor Rashmika Mandanna in 2023

Update: 2025-12-28 08:43 GMT


2025 was the year of Artificial Intelligence. The past year gave us everything from Ghibli-style portraits and AI cabinet ministers to fears of human replacement and extinction, and even suicide controversies - all tied to AI. Undeniably, it is now part of everyday life, and those who are not jumping on the bandwagon are left behind - far, far behind.

One of the biggest demographics among those unwilling to make the jump, it turns out, is women. Between 2022 and 2024, only 27% of ChatGPT mobile app downloads came from women. Over the same period, women accounted for roughly 42% of global visitors to the ChatGPT and Perplexity websites, and just 31% of Anthropic's.

Many would be quick to reach for the generalisation that "women don't like tech", but for those who remove the misogyny-laden glasses and look closely, the reason is more socially rooted.

According to a recent Harvard Business School meta-analysis of 18 studies, women are 22% less likely than men to use generative AI websites and apps, both at work and in their personal lives. The data shows that this trend holds across countries, sectors and occupations. We are, of course, talking about those on the privileged side of the digital divide.

“Most women's first brush with AI was news about some deepfake, or pornographic image created with AI,” says Anwesha Paul, a technical writer based in Hyderabad.

“Then came stories of biases that AI has inherited from society. I remember the viral social media post about how, if you ask ChatGPT to suggest the ten best authors ever, it would suggest only the names of white male authors. All in all, with systemic biases embedded into their training data, AI becomes another facet of the misogynistic society that women and underprivileged communities have to deal with,” she adds.

India had its fair share of AI controversies, one of which was the deepfake video of actor Rashmika Mandanna in 2023. Mandanna’s face was morphed onto an Instagram video posted by a British-Indian woman named Zara Patel.

While the accused was promptly arrested, the incident raised the question of how this could potentially affect women who lack the resources to fight something like this. "If this happened to me when I was in school or college, I genuinely can't imagine how could I ever tackle this," Mandanna wrote on social media after the incident.


Gendered hostilities

From Photoshop fearmongering to metaverse assaults to deepfakes of well-known celebrities, tech has never been friendly to women. We now stand at the point in tech evolution where Grok, the AI chatbot on Elon Musk's X, produces sexually explicit content on demand, with far fewer restrictions than ChatGPT.

The AI tools most hiring teams use tend to be biased against women candidates, a Cornell University study has found. The LLMs tend to favour men, especially for higher-paying jobs. Generative AI is, to no one's surprise, less likely to suggest the names of women experts, poets, artists, and so on when you ask for a list, unless you specify that you want women included. When asked to create an image of a secretary or a nurse, these models usually generate women; when asked to depict a manager, doctor, or professor, they usually generate men.

“It's actually scary how big data is farming so many pictures online and creates extremely new characters out of it. This is especially concerning because a lot of these AIs are trained to make fake porn videos out of real women's pictures without them ever actually coming to know about it,” says Ambika Pradhan, a research student based in Vienna.


Is AI inherently misogynistic?

Machines, in this case LLMs, do not have inherent social or moral values that they act on out of their own conviction. “To call these machines misogynistic the same way we would call a person misogynistic is misleading. It makes the AI systems look like autonomous moral agents who can act with their own misogynistic ends. This is not true,” says Gaya Hadiya, a research student from IIT Dharwad who focuses on AI ethics.

The increasing popularity of “Dark AI”, as opposed to ethical AI, is another worrying trend. With GenAI becoming ever more accessible, those with malicious intent are readying their arsenal too. Combine this with the growing threat of online manospheres and the radicalisation of men into Andrew Tate fandom, and it is no surprise that these spaces don't look friendly to women.

“While these systems provide results that are misogynistic, it is of utmost significance that we recognise these results as caused by human intentions. Ultimately, it is the misogyny of the people that translates into the functioning of the machines people make,” Gaya adds.
