Our daily visual landscape is dominated by two parallel streams of imagery. One offers authentic images and footage that reflect reality, covering politics, sports, news, and entertainment. The other consists of low-quality AI-generated content, often created with little meaningful human involvement. This category includes trivial and repetitive visuals: playful depictions of celebrities, fantasy scenes, and anthropomorphised animals. The sheer volume and variety of this content are astonishing, permeating everything from our social media feeds to WhatsApp messages. The consequence is a blurring, and outright distortion, of reality.
Fil Menczer first discovered what he refers to as "social bots" in the early 2010s while studying the spread of information on Twitter. His analysis revealed clusters of accounts that raised red flags; some shared the same post thousands of times while others repeatedly reshared content from various sources. In that moment, he thought, “These cannot be human.”
This realisation marked the start of his extensive exploration into the world of bots. As a respected professor of informatics at Indiana University in Bloomington, Menczer has examined how bots disseminate information, manipulate behaviour, and sow division among people. In 2014, he was part of a team that developed BotOrNot, a tool designed to assist users in identifying fake accounts online. Today, he is widely recognised as a leading authority in detecting and studying internet bots.
The rise of internet bots has lent momentum to the “dead internet theory”, which has circulated since 2021. It argues that the internet has become a vast, inhuman wasteland dominated by algorithmically optimised, copycat content. The theory attributed this change to a hidden government conspiracy, making it easy to dismiss. Yet, with the emergence of tools like ChatGPT, it now seems eerily prescient. The atmosphere on social media feels increasingly strange, searches yield lower-quality results, and entire AI-generated news platforms have appeared overnight. Companies like Meta Platforms Inc. envision a future in which AI generates much of the content on Facebook and Instagram. Meanwhile, Wikipedia is straining under the weight of AI crawlers scraping it for data to feed their models.
This creates a feedback loop in which AI-generated content is produced to satisfy AI-driven recommendation systems, potentially reducing humans to mere spectators.
As algorithms rapidly evolve, they feed users more of what the system determines is interesting, making it increasingly difficult for anyone to curate their media consumption. Instead of engaging with an objective reality, users find themselves immersed in personalised, subjective worlds. This creates a strange disconnect, where the urgency to address the world's crises is dulled by how information is presented. In this context, people risk sleepwalking into disaster, not out of ignorance, but due to the paralysis induced by a filtered information ecosystem.
The article is authored by Nikhila Gayatri Kalla, a student at Christ University, Bengaluru, interning with Deccan Chronicle.