
Tech This Week | Should India have its own social media content moderation rules?

Platforms are diverse and a one-size-fits-all approach will not work.

Earlier this week, The Economic Times (ET) broke the story that top officials had begun preliminary discussions on whether India should have its own guidelines on content moderation. What always strikes me when I read news on this subject is that the only consensus around platform moderation is a negative one. No one is ever happy with the amount of moderation platforms currently undertake: some people think it is too little, while others think it is too much.

To cut through the clutter, my immediate instinct is to look at the solution being proposed as a fix. In the ET story, an unnamed official suggested that there needs to be a standard set of rules for all platforms.

On the surface, that idea might make sense to a lot of people. You can argue that platforms such as Twitter and Facebook should not be the judges of speech around the world; that role should belong to the state. Every government, whether or not it is elected by the people, has (or should have) its own policies around speech. It makes sense to extend those rules to cyberspace and for platforms to follow them.

This principle is not universally accepted: some states can be oppressive, and platforms with American values play an essential part in keeping free speech alive. But let us set that debate aside here and focus instead on the challenges of having a standard set of rules for all platforms.

There are three significant reasons why governments may not be well suited to coming up with content moderation guidelines. Firstly, a standard set of rules that everyone must follow greatly undermines the diversity that exists on the internet. Tarleton Gillespie’s book, ‘Custodians of the Internet’, explains this brilliantly. All platforms moderate: app stores run by Apple, Google, Sony, and Samsung; e-commerce websites such as Amazon and Flipkart; streaming services such as Netflix and Prime Video; and, of course, social media websites such as Facebook, Twitter, and Reddit.

App stores, e-commerce platforms, streaming services, and social media platforms are very different in the services they provide. Even within these clear-cut distinctions, companies have different attitudes towards, and challenges with, moderation. For instance, Reddit has historically been committed to free speech, not just through its admin team but also through the kind of community it ended up amassing. Historically, this has included pages like r/jailbait and r/thedonald, which are worth Googling for a healthy session of doom scrolling. These communities would likely never have existed on Instagram or YouTube, thanks to their admin teams, communities, and attitudes to free speech.

Not only are platforms unequal and hard to group under a standard set of rules, but people’s attitudes towards speech are also not in unanimous agreement. Something that might be free speech for A might qualify as offensive for B. In the ET story, that instance happened to be Twitter allegedly terming the proposed Ram Mandir ‘controversial’ in its curated news section. More often than not, moderation decisions are clear-cut: child pornography and videos glorifying terrorism are textbook cases of content that is strictly against community guidelines.

However, the problem exists at the margins and at scale. For instance, if one in a million posts is problematic on Twitter, which sees on the order of 500 million tweets a day, that works out to around 500 pieces of content a day, and even more for platforms like Instagram and Facebook. And these are not easy decisions to make. The Pulitzer Prize-winning photo, Napalm Girl, documents the atrocities of war but could also be taken down under community guidelines for being obscene. As a result, platforms are constantly revising their policies to keep them up to date.
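For readers who like the back-of-envelope version of that scale argument, the arithmetic looks roughly like the sketch below. The daily-volume figures are my own rough assumptions for illustration, not numbers from the ET story.

```python
# Back-of-envelope sketch: expected number of "edge case" posts a platform
# must judge each day, given a daily post volume and a 1-in-a-million rate.
# The volumes below are rough, illustrative assumptions, not official figures.

def problematic_posts_per_day(daily_posts: int, rate: float = 1e-6) -> float:
    """Expected number of problematic posts per day at the given rate."""
    return daily_posts * rate

if __name__ == "__main__":
    assumed_volumes = {
        "Twitter (~500M posts/day, assumed)": 500_000_000,
        "Facebook (~4B posts/day, assumed)": 4_000_000_000,
    }
    for platform, volume in assumed_volumes.items():
        count = problematic_posts_per_day(volume)
        print(f"{platform}: ~{count:.0f} hard moderation calls a day")
```

Even at a one-in-a-million rate, the number of genuinely hard calls per day runs into the hundreds or thousands, which is why the edge cases, not the textbook ones, dominate the workload.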

Lastly, it may not always seem like it, but the internet is filled with new opportunities for innovation. When a new platform comes about, it needs its own approach to moderation. For example, when Google Glass came about, Google had to ban porn apps designed for the device.

All of these challenges, from the diverse nature of platforms to people’s differing perceptions of speech and the possibility of innovation, make content moderation a hard debate to settle. As is evident from the sheer number of different platforms that exist, this is not a challenge that can be solved centrally. Companies tackling these issues need a certain amount of flexibility, and that is hard to achieve through a standard set of rules laid out by the government. It is not the textbook cases that end up being the problem but the content at the edges, and that is what makes platforms better suited to moderate it themselves.
