
Tech This Week | Will Facebook's 'Supreme Court' make the internet a safer place?

Best case scenario: The body achieves incremental progress by laying out key principles that guide Facebook's content moderation efforts.

Last week, Facebook’s Oversight Board (often dubbed Facebook’s Supreme Court) announced its co-chairs and first twenty members. The board will allow users to appeal the removal of their posts and, upon request, will also issue advisory opinions to the company on emerging policy questions.

How did we get here? With billions of users, Facebook has had a content moderation problem for a while. In an ideal world, good posts would stay up and bad posts would be pulled down. But that is not how it works. When it comes to Facebook posts, morality is not always black and white. For most posts, arguments can be made on either side about where the right to free speech ends, or about whether politicians should be allowed to lie in ads.

The status quo has historically been that Facebook takes these decisions and the world moves on. However, that process has generally been perceived as a black box. There has not been much transparency around how these decisions are taken, apart from the minutes of Facebook’s Product Policy Forum, which are a mixed bag.

An intended and anticipated consequence of this board is that it will instil more transparency into the process of deciding what stays up and why. By reporting on what the board discussed and did not discuss, it will help bring more clarity around the most prevalent problems on the platform. It may help tell us whether bullying is a bigger problem than hate speech, or how (and where) harassment and racism manifest themselves.

Then there is the question of whether the decisions taken by the board will be binding. Mark Zuckerberg has said that “The board’s decisions will be binding, even if I or anyone at Facebook disagrees with it,” so it is safe to say that Facebook vows they will be. The board will have the power to remove particular pieces of content. The open question is whether its judgements will also apply to content that is similar or identical, since it would make no sense for the board to pass a decision on every single piece of content on Facebook.

Regarding this, Facebook’s stance is, “in instances where Facebook identifies that identical content with parallel context — which the board has already decided upon — remains on Facebook, it will take action by analysing whether it is technically and operationally feasible to apply the board’s decision to that content as well”.

In simple terms, board members (who will not all be computer engineers) may make recommendations that cannot be implemented across the platform, in which case Facebook will not replicate the decision for every single piece of content on the platform. And if the board does make an extremely radical recommendation (say, shutting down the like button), Facebook can ignore it.

On the bright side, as far as content moderation is concerned, there seems to be little reason for Facebook to go against the decision of the board anyway, considering the body has been established to take this responsibility (and blame) off Facebook’s hands.

The billion-dollar question is whether it will make Facebook a safer place. The short answer is no (followed by “too early to say”). The board is only going to be able to hear a few dozen cases at best. New members of the board have committed an average of 15 hours a month to the job, which is to moderate what stays up for a user base of 3 billion people. Even if the members were full time, the number of cases the board could see and pass judgement on would be a drop in the ocean. Based on how the body is structured, it makes sense for the members to deliberate on the most high-profile or charged cases (such as political advertising or the presence of deepfakes on the platform).

It has historically been hard for society to move the needle forward, and the board is an attempt to do just that. The best-case scenario here is that the body achieves incremental progress by laying out key principles that guide Facebook’s content moderation efforts. As for whether the board will make Facebook (and by extension, the internet) a safer place, it is too early to say, but it seems unlikely. For every high-profile deepfake of Nancy Pelosi or Mark Zuckerberg, there are hundreds of thousands of content moderation decisions that need to be made. Low-profile instances of misinformation, bullying, harassment, and abuse plague platforms like Facebook, Instagram, and WhatsApp, and they will not magically cease to exist.

Instead, content moderation at Facebook is going to be a long, fraught battle, led by the board. This is the beginning of one of the world’s most important and consequential experiments in self-regulation. Time will tell how it shapes up.

(Source: Deccan Chronicle)