Twitter, Facebook pull extremist posts following Nice attack
Twitter moved swiftly to remove posts from Islamic extremists glorifying a truck attack in Nice, France, watchdog groups said on Friday, in a rare round of praise for a platform that has often struggled to contain violent propaganda.
A spate of violence over the past several months has posed numerous challenges to social media companies. Friday's unsuccessful military coup in Turkey was marked first by restrictions on social media, internet monitoring groups said, but the crackdown appeared to ease as the events unfolded and numerous citizens broadcast live video on Facebook and sent tweets.
US and French authorities on Friday were still trying to determine whether the Tunisian man who drove a truck into Bastille Day crowds on Thursday, killing 84 people, had ties to Islamic militants.
At least 50 Twitter accounts praising the attack used an Arabic-language hashtag for Nice, according to the Counter Extremism Project, a private group that monitors and reports extremist content online. Many accounts appeared almost immediately after the attack and shared images praising the carnage, the group said.
The pattern was similar to what was seen on Twitter after the attacks in Paris last year and Brussels earlier this year. This time, however, Twitter, which once took a purist approach to free speech but has since revised its rules, took action much more quickly. The company has always had policies banning violent content, such as advocacy of terrorism, and recently made them more explicit.
‘Twitter moved with swiftness we have not seen before to erase pro-attack tweets within minutes,’ the Counter Extremism Project said in a statement. ‘It was the first time Twitter has reacted so efficiently.’
Rabbi Abraham Cooper, head of the Simon Wiesenthal Center's Digital Terrorism and Hate project, also said Twitter had responded with unusual alacrity.
Twitter did not provide any information about account suspensions, but said in a statement that it condemns terrorism and bans it on its site.
Twitter, Facebook and other internet firms have ramped up their efforts over the past two years to quickly remove violent propaganda that violates their terms of service.
Both companies continue to face major challenges in distinguishing between graphic images that are shared to glorify or celebrate attacks and those shared by witnesses who are documenting events.
Facebook's ‘community standards’ dictate what types of content are and are not allowed on the platform. Those standards explicitly ban ‘terrorism’ and related content, such as posts or images that celebrate attacks or promote violence.
Yet the company's policies around graphic images are more nuanced. Facebook, like most large internet companies, relies on users and eagle-eyed advocacy groups to report objectionable content to teams of human editors, who then review each submission and decide whether a post should be deleted.
At Facebook, those reviewers receive more specific guidance beyond the public community standards when it comes to deciding what to do with reported graphic images, a spokeswoman said. But she declined to elaborate on the company's criteria.
‘One of the most sensitive situations involves people sharing violent or graphic images of events taking place in the real world. In those situations, context and degree are everything,’ Facebook said in a blog post last week.
New Tactics
Internet companies have continually updated their terms of service over the past two years to establish clearer and in many cases stricter ground rules on what content is permissible on their platforms.
In response to pressure from US lawmakers and counter-extremism groups, Facebook and YouTube have recently moved toward implementing some automated processes to block or rapidly remove Islamic State videos and similar material.
That has not stopped Islamist militants from celebrating attacks online and even adapting their tactics. Some Islamic State supporters attached globally trending hashtags such as #PrayForNice, #NiceAttack and #Nice to tweets celebrating the Nice attack so that their posts would reach a wider audience, according to screenshots from the Wiesenthal Center.