
Here's how Facebook is countering terrorism

Facebook wants its platform to be a hostile place for terrorists.

In the wake of recent terror attacks, people have questioned the role of tech companies in fighting terrorism online. Facebook wants to answer those questions head on. “We agree with those who say that social media should not be a place where terrorists have a voice. We want to be very clear how seriously we take this — keeping our community safe on Facebook is critical to our mission,” say Monika Bickert, Director of Global Policy Management, and Brian Fishman, Counterterrorism Policy Manager, in a Facebook blog post.

Let’s walk through some of Facebook’s behind-the-scenes work, including how they use artificial intelligence to keep terrorist content off Facebook, something they have not talked about publicly before. This article will also look at the people who work on counterterrorism, some of whom have spent their entire careers combating terrorism, and the ways Facebook collaborates with partners outside the company.

Facebook’s stance is simple: there’s no place on Facebook for terrorism. Facebook removes terrorists and posts that support terrorism whenever they become aware of them. When they receive reports of potential terrorism-related posts, they review those reports urgently and with scrutiny. And in the rare cases when they uncover evidence of imminent harm, they promptly inform the authorities. Although academic research finds that the radicalization of members of groups like ISIS and Al Qaeda primarily occurs offline, Facebook knows that the internet does play a role, and they don’t want Facebook to be used for any terrorist activity whatsoever.

“We believe technology, and Facebook, can be part of the solution. We’ve been cautious, in part because we don’t want to suggest there is any easy technical fix. It is an enormous challenge to keep people safe on a platform used by nearly 2 billion people every month, posting and commenting in more than 80 languages in every corner of the globe. And there is much more for us to do. But we do want to share what we are working on and hear your feedback so we can do better,” adds Bickert.

Artificial Intelligence

Facebook wants to find terrorist content immediately, before people in the community have seen it. Already, the majority of the accounts Facebook removes for terrorism are ones it finds itself. But the company knows it can do better at using technology, and specifically artificial intelligence, to stop the spread of terrorist content on Facebook.

“Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off Facebook. We are currently focusing our most cutting edge techniques to combat terrorist content about ISIS, Al Qaeda and their affiliates, and we expect to expand to other terrorist organizations in due course. We are constantly updating our technical solutions, but here are some of our current efforts,” said Bickert.

Image matching: When someone tries to upload a terrorist photo or video, Facebook’s systems look for whether the image matches a known terrorism photo or video. This means that if they had previously removed a propaganda video from ISIS, they can work to prevent other accounts from uploading the same video to the site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.
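Facebook has not published the details of its matching system, but the general idea can be illustrated with an open technique such as a difference hash (“dHash”). The sketch below, in Python with the Pillow library, computes a compact fingerprint for an image and compares it against fingerprints of previously removed content; the stored hash value and the distance threshold are hypothetical.

```python
# A minimal sketch of image fingerprint matching using a difference hash.
# Facebook's actual system is unpublished; this only illustrates the idea
# of matching new uploads against fingerprints of known terrorist imagery.
from PIL import Image

def dhash(path, size=8):
    """64-bit fingerprint: each bit records whether a pixel is brighter
    than its right-hand neighbour in a (size+1) x size grayscale thumbnail."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = img.load()
    bits = 0
    for y in range(size):
        for x in range(size):
            bits = (bits << 1) | int(px[x, y] > px[x + 1, y])
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical fingerprints of previously removed propaganda images.
known_bad = {0x9F3B2C4D5E6F7081}

def matches_known(path, threshold=10):
    """Flag an upload if it is within a few bits of any known-bad image."""
    return any(hamming(dhash(path), k) <= threshold for k in known_bad)
```

Matching on fingerprints rather than raw bytes means a re-encoded or slightly cropped copy of removed content can still be caught, which is why a small Hamming-distance threshold is used instead of exact equality.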

Language understanding: Facebook has also recently started to experiment with using AI to understand text that might be advocating for terrorism. They are currently experimenting with analysing text that they have already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so they can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.
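Facebook has not disclosed its models, but the workflow described above (train on text already removed, score new posts, feed reviewer decisions back in) can be sketched with a standard text classifier. The snippet below uses scikit-learn; all example texts and the review step are hypothetical placeholders.

```python
# A minimal sketch of text-based detection: learn from posts already
# removed for praising or supporting terrorist organizations (label 1)
# versus ordinary posts (label 0). Training data here is placeholder text;
# Facebook's real models and features are unpublished.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

removed_posts = ["placeholder text of a removed propaganda post"]
ordinary_posts = ["placeholder text of an ordinary benign post"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # word and bigram signals
    LogisticRegression(),
)
model.fit(
    removed_posts + ordinary_posts,
    [1] * len(removed_posts) + [0] * len(ordinary_posts),
)

# Score a new post; high-scoring posts would go to human review, and the
# reviewers' decisions become fresh labels: the feedback loop that lets
# the model improve over time.
score = model.predict_proba(["some new post text"])[0][1]
```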

Removing terrorist clusters: “We know from studies of terrorists that they tend to radicalise and operate in clusters. This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to ‘fan out’ to try to identify related material that may also support terrorism,” said Bickert. Facebook uses signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.
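The “fan out” step can be pictured as a walk over the friendship graph starting from accounts already disabled for terrorism. The sketch below is a hypothetical illustration: the toy graph, the accounts, and the 50% neighbourhood threshold are all invented, and in practice such a signal would surface candidates for human review rather than trigger automatic removal.

```python
# A minimal sketch of "fanning out" from known terrorist accounts: walk the
# friendship graph and surface accounts whose friends are largely accounts
# already disabled for terrorism. All data and thresholds are hypothetical.
from collections import deque

friends = {  # account -> set of friends (toy data)
    "seed": {"a", "b"},
    "a": {"seed", "b", "c"},
    "b": {"seed", "a"},
    "c": {"a", "d"},
    "d": {"c"},
}
disabled = {"seed"}  # accounts already disabled for terrorism

def fan_out(friends, disabled, threshold=0.5):
    flagged = set()
    queue = deque(disabled)
    while queue:
        acct = queue.popleft()
        for nbr in friends.get(acct, ()):
            if nbr in disabled or nbr in flagged:
                continue
            nbr_friends = friends.get(nbr, set())
            bad = len(nbr_friends & (disabled | flagged))
            if nbr_friends and bad / len(nbr_friends) >= threshold:
                flagged.add(nbr)   # candidate for review
                queue.append(nbr)  # keep fanning out from here
    return flagged

print(fan_out(friends, disabled))  # accounts to surface for human review
```

In a real system this graph signal would be combined with the other attributes Bickert mentions, such as shared attributes with disabled accounts, before anything reaches a reviewer.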

Recidivism: Facebook has also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, they have been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. “We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly,” said Bickert.
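Facebook has not said which signals it uses to spot returning offenders, but the idea of matching a new signup against previously disabled accounts can be sketched as a weighted attribute comparison. The attribute names, weights, and threshold below are all hypothetical.

```python
# A minimal sketch of recidivism detection: score a new signup against
# records of accounts previously disabled for terrorism. All attribute
# names, weights and the threshold are hypothetical illustrations.
WEIGHTS = {"device_id": 0.5, "profile_photo_hash": 0.3, "display_name": 0.2}

def recidivism_score(new_account, disabled_accounts):
    """Best-match score in [0, 1] against known disabled accounts."""
    best = 0.0
    for old in disabled_accounts:
        score = sum(
            weight
            for attr, weight in WEIGHTS.items()
            if new_account.get(attr) is not None
            and new_account.get(attr) == old.get(attr)
        )
        best = max(best, score)
    return best

disabled = [{"device_id": "dev-42", "profile_photo_hash": "ph-abc"}]
signup = {"device_id": "dev-42", "profile_photo_hash": "ph-abc",
          "display_name": "New Name"}

if recidivism_score(signup, disabled) >= 0.7:
    print("likely recidivist account: queue for review")
```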

Cross-platform collaboration: Because Facebook does not want terrorists to have a place anywhere in the family of Facebook apps, they have begun work on systems to enable them to take action against terrorist accounts across all of their platforms, including WhatsApp and Instagram. Given the limited data some of their apps collect as part of their service, the ability to share data across the whole family is indispensable to their efforts to keep all the platforms safe.

Human Expertise

AI can’t catch everything. Figuring out what supports terrorism and what does not isn’t always straightforward, and algorithms are not yet as good as people at understanding this kind of context. A photo of an armed man waving an ISIS flag might be propaganda or recruiting material, but it could also be an image in a news story. Some of the most effective criticism of brutal groups like ISIS uses the group’s own propaganda against it. To understand more nuanced cases, Facebook needs human expertise.

Bickert offers more insight into how Facebook handles this behind the scenes.

Reports and reviews: Our community — that’s the people on Facebook — helps us by reporting accounts or content that may violate our policies — including the small fraction that may be related to terrorism. Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training.

Terrorism and safety specialists: In the past year we’ve also significantly grown our team of counterterrorism specialists. At Facebook, more than 150 people are exclusively or primarily focused on countering terrorism as their core responsibility. This includes academic experts on counterterrorism, former prosecutors, former law enforcement agents and analysts, and engineers. Within this specialist team alone, we speak nearly 30 languages.

Real-world threats: We increasingly use AI to identify and remove terrorist content, but computers are not very good at identifying what constitutes a credible threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.

Partnering with Others

Working to keep terrorism off Facebook isn’t enough because terrorists can jump from platform to platform. That’s why partnerships with others — including other companies, civil society, researchers and governments — are so crucial.

Industry cooperation: In order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of “hashes” — unique digital fingerprints for photos and videos — for content produced by or in support of terrorist organizations. This collaboration has already proved fruitful, and we hope to add more partners in the future. We are grateful to our partner companies for helping keep Facebook a safe place.
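The consortium has not published its database format, but the core property is simple: partners exchange fingerprints of flagged media, never the media itself. Below is a minimal sketch; an ordinary SHA-256 digest stands in for the perceptual hashes a real system would favour, since those survive re-encoding.

```python
# A minimal sketch of a shared hash database: companies contribute and look
# up fingerprints of flagged photos/videos without ever sharing the content.
# SHA-256 stands in here for the perceptual hashes a real system would use.
import hashlib

shared_db = set()  # fingerprints contributed by all partner companies

def fingerprint(media: bytes) -> str:
    return hashlib.sha256(media).hexdigest()

def contribute(media: bytes) -> None:
    """Add a flagged item's fingerprint; the bytes themselves stay private."""
    shared_db.add(fingerprint(media))

def known_to_partners(media: bytes) -> bool:
    """Check an upload against every partner's contributions at once."""
    return fingerprint(media) in shared_db

contribute(b"bytes of a flagged video")  # e.g. contributed by one partner
print(known_to_partners(b"bytes of a flagged video"))  # True for all others
```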

Governments: Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that is impossible for companies to develop independently. We have learned much through briefings from agencies in different countries about ISIS and Al Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration by organizations such as the EU Internet Forum, the Global Coalition Against Daesh, and the UK Home Office.

Encryption: We know that terrorists sometimes use encrypted messaging to communicate. Encryption technology has many legitimate uses – from protecting our online banking to keeping our photos safe. It’s also essential for journalists, NGO workers, human rights campaigners and others who need to know their messages will remain secure. Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies.
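The reason the platform cannot read such messages is structural rather than a policy choice: in end-to-end encryption, only the two endpoints hold key material. Here is a bare-bones illustration (not WhatsApp’s actual Signal protocol, which adds ratcheting and authentication) using X25519 key agreement and AES-GCM from Python’s cryptography library.

```python
# A bare-bones end-to-end encryption sketch: the server relays only
# (nonce, ciphertext) and holds no keys, so it cannot read the message.
# Real messengers use the far richer Signal protocol; this is illustrative.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

alice = X25519PrivateKey.generate()  # key pairs live only on the devices
bob = X25519PrivateKey.generate()

def derive_key(own_private, peer_public):
    """Both ends derive the same symmetric key from the DH shared secret."""
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=b"e2e-demo").derive(shared)

nonce = os.urandom(12)
ciphertext = AESGCM(derive_key(alice, bob.public_key())).encrypt(
    nonce, b"meet at noon", None)

# The platform sees and forwards only (nonce, ciphertext). Decryption is
# possible solely on Bob's device, where his private key lives:
assert AESGCM(derive_key(bob, alice.public_key())).decrypt(
    nonce, ciphertext, None) == b"meet at noon"
```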

Counterspeech training: We also believe challenging extremist narratives online is a valuable part of the response to real world extremism. Counterspeech comes in many forms, but at its core these are efforts to prevent people from pursuing a hate-filled, violent life or convincing them to abandon such a life. But counterspeech is only effective if it comes from credible speakers. So we’ve partnered with NGOs and community groups to empower the voices that matter most.

Partner programs: We support several major counterspeech programs. For example, last year we worked with the Institute for Strategic Dialogue to launch the Online Civil Courage Initiative, a project that has engaged with more than 100 anti-hate and anti-extremism organizations across Europe. We’ve also worked with Affinis Labs to host hackathons in places like Manila, Dhaka and Jakarta, where community leaders joined forces with tech entrepreneurs to develop innovative solutions to push back against extremism and hate online. And finally, the program we’ve supported with the widest global reach is a student competition organized through the P2P: Facebook Global Digital Challenge. In less than two years, P2P has reached more than 56 million people worldwide through more than 500 anti-hate and extremism campaigns created by more than 5,500 university students in 68 countries.

Our Commitment

Bickert adds that Facebook is committed to eradicating terrorism from its platforms. “We want Facebook to be a hostile place for terrorists. The challenge for online communities is the same as it is for real world communities – to get better at spotting the early signals before it’s too late. We are absolutely committed to keeping terrorism off our platform, and we’ll continue to share more about this work as it develops in the future,” concludes Bickert.

(Source: Deccan Chronicle)