NZ mosque attacks show how social media is used to spread violent videos

Christchurch: Friday's massacre at two New Zealand mosques, live-streamed to the world, was not the first violent crime broadcast on the internet, but stopping the spread of such a video once it has been posted has turned into a virtual game of whack-a-mole.

The livestream of the mass shooting, which left 49 dead, lasted for 17 minutes. Facebook said it acted to remove the video after being alerted to it by New Zealand police shortly after the livestream began.

But hours after the attack, copies of the video were still available on Facebook, Twitter and Alphabet Inc's YouTube, as well as on Facebook-owned Instagram and WhatsApp.

Once a video is posted online, people who want to spread the material race into action. The New Zealand Facebook livestream was repackaged and redistributed by internet users across other social media platforms within minutes.

Other violent crimes have been live-streamed before: in 2017, a father in Thailand broadcast himself killing his daughter on Facebook Live. The video stayed up for more than a day and drew 370,000 views before Facebook removed it.

In the United States, two crimes were live-streamed in 2017: the assault in Chicago of an 18-year-old man with special needs, accompanied by anti-white racial taunts, and the fatal shooting of a man in Cleveland, Ohio.

Facebook has spent years building artificial intelligence to flag violent content, and in May 2017 it promised to hire another 3,000 people to speed the removal of videos showing murder, suicide and other violent acts. Still, the problem persists.

Facebook, Twitter and YouTube on Friday all said they were taking action to remove the videos.

"Police alerted us to a video on Facebook shortly after the livestream commenced and we quickly removed both the shooter's Facebook and Instagram accounts and the video," Facebook tweeted. "We're also removing any praise or support for the crime and the shooter or shooters as soon as we're aware."

Twitter said it had "rigorous processes and a dedicated team in place for managing exigent and emergency situations" such as this. "We also cooperate with law enforcement to facilitate their investigations as required," it said.

YouTube said: "Please know we are working vigilantly to remove any violent footage."

Frustrated with years of similar obscene online crises, politicians around the globe on Friday voiced the same conclusion: social media is failing.

As the New Zealand massacre video continued to spread, former New Zealand Prime Minister Helen Clark said in televised remarks that social media platforms had been slow to close down hate speech.

"What's going on here?" she said, referring to the shooter's ability to livestream for 17 minutes. "I think this will add to all the calls around the world for more effective regulation of social media platforms."

After Facebook stopped the livestream from New Zealand, it told moderators to delete from its network any copies of the footage.

"All content praising, supporting and representing the attack and the perpetrator(s) should be removed from our platform," Facebook instructed content moderators in India, according to an email seen by Reuters.

Users intent on sharing the violent video took several approaches, at times with almost military precision.

Copies of the footage reviewed by Reuters showed that some users had recorded the video playing on their own phones or computers to create a new version with a digital fingerprint different from the original. Others shared shorter sections or screenshots from the gunman's livestream, which would also be harder for a computer program to identify.
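One reason a re-recorded copy can slip past automated filters is that a byte-exact fingerprint changes completely when a video is re-captured or re-encoded. A minimal Python sketch of the problem, using stand-in bytes rather than real video data:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    # A cryptographic hash identifies a file only byte-for-byte.
    return hashlib.sha256(data).hexdigest()

original = b"original upload bytes"    # stand-in for the source video
rerecorded = b"original upload bytes." # re-recording alters every frame

# The two fingerprints share nothing, so a blocklist keyed on the
# original hash will not catch the re-recorded copy.
print(fingerprint(original) == fingerprint(rerecorded))  # False
```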

On the internet discussion forum Reddit, users actively strategised to stay ahead of content moderators, directing each other to platforms that had yet to take the video down and sharing downloaded copies privately.

Facebook on Friday acknowledged the challenge and said it was responding to new user reports.

"To detect new instances of the video, we are using our artificial intelligence for graphic violence" as well as audio technology and looking for new accounts impersonating the alleged shooter, it said. "We are adding each video we find to an internal data base which enables us to detect and automatically remove copies of the video when uploaded."

Politicians in multiple countries said social media companies need to take ownership of the problem.

"Tech companies have a responsibility to do the morally right thing. I don't care about your profits," Democratic U.S. Senator Cory Booker, who is running for president, said at a campaign event in New Hampshire.

"This is a case where you're giving a platform for hate," he said. "That's unacceptable, it should have never happened, and it should have been taken down a lot more swiftly."

Britain's interior minister, Sajid Javid, also said the companies need to act. "You really need to do more @YouTube @Google @facebook @Twitter to stop violent extremism being promoted on your platforms," Javid wrote on Twitter. "Take some ownership. Enough is enough."
