Innovation

How Social Media Giants Are Handling Hateful Content After New Zealand Terror Attack

Hundreds of thousands of videos depicting the New Zealand terror attack proliferated online, reopening the debate about regulating tech platforms.

“It could just take one copy of the video to be shared to a particular community of 800 extremists to have an inordinate impact,” Shorenstein Center fellow Dipayan Ghosh explained.

The gunman who killed 50 people at two New Zealand mosques live-streamed the attack on Facebook for 29 minutes before the platform took it down. In the days after the attack, copies of the video, some of which were altered to bypass automatic content moderation systems, began appearing all over the internet. Facebook said on March 16 that it removed 1.5 million copies of the video within 24 hours of the attack; 1.2 million of those were blocked at upload, meaning roughly 300,000 copies made it onto the platform and were potentially visible to users before being taken down.
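The alterations work against automated takedown because exact-match filtering compares file fingerprints: any re-encoding, cropping, or watermarking changes the bytes and produces a different fingerprint, so platforms instead lean on "perceptual" fingerprints that tolerate small changes, which uploaders then try to perturb further. The following is a minimal, self-contained Python sketch of that idea; the `coarse_fingerprint` function is a toy illustration for this article, not any platform's actual matching system.

```python
import hashlib

# Toy stand-in for two uploads of the "same" video: the second copy has been
# slightly altered (re-encoded, filtered), so its bytes differ even though
# a human would see identical content.
original_bytes = b"\x00\x10\x20\x30" * 1000
altered_bytes = b"\x00\x11\x20\x30" * 1000  # tiny pixel-level changes

# Exact (cryptographic) hash matching: any byte-level change defeats it.
print(hashlib.sha256(original_bytes).digest() ==
      hashlib.sha256(altered_bytes).digest())          # False

# A toy "perceptual" fingerprint: average the data in coarse buckets and
# quantize heavily, so small alterations map to the same fingerprint.
def coarse_fingerprint(data: bytes, buckets: int = 16) -> tuple:
    chunk = max(1, len(data) // buckets)
    return tuple(sum(data[i:i + chunk]) // chunk // 32
                 for i in range(0, chunk * buckets, chunk))

# The coarse fingerprints still match despite the byte-level differences,
# which is the idea behind perceptual-hash blocklists; production systems
# are far more robust than this sketch, and determined uploaders make
# larger edits precisely to push copies outside the matching threshold.
print(coarse_fingerprint(original_bytes) ==
      coarse_fingerprint(altered_bytes))               # True
```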

The situation highlights how difficult it is for tech companies like Facebook and Twitter to control the viral spread of hateful content on their platforms. The New York Times’ Charlie Warzel explained that the companies will likely have to confront difficult questions about whether the scale and success of their platforms are worth the danger and hate they can enable.

The chairman of the House Homeland Security Committee, Rep. Bennie Thompson, also sent letters to Facebook and YouTube on March 19 asking for a briefing on their response to the spread of the New Zealand attack video.