The U.S. is sitting out an international initiative — sparked by the March terrorist attack on Muslims in Christchurch, New Zealand — aimed at halting the spread of terrorist and extremist content on social media, citing free-speech concerns.
“While the United States is not currently in a position to join the endorsement, we continue to support the overall goals reflected in the call,” the Trump administration said in a statement Wednesday. “We will continue to engage governments, industry and civil society to counter terrorist content on the internet.”
At a meeting Wednesday in Paris, hosted by French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern, five tech companies — Facebook, Microsoft, Twitter, Google and Amazon — signed on to the Christchurch Call to Action, which outlines a nine-point plan for steps the industry has committed to take to address the abuse of their platforms to spread terrorist content.
Among the changes: Facebook said it will ban users from live-streaming for 30 days after one violation of its policies prohibiting extremist speech or promoting terrorism. During the March 15 attacks on two mosques in Christchurch, one of the assailants used Facebook Live to broadcast a 17-minute video on the social network.
In its statement about declining to join the Christchurch Call to Action, the White House said, “We encourage technology companies to enforce their terms of service and community standards that forbid the use of their platforms for terrorist purposes. We continue to be proactive in our efforts to counter terrorist content online while also continuing to respect freedom of expression and freedom of the press.”
However, the White House continued, “we maintain that the best tool to defeat terrorist speech is productive speech, and thus we emphasize the importance of promoting credible, alternative narratives as the primary means by which we can defeat terrorist messaging.”
According to Facebook, the video live-streamed by the Christchurch attacker was viewed about 4,000 times in total before it was removed. In addition, in the first 24 hours, the company said it removed about 1.5 million videos of the attack globally (with 1.2 million of those videos blocked at upload).
Under their participation in the Christchurch Call to Action, Facebook, Google/YouTube, Twitter, Microsoft and Amazon pledged to take five individual actions and four collaborative ones.
The companies said they individually will update their terms of use “to expressly prohibit the distribution of terrorist and violent extremist content”; enhance user reporting of terrorist or violent extremist content; invest in technology to identify and remove such content; put in place “appropriate checks” on live-streaming video aimed at cutting the risk of disseminating terrorist and violent extremist content; and release regular reports on the detection and removal of terrorist or violent extremist content.
Together, the companies said they will continue to work to improve technology to detect and remove terrorist and violent extremist content more effectively and efficiently; develop a protocol for responding to emerging or active events across all stakeholders; educate the public about the issues; and support research into the impact of online hate on offline discrimination and violence.