Leading tech companies have joined world leaders to bolster efforts to drive extremist content off the Internet, as social-media firms come under increased scrutiny in the wake of deadly incidents.
Facebook on September 23 announced the initiative at the United Nations, where French President Emmanuel Macron and New Zealand Prime Minister Jacinda Ardern led a meeting concerning online extremist risks.
Facebook, Twitter, Microsoft, and YouTube in 2017 formed the Global Internet Forum to Counter Terrorism, an informal alliance to deal with the most dangerous material on social media.
But tech companies have faced heightened pressure since a white supremacist in March killed worshipers at two mosques in Christchurch, New Zealand. The shooter live-streamed the deadly rampage and posted an online manifesto filled with racist conspiracy theories.
As part of the new efforts, Facebook said that the forum set up in 2017 would now be considered an independent body and have a dedicated staff under a yet-to-be-named executive director to deal with extremist content.
Nongovernmental groups will head an advisory board to the effort, which will also receive guidance from the governments of the United States, France, Britain, Canada, New Zealand, and Japan, as well as from UN and European Union experts.
Facebook last week announced a series of steps to limit hate speech and extremism on its site.
The company said it would expand its definition of terrorist organizations and that it planned to deploy artificial intelligence to better spot and block live videos of shootings.