
Facebook Tightens Live Streaming Regulations Following NZ Shooting Controversy

The stricter rules apply to live-streamed content that spreads hate

Live streaming has been under the scanner after the NZ terror attack in March this year

Facebook is also collaborating with three universities to research content manipulation and violations

Following international pressure from a group of world leaders and tech personalities, Facebook has introduced policies restricting guideline-violating users from using Facebook’s live streaming feature.

The company had been in talks with the Christchurch Call, a formal group led by New Zealand prime minister Jacinda Ardern, with support from French president Emmanuel Macron and other world leaders. The organisation was formed to take social media companies to task and get them to act faster on objectionable content. The group called on Facebook to put measures in place to prevent a repeat of what happened after the Christchurch terror attack in New Zealand on March 15.

After the call by the international group, Facebook introduced stricter policing for Facebook Live content. Guy Rosen, VP, Integrity at Facebook, said the company would take action against any user that violates the serious policies put in place for terror-related or hate content. Such users will be restricted from using Live for a duration and repeat offences could lead to permanent suspension. “Someone who shares a link to a statement from a terrorist group with no context will now be immediately blocked from using Live for a set period of time,” Rosen wrote in a blog post.

The company added that anyone who has violated community guidelines or rules, including Facebook’s Dangerous Organizations and Individuals policy, will be restricted from using Facebook Live. Further, such violators could also be barred from using Facebook services like Ads.

Fallout From NZ Terror Attack

The issue of moderating live streams became a worldwide debate after the NZ terror attack earlier this year. On March 15, a lone gunman shot and killed 51 people and injured several others at two mosques in Christchurch. The attacker also live-streamed the entire shooting on Facebook for close to 20 minutes before it was taken down by the social network. By that time, the video had been reuploaded millions of times by other users on Facebook, news channels and other social media platforms, despite the best efforts of tech companies to take it off the internet. Facebook had said that it removed 1.5 Mn videos related to the attack in the first 24 hours.

Founded by Ardern, the Christchurch Call is looking to enforce changes and laws that ban objectionable material and provide a framework for social media and mass media on how to report on terrorism without amplifying the issue.

“Facebook’s decision to put limits on live streaming is a good first step to restrict the application being used as a tool for terrorists and shows the Christchurch Call is being acted on,” NZ prime minister Ardern told Reuters.

Facebook is also collaborating with the University of Maryland, Cornell University and the University of California, Berkeley to study manipulation of images, audio and video. The collaboration is meant to develop AI capable of distinguishing genuine posts from intentionally manipulated content. The company is investing $7.5 Mn in these new research partnerships.