Taking a cue from Twitter, global social media giant Facebook has decided to tackle the problem of deepfake videos by labelling them so that users are aware of them. The company said it is not removing such videos entirely, as they might be available on other platforms as well.
Deepfake videos are videos that have been manipulated to distort reality, generally by morphing faces. Although the phenomenon is not yet widespread, such manipulation is possible through artificial intelligence (AI) tools and “deep learning” techniques.
“While these videos are still rare on the Internet, they present a significant challenge for our industry and society as their use increases,” Facebook said in a blog post.
To curb such deepfake content, Facebook will take several steps. First, the company has decided to investigate AI-generated content and deceptive behaviours like fake accounts. Second, Facebook has decided to partner with academia, governments and industries to expose people sharing such distorted content on social media.
Facebook claimed to be in talks with 50 global experts from policy, technical, media, legal, civic and academic backgrounds to improve the detection of deepfake media and strengthen its policies.
What Is Facebook’s Idea Of Deepfake?
So far, the company has set out two criteria for a post to be classified as a deepfake. First, the video has been edited beyond adjustments for clarity or quality and has the potential to mislead users. Second, the video is a product of AI or machine learning that replaces, merges or superimposes content onto a video, making it appear authentic.
Facebook also clarified that the policies will not be extended to parody or satire accounts. Moreover, any video that has been edited just to change the order of the words will not come under the ambit of the policy.
Any deepfake video will still be eligible for review by Facebook’s independent third-party fact-checkers across the world. The company has over 50 partners for independent fact-checking, covering over 40 languages.
“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad. And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false,” Facebook added.
How Did The Battle Against Deepfake Content Begin?
The move comes after Facebook launched its “Deepfake Detection Challenge” in September 2019 to encourage researchers and innovators to conduct more research and build open-source tools to detect deepfakes. The project was supported by $10 Mn in grants and backed by a coalition of organisations including Partnership on AI, Cornell Tech, the University of California, Berkeley, MIT, WITNESS, Microsoft, the BBC and AWS, among others.
In November 2019, Twitter also announced a draft policy to prevent deepfake media in the form of images, audio and videos on its platform. The social media platform also highlighted that it had consulted experts and researchers to understand the rising concerns. In addition, Twitter sought users’ opinions on the matter.
The issue of deepfake media, especially videos, made headlines in April 2018, when Hollywood actor Jordan Peele created a video that showed former US President Barack Obama making derogatory remarks about the current President, Donald Trump. Peele made the video to raise awareness of deepfake media and how it can be misused.