At a time when social media platforms are flooded with hate speech, fake news and misinformation, Twitter has found a new way to alert users to such content on the platform.
To tackle the menace of fabricated media on the platform, Twitter said on Tuesday (February 4) that it will apply a ‘manipulated media’ label to any tweet that has altered media attached to it. The label will be displayed as a warning to users before they retweet or like the tweet.
Twitter further said that it will also reduce the visibility of any such tweet on the platform. In some cases, the company might also prevent the tweet from being recommended on users’ feeds.
On what content will be categorised and filtered under the new feature, Twitter said that anything which has been substantially edited in a manner that fundamentally alters its composition, sequence, timing or framing will come under scrutiny. “We will filter out any visual or auditory information which has new video frames, overdubbed audio, or modified subtitles that have been added or removed,” Twitter added.
We know that some Tweets include manipulated photos or videos that can cause people harm. Today we’re introducing a new rule and a label that will address this and give people more context around these Tweets pic.twitter.com/P1ThCsirZ4
— Twitter Safety (@TwitterSafety) February 4, 2020
For some tweets, Twitter will also provide background information to help users understand how the content has been manipulated. “We’ll provide additional explanations or clarifications, as available, such as a landing page with more context,” Twitter said.
Before introducing this feature, Twitter surveyed around 6,500 people worldwide on how the company can prevent the spread of manipulated content on the platform. It also consulted civil society groups and academic experts.
In the survey, 90% of respondents said that Twitter should remove manipulated media from the platform. “More than 75% of people believe accounts that share misleading altered media should face enforcement action,” Twitter added.
What Is Facebook Doing?
In December 2019, Facebook also introduced a new solution that combines technology and human review to alert users to fake news in posts, videos or images shared by users.
To enable fact checks on news stories, Facebook said that it is working with third-party fact-checking platforms, which are certified through the non-partisan International Fact-Checking Network, to help identify and review false news.
In the same month, Facebook-owned Instagram also opened up its third-party fact-checker programme beyond the United States. With this programme in place, Instagram is labelling all the posts that it perceives to be fake using image-matching technology. So far, the company has launched the programme in 14 countries.
With the increase in hate speech and fake news in the country, the verification of social media accounts has been a major concern for both citizens and the government. Facebook had earlier said that it took down 2.19 Bn fake accounts in the first quarter of 2019, a significant rise from the 1.2 Bn accounts removed a year earlier. It also removed 4 Mn hate speech posts.