“We have built sophisticated machine learning systems to detect abusive behavior and ban suspicious accounts at registration, during messaging, and in response to user reports. We remove over two million accounts per month for bulk or automated behavior – over 75% without a recent user report,” wrote the company.
Facebook-owned WhatsApp, like its parent company, has drawn considerable flak for being a platform where misinformation spreads. The trend was so pronounced that phrases like ‘WhatsApp University’, denoting reliance on misinformation propagated through the instant-messaging app, became part of political discourse and campaigns, used mostly by the left-leaning Indian National Congress.
Ahead of the Indian general elections, WhatsApp placed several in-app restrictions to stop users from bulk-sharing messages containing misleading information. However, a Reuters investigation found that digital marketing firms paid $14 for a tool to bypass WhatsApp’s restrictions.
Fighting Fake News And Misinformation
WhatsApp recently introduced a feature that marks messages as forwarded, to spotlight the fast-spreading nature of such messages. Several groups where political and social misinformation is spread widely have become popular on WhatsApp, to the point of shaping people’s opinions.
A recent Reuters Institute survey of English-language Indian internet users found that 52% of respondents got their news through WhatsApp. It gets darker: content shared via WhatsApp has led to murders. At least 31 people were killed in 2017 and 2018 in mob attacks fuelled by rumours spread on WhatsApp and social media, a BBC analysis found.
Between December 2018 and January 2019, WhatsApp banned over two million accounts per month for bulk or automated behavior, and over 75% of those accounts did not have any recent user reports, the company said in a white paper it published.
“We use labels to mark the worst offenders and distinguish their behaviors from those of regular users. We use the features and labels to teach our systems to better predict whether a user is likely to be banned in the future,” said WhatsApp.
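WhatsApp has not published the details of its models, but the approach it describes — behavioural features plus ban labels used to train a predictor — is classic supervised classification. As a rough, hypothetical illustration only (the feature names and data below are entirely made up, not WhatsApp’s), here is a toy logistic-regression sketch in plain Python:

```python
import math

# Hypothetical per-account features, each scaled to roughly 0..1:
# [message_rate, fraction_forwarded, is_new_account]
# Label: 1 = banned for bulk/automated behaviour, 0 = regular user.
TRAIN = [
    ([0.90, 0.95, 1.0], 1),
    ([1.00, 0.99, 1.0], 1),
    ([0.70, 0.80, 1.0], 1),
    ([0.80, 0.90, 0.0], 1),
    ([0.05, 0.10, 0.0], 0),
    ([0.02, 0.05, 0.0], 0),
    ([0.10, 0.20, 0.0], 0),
    ([0.01, 0.02, 1.0], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(x, w, b):
    """Predicted probability that an account will be banned."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def train(data, lr=0.5, epochs=2000):
    """Fit logistic-regression weights with stochastic gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            err = score(x, w, b) - y  # gradient of log-loss w.r.t. logit
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

if __name__ == "__main__":
    w, b = train(TRAIN)
    print(score([0.95, 0.97, 1.0], w, b))  # bulk-like account: high score
    print(score([0.03, 0.05, 0.0], w, b))  # regular account: low score
```

A production system would of course use far richer features, vastly more data, and a trained threshold rather than 0.5, but the core features-and-labels loop looks like this.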
During the recent general elections, WhatsApp released TV ads to make people aware of fake news and urge them to be watchful about the messages they forward: “Share joy, not rumours,” the campaign said.
Given the reach of fake news, ensuring that no misinformation spreads is a daunting task for WhatsApp. It is a sensitive matter too, as the company must not, at any cost, read the messages its users send. Combating the issue will take a mix of user awareness and AI-driven detection of fake news, something the Facebook-backed instant-messaging giant could look into.