
Facebook India Under Fire Over Alleged BJP Links For Allowing Hate Speech

Facebook did not remove an anti-minority post by BJP politician T Raja Singh in order to avoid potential backlash from the government, reports said

Facebook India’s head of public policy Ankhi Das allegedly warned staff members against removing such content

The parliamentary standing committee on information technology has decided to look into this matter after multiple reports

The parliamentary standing committee on information technology has decided to look into allegations that Facebook refrains from taking action against hateful content posted by legislators of the ruling Bharatiya Janata Party (BJP) in order to stay in the Indian government's favour. The committee is led by Kerala member of parliament and Indian National Congress leader Shashi Tharoor.

The allegations were made in an article published on August 14 in the Wall Street Journal, which noted that Raja Singh, a BJP politician from Telangana representing the Goshamahal constituency in Hyderabad, had put up a Facebook post calling Rohingya Muslim immigrants from Myanmar traitors and threatening to destroy mosques. Facebook took down the post, which was made in 2018, on Sunday (August 16, 2020).

Even though the post violated Facebook's hate speech rules and Singh qualified as a dangerous individual due to his off-platform activities, the company chose not to take any action against him until the report emerged. Citing the WSJ article, social activist Saket Gokhale had written to Congress leader Shashi Tharoor, who leads the parliamentary panel, on Saturday, August 15.

The report specifically names Facebook India’s head of public policy Ankhi Das, whose job description includes “lobbying India’s government on Facebook’s behalf”. Das has allegedly prevented takedown of hate speech content posted by members of the ruling BJP, even if it conflicted with the organisation’s global guidelines on hate speech.

Calling Out BJP May Harm Facebook’s Prospects In India

According to some current and former Facebook employees, Das reportedly told staff members that punishing violations by BJP leaders would damage the company’s business prospects in the country. Notably, India is Facebook’s biggest market globally in terms of users, with 336 Mn active Facebook users and over 400 Mn WhatsApp users.

A Facebook spokesperson, Andy Stone, acknowledged that Das had raised concerns about the political fallout that would result from designating Singh a dangerous individual. But Stone also mentioned that her opposition wasn’t the sole factor in the company’s decision to let Singh remain on the platform. The spokesperson said Facebook is still considering whether a ban is warranted.

Facebook India maintained that it prohibits hate speech and content, and enforces its policies globally. A spokesperson said, “We prohibit hate speech and content that incites violence and we enforce these policies globally without regard to anyone’s political position or party affiliation. While we know there is more to do, we’re making progress on enforcement and conduct regular audits of our process to ensure fairness and accuracy.”

Meanwhile, Das has filed a complaint with the Delhi police alleging that some individuals online had “intentionally vilified” her due to their political affiliations and were engaging in abuse.
“I am extremely disturbed by the relentless harassment meted out to me,” Das said in her complaint.

Facebook’s Bigotry Not New To The Scanner

This isn’t the first time Facebook has been accused of promoting hate speech in India, especially against minority communities. Last year, a report by non-profit rights group Avaaz suggested that Facebook was letting many incidents go unpunished, even as the company claimed to have taken action against 65% of the hate speech on its platform before anyone reported it.

Avaaz revealed that Facebook was letting anti-Muslim hate speech spread unchecked across Assam, where the minority community was already being dogged by the National Register of Citizens (NRC) issue. According to Avaaz, it reported 200 such cases of hate speech to Facebook. However, the social media company removed less than half of them for breaching its community standards. The report further highlighted that these hate speech posts often labelled Bengali Muslims as criminals and terrorists, among other derogatory terms.

Facebook is in the same boat in the US as well. In June 2020, dozens of Facebook employees staged a virtual walkout in protest of the company’s decision not to take action against incendiary posts President Donald Trump had made in the last week of May. These controversial posts included one that seemed to threaten violence against protestors by saying, “when the looting starts, the shooting starts.” Twitter, by contrast, determined that the same message violated its rules against the glorification of violence, limiting the ability to view, like, reply to, and retweet the post on its platform.

Who Decides What Stays?

The latest development comes soon after LinkedIn permanently banned angel investor Dr Aniruddha Malpani for criticising edtech giant BYJU’S in his social media posts. Malpani has alleged that BYJU’S had a role to play in the ban, though there is no proof of this. A report by Medianama also alleged that Twitter’s legal team had sent notices to users speaking out against BYJU’S, stating that tweets critical of the company violated Indian law. The publication had reviewed four of these emails, sent over the past few months, which covered off-the-cuff comments on the company’s business and an allegation that it was promoting “fake news”.

Though these cases differ in their details and the depth of their impact, they raise questions about how social media platforms like LinkedIn, Twitter and Facebook decide what passes the test and what does not. They also raise questions about what redressal mechanisms are available to users on these platforms.