“Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned,” says a Facebook internal study on Adversarial Harmful Networks in India.
Facebook's MSI (meaningful social interactions) algorithms help promote violence-inciting content, alleges whistleblower Frances Haugen.
While the Parliamentary Standing Committee on Information Technology may summon Facebook again in this regard, the Indian government may order a fresh inquiry, sources say.
“There were [sic] a number of dehumanising posts comparing Muslims to pigs and dogs and misinformation claiming the Quran calls for men to rape their female family members. Our lack of Hindi and Bengali classifiers means much of this content is never flagged or actioned. And we have yet to put forth a nomination for designation of this group given political sensitivities,” the report said.

“40% of the sampled top VPV (viewport views, referring to the content visible in a computer window) civic posters in West Bengal were fake/inauthentic,” says the report. There were coordinated authentic actors seeding and spreading civic content to propagate political narratives.

“At the same time, this will also depend on the global findings and investigations, and that ultimately slows down the course of investigation in India,” he added.