Facebook has published the first edition of its monthly compliance report under the new IT (Intermediary Guidelines) Rules, saying it removed 3.11 Lakh pieces of hate-speech content and 1.8 Mn pieces of content related to adult themes, nudity and sexual activity between May 15 and June 15.
The social media giant also claims to have removed 75,000 pieces of content under the ‘Dangerous Organizations and Individuals: Organized Hate’ policy, 1.06 Lakh pieces of content under the ‘Dangerous Organizations and Individuals: Terrorist Propaganda’ policy, and 1.18 Lakh pieces of content related to bullying and harassment in India from its platform and Instagram in the same period.
Facebook will publish its next report on July 15, containing details of user complaints received and action taken. “We expect to publish subsequent editions of the report with a lag of 30-45 days after the reporting period to allow sufficient time for data collection and validation. We will continue to bring more transparency to our work and include more information about our efforts in future reports,” it said.
On Instagram, the largest volume of content actioned (699,000 pieces) fell under the suicide and self-injury category, with a proactive detection rate of 99.8%. This was followed by 668,000 pieces of “violent and graphic content”, while “adult nudity and sexual activity” accounted for action on 490,000 pieces of content.
Similarly, on June 30, 2021, Google released its transparency report in compliance with the IT rules, saying that it received a total of 27,762 complaints in April, while the number of removals stood at 59,350.
The tech giant was one of the first companies to have published its transparency report in compliance with the new Information Technology (IT) Rules, 2021 (Guidelines for Intermediaries and Digital Media Ethics Code).
Under the new IT rules, significant digital platforms (those with over 5 Mn users) need to “publish periodic compliance report every month mentioning the details of complaints received and action taken thereon, and the number of specific communication links or parts of information that the intermediary has removed or disabled access to in pursuance of any proactive monitoring conducted by using automated tools or any other relevant information as may be specified”.