Twitter India received 867 complaints between August 26 and September 25, 2021, and took action against 339 URLs during the period, the social media giant said in its compliance report.
Twitter India also processed 65 grievances for Twitter account suspensions.
“We overturned 10 of the account suspensions based on the specifics of the situation, but the other accounts remain suspended. Twitter received 44 requests related to general questions about Twitter accounts during this reporting period,” Twitter India said.
The report’s ‘Proactive Monitoring Data’ showed that, globally, the platform suspended 25,500 accounts for child sexual exploitation, non-consensual nudity and similar content, and 4,790 accounts for the promotion of terrorism.
After initial resistance and friction with the central government over compliance with the Intermediary Guidelines and Digital Media Ethics Code Rules, 2021, which came into effect on May 25, the social media giant complied with the norms in July.
In August, the central government informed the Delhi High Court that the social media platform seems to have complied with the IT guidelines.
As per the new IT guidelines, social media intermediaries with 5 Mn or more users in India have to comply with the norms. Apart from appointing a chief grievance officer, a chief compliance officer and a nodal contact person, the social media companies, upon receiving a court order or instructions from any government agency, shall not host unlawful content. If such content is already live on the platform, the intermediaries have to remove it within 36 hours of receiving the court order or the instruction from the government agency.
Last month, the centre submitted a short affidavit stating that the microblogging site has acknowledged that the personnel (chief compliance officer, nodal contact person and resident grievance officer) are appointed as the company’s permanent employees and not as ‘contingent workers’.
In a bid to provide extra safety to users facing harassment globally, including in India, Twitter announced the testing of a new feature called ‘Safety Mode’ in September. The company said the feature would temporarily block accounts for seven days for using potentially harmful language such as insults or hateful remarks.
Twitter’s Indian rival Koo, founded in March 2020, said in its compliance report for September that it removed 3,623 posts during the month: 2,101 based on content reported by users, 1,458 through proactive moderation, 15 spam posts based on user reporting, and 49 spam posts or accounts through proactive moderation.
Further, it took actions such as overlaying, blurring, ignoring and warning against 2,885 posts based on user reports and against 11,054 posts through proactive moderation.