
- Stakeholders said that existing legal frameworks are sufficient to address the deepfake threat, but added that stronger enforcement and forensic capabilities are needed
- Industry players also called for developing indigenous tools for detecting and analysing deepfakes in Indian languages and local contexts
- The recommendations were part of a status report submitted by the electronics and IT ministry before the Delhi HC on March 24
Social media platforms and industry stakeholders have called for mandatory AI labelling standards and grievance redressal mechanisms to crack the whip on deepfakes.
The recommendations were part of a status report submitted by the electronics and information technology ministry (MeitY) before the Delhi High Court (HC) on Monday (March 24). The feedback came to the fore during stakeholder meetings conducted by a nine-member committee, which has been entrusted with studying the management and regulation of deepfakes.
The panel, which was formed in November 2024, held its first meeting at the end of December 2024.
As per minutes of the meetings, industry stakeholders rued the lack of a standard definition for deepfakes and called for a regulation around “mandatory AI content disclosure” with an emphasis on malicious actors rather than on creative uses of deepfakes.
“… there should be regulation around mandatory AI content disclosure, labelling standards, and grievance redressal mechanisms, while giving emphasis on malicious actors rather than benign or creative uses of deepfake technology,” read the minutes of the meeting, which took place on January 21.
The attendees, who included representatives from Google, Telegram, X and Meta, also flagged the growing misuse of deepfakes for perpetrating scams and targeting women during state elections. The stakeholders also called for stricter enforcement rather than new laws.
What Else Did The Stakeholders Say?
- Technical challenges persist with regards to detecting deepfakes due to India’s diverse accents and audio-only content
- Lack of a standard definition for ‘deepfake’ further creates barriers for enforcement
- Sophisticated actors can find ways to circumvent detection mechanisms like watermarking and metadata tagging
- Existing legal frameworks are sufficient to address malicious deepfake use but require stronger enforcement and forensic capabilities
- Improve the capacity of investigative and enforcement agencies rather than introducing new regulations
- Over-reliance on intermediary liability frameworks for AI-generated content regulation is concerning
- Creators should disclose AI-generated synthetic content and provide appropriate labels
- Stakeholders also underlined the need for a constant dialogue to formulate regulatory norms and the need for creating standardised detection frameworks
- Public-private partnerships should address evolving technological challenges
- There should be regular capacity building and training programs for regional media and other stakeholders to improve awareness and response to deepfakes
- Large-scale campaigns to educate users on identifying and understanding deepfakes
- Need for developing indigenous datasets and tools for detecting and analysing deepfakes in Indian languages and local contexts
The meeting also saw digital intermediaries chime in with their perspectives and the steps taken to curb deepfakes. While Google said that it had a mandatory labelling policy for AI-generated content, Meta claimed that it was working on protecting celebrity personas from synthetic content.
Meanwhile, a representative of X, while underscoring that the social media platform has a “synthetic and manipulated media policy”, said that it is important to draw a distinction between deceptive and non-deceptive content.
“Content which are deceptive in nature are taken down. They (X) also stated that for certain posts to be labelled, the post should be extremely deceptive and harmful. They work within existing legal framework to take down content. As per X, not all AI content is deceptive in nature. It is important to draw that distinction going forward,” read the minutes of the meeting.
Meanwhile, at the committee’s first meeting, held on December 20, members underlined suggestions necessary for formulating the report. The draft report will encompass aspects such as data protection laws, mandatory labelling of deepfakes, and guidelines for consent and content moderation.
In addition, the panel also underlined technical aspects such as AI detection tools, watermarking, and collaboration with technology firms as critical to curbing deepfakes.
This comes just days after the Delhi HC directed the panel to submit its report on the issue of deepfakes by July 21. It was at this hearing that MeitY submitted the status report.
The Bone Of Contention
MeitY formed the panel after the Delhi HC heard a petition filed by journalist and India TV editor-in-chief Rajat Sharma for regulation of deepfakes and blocking access to apps that enable the creation of such synthetic content.
In his plea, Sharma reportedly claimed that the proliferation of deepfakes poses a significant threat to various aspects of society, including misinformation and disinformation, adding that the technology undermines the integrity of public discourse and the democratic process.
Another petition, filed by lawyer Chaitanya Rohilla, also sought directions against unregulated use of AI.