India’s Push To Label All AI-Generated Content: Who Bears The Burden?

SUMMARY

The shift in India’s stance from advising to mandating disclosure is driven by the rapid rise and sophistication of deepfakes, as well as similar legislation in the EU and in parts of the US.

Some in the startup ecosystem argue that the government’s push may unintentionally cast too wide a net, leading to every piece of content being scrutinised and reported.

There are also suggestions that accountability should ideally lie with the LLM platforms that have built GenAI models for video, audio and text content, rather than with end users and businesses.

AI and deepfakes are blurring the line between reality and fiction. Many might believe they can identify what’s AI-generated and what’s human effort, but even the keenest eyes can be fooled by the latest models such as OpenAI’s Sora or Google’s Veo.

Even Indian startups such as Invideo have made AI videos easily accessible for individuals and businesses around the world. 

With rapidly growing adoption comes the risk of hoaxes, scams and reputational damage to individuals.

It’s this blur between what’s real and what’s AI-generated that India’s Ministry of Electronics and Information Technology (MeitY) wants to fix. On October 22, the IT ministry proposed amendments to the Information Technology Rules, 2021, calling for complete disclosure of AI-generated or synthesised content.

Digital media platforms would have to label such content as “synthetically generated information”. Social media giants and Indian startups have until November 6 to review the proposal and send in their feedback.

Under the proposed changes, platforms must “prominently mark” content as AI-generated or synthesised, with the label covering at least 10% of the visual area or the initial portion of the audio. Large social media intermediaries like YouTube and Meta would need to adopt technical systems to detect and label such content automatically.

While this helps users to distinguish the real from the fake, some are worried that the ambit could stretch to every piece of content on the internet.

“It’s a good step to bring accountability to AI-generated content, but there must be some clarity on what is being labelled, why it’s being labelled and on what users can or cannot report. If every piece of AI-generated content is labelled or reported, it will harm the user experience and stifle creativity,” said Divya Agarwal, founder of social media and growth marketing agency Bingelabs.

High Profile Deepfakes Spark Debate 

The move to tighten the IT Rules and mandate AI labelling marks a shift in the government’s stance. Until the 2025 draft amendment, the government focused on issuing advisories to curb misinformation and on raising awareness, but did not impose strict legal mandates or obligations related to AI-generated content.

While the rapid rise and sophistication of deepfakes and AI content is definitely a big trigger, the Indian government also wants to align with the European Union and the state of California, where AI content labelling is already mandated. 

For instance, under Chapter IV, Article 50 of the EU’s Artificial Intelligence Act, the EU mandates that all AI-generated content be tagged via metadata, placing responsibility on social media platforms to identify and moderate potentially misleading or harmful material.

[Infographic: AI Regulations Around The World]

Moreover, this could also be the result of precedents such as the case filed by actor Aishwarya Rai Bachchan over deepfakes, in which the Delhi HC issued interim injunctions in her favour. Rai Bachchan had sought protection of her personality and publicity rights and a bar on the unauthorised use of her likeness in deepfake content. The court has also protected the personality and publicity rights of actor Hrithik Roshan.

Among the larger social media platforms, YouTube already asks users if their uploads contain AI-generated content. YouTube CEO Neal Mohan recently announced the launch of a “Likeness Detection” tool for partner program creators. “It automatically finds AI matches of your facial likeness, allowing you to easily detect, manage, and request removal of the content,” he said.

Even Meta has started labelling AI-generated content on its own. 

Fine Line Between Protection And Policing

The draft amendments have already drawn sharp criticism from the Internet Freedom Foundation (IFF). The foundation argues that the government’s push for synthetic labelling may unintentionally cast too wide a net, forcing both creators and platforms into compelled speech and general monitoring.

Because the definition of “synthetically generated information” includes any content “algorithmically created or modified in a manner that appears authentic or true”, it could cover everything from satire and remix videos to harmless filters. The problem isn’t whether platforms can label content; it’s whether they can ensure accuracy without stifling the creative process.

The IFF warns that this breadth risks over-censorship, much like the disclaimers mandated in films and on OTT platforms. “Such compelled speech could chill lawful expression,” the group said in its statement, adding that bad actors will simply ignore the rules.

“Mandates are technically easy to evade and may lead to censorship. Visible labels can be cropped, blurred, or removed. For instance, metadata watermarks are routinely stripped during cross-platform reposting. Sophisticated actors can also spoof or forge provenance signals. Because compliance hinges on self-declaration and automated detection with non-trivial error rates, the burden falls largely on good-faith users and platforms, while determined offenders migrate to tools and channels with minimal oversight,” IFF founder and lawyer Apar Gupta told Inc42. 

The Cost Of AI Disclosures

Others have also called for a more robust reporting mechanism, where high-risk synthetic content, like fake news, deepfakes or election-related material, faces strict labelling, while creative AI use (such as ad campaigns or artistic work) is not over-regulated.

It’s not just a matter of free speech; it’s also about technical feasibility and proportionality. As per the IFF, Rule 4(1A) requires large platforms to verify whether users’ declarations about AI use are accurate. If they fail, they face a “deemed failure” standard, pressuring platforms into general monitoring and over-removal to avoid liability.

“The draft also explicitly ties ‘synthetically generated information’ into other due diligence and traceability clauses, reinforcing privacy and encryption concerns,” said the IFF statement. 

Then there’s the cost. Deploying automated deepfake detection tools isn’t cheap, and their accuracy can also be questioned. While Bingelabs’ Agarwal was unable to quantify the implementation cost, he acknowledged that AI detection is a real expense for platforms.

He suggested that the accountability ideally should lie with the LLM platforms that create such content.

For instance, if you generate something using OpenAI, Gemini, or any other LLM, the output could include a hidden code or identifier embedded within the transcript or metadata. When that content is uploaded to a platform like Meta, the system could automatically detect that it’s AI-generated.
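
To make this concrete, here is a minimal, purely illustrative sketch (in Python, standard library only) of how such a provenance tag might work: the generating platform signs a small claim into the content’s metadata, and the hosting platform verifies it at upload time. The function names, keys and metadata fields are hypothetical, not any vendor’s actual API, and real-world schemes such as C2PA content credentials are considerably more involved (they use public-key signatures and bind the claim to the media itself).

```python
import hashlib
import hmac
import json

# Hypothetical signing key of the generating platform (illustration only).
# Real provenance schemes rely on public-key signatures, not a shared secret.
GENERATOR_KEY = b"example-generator-signing-key"

def tag_output(metadata: dict, generator: str) -> dict:
    """Generation side: attach a hypothetical 'ai_provenance' field to metadata."""
    claim = {"generator": generator, "synthetic": True}
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return {**metadata, "ai_provenance": {"claim": claim, "signature": signature}}

def is_ai_generated(metadata: dict) -> bool:
    """Upload side: verify the provenance tag if one is present."""
    tag = metadata.get("ai_provenance")
    if not tag:
        return False  # No tag: fall back to the platform's own detection models
    payload = json.dumps(tag["claim"], sort_keys=True).encode()
    expected = hmac.new(GENERATOR_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag["signature"])

# The model provider tags its output at generation time.
meta = tag_output({"title": "Product ad", "duration_s": 30}, generator="example-llm")

# The hosting platform checks the tag at upload and applies the label automatically.
if is_ai_generated(meta):
    print("Label as 'synthetically generated information'")
```

The obvious weakness is the one the IFF flags above: such a tag lives or dies with the metadata. If the file is re-encoded, cropped or stripped of metadata during cross-platform reposting, the signal disappears, which is why the draft also expects large platforms to run their own detection systems.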

“Since the content is already produced through platforms like OpenAI, Gemini or Anthropic, they wouldn’t face major additional costs,” Agarwal added. 

While the draft proposal didn’t mention any of these names, a Mint report cited a senior government officer as saying that OpenAI, Google, Anthropic and others that build the foundational AI models are also accountable under India’s proposed AI rules, and have been consulted. 

Will mandatory AI tagging be challenging for these LLM giants? Several of them are already dealing with privacy concerns over how they monitor and use data, and full disclosure could have some impact on AI adoption.

If the draft goes through in its present state, it could mark India’s first serious step towards AI transparency. In the ideal scenario, these rules will create a culture of disclosure where AI-generated content carries visible labels and credibility.

In the worst case, if compliance proves too complex, it could end up as performative regulation that deters AI use altogether. But nobody is predicting this yet.

The ecosystem is trending towards the optimistic side, despite concerns from citizens’ rights groups. 

With the US, Europe, Japan and other countries experimenting with AI-labelling norms, India’s challenge is unique: a massive digital ecosystem with millions of users and lakhs of creators across vast linguistic diversity makes compliance a difficult task.

