How MeitY’s Draft AI Guidelines Ensure A Transparent Route Towards Accountable Innovation

SUMMARY

Under the IndiaAI Mission, the Centre has formed an advisory group, chaired by the principal scientific advisor, to develop an ‘AI for India-Specific Regulatory Framework’

MeitY’s AI governance guidelines also pay special attention to the perils of training AI models on copyrighted data, a major issue the world is currently grappling with

As the country tries to balance innovation and regulation, these guidelines mark the essential first steps in ensuring that AI’s transformative potential is harnessed responsibly and sustainably

India’s AI ecosystem has undergone a major shift in the last few years, especially with the advent of GenAI. The rapid evolution of this space has not only sparked innovation across industries but also raised pressing concerns about the ethical implications of the use of AI.  

Therefore, the Ministry of Electronics and Information Technology (MeitY) recently introduced AI governance guidelines as part of its ambitious INR 10,371.92 Cr IndiaAI Mission. 

Under the IndiaAI Mission, the Centre has formed an advisory group, chaired by the principal scientific advisor, to develop an ‘AI for India-Specific Regulatory Framework’. 

Under the group’s guidance, a subcommittee on ‘AI Governance and Guidelines Development’ has been asked to provide recommendations for AI governance in India. 

Currently open for consultation, the report calls for transparency, accountability, the deployment of “fair and inclusive” AI systems, and human oversight, among other principles.

Meanwhile, industry stakeholders Inc42 spoke with seemed cautiously optimistic. While some worry that impending regulations could stifle growth, many see these guidelines as essential for fostering innovation.

Now, before we delve deeper, let’s take a look at the recommendations:

  • MeitY and the principal scientific advisor have been tasked with establishing a mechanism to coordinate AI governance across sectors.
  • A technical secretariat is to be established to serve as a technical advisory body and coordination focal point.
  • The technical secretariat should also establish, house, and operate an AI incident database as a repository of problems experienced in the real world that should guide responses to mitigate or avoid repeated bad outcomes.
  • To enhance transparency and governance, the technical secretariat should engage the industry to drive voluntary commitments on transparency across the AI ecosystem.
  • The technical secretariat should examine the suitability of technological measures to address AI-related risks.
  • The report also recommends forming a sub-group to work with MeitY on specific measures that may be considered under proposed legislation like the Digital India Act (DIA) to strengthen and harmonise the legal framework.

What’s The Need?

In the AI governance guidelines report, MeitY said, “AI refers to a range of technologies which can be used for both harm and good. Governing the use of AI is driven by the need to minimise risks and harms.” 

The report also touches upon the challenges associated with governing AI. For instance, it notes that it can be difficult to understand how different components within an AI system interact with each other and which specific component is responsible for any potential harm caused. 

Building upon the various AI governance guidelines designed and suggested in India since 2016 by agencies, including NITI Aayog and Nasscom, MeitY has further proposed a list of AI governance principles in the report. 

These principles have stressed the need for transparency, accountability, fairness and non-discrimination, and inclusive and sustainable innovation, among others.

Jaspreet Bindra, CEO of AI&Beyond, sees these guidelines as a framework for enterprises to align with ethical AI practices while navigating India’s socio-economic context. However, he cautions that the absence of robust laws leaves challenges like deepfakes unaddressed.

Last year’s general elections highlighted these risks, with AI-generated content spreading misinformation on social media. The report stresses the need for “digital by design” governance to modernise systems and tackle such issues effectively.

Harsh Walia, partner at Khaitan & Co, notes that the report’s two key expectations are the implementation of tools to enhance accountability and traceability.

Walia said that these recommendations largely align with existing industry best practices, particularly for entities that already prioritise transparency, safety, and accountability in their AI-related operations. 

As a result, these guidelines are not expected to drastically change the internal policies of these entities, but they highlight the importance of adopting proactive, ethical governance measures that align with India’s evolving AI ecosystem.

He added that the techno-legal approach, as highlighted by the government, offers advantages, including scalability and efficiency, enhanced monitoring capabilities, and risk mitigation.

Notably, India has already seen success with this approach in various sectors. For instance, SEBI employs AI tools for data analysis to improve surveillance and combat money laundering practices. Additionally, telecom service providers use AI-powered filters to detect and block spam calls and messages, protecting consumers from fraudulent activities. 

“Integrating governance technology tools, as envisioned in the report, ensures that governance frameworks are not only robust but also adaptable to emerging challenges,” Walia added.

A Sharp Focus On Copyright Protection

MeitY’s AI governance guidelines also pay special attention to the perils of training AI models on copyrighted data, a major issue the world is currently grappling with.

It is important to note that in recent years, multiple litigations have been filed by content-producing and publishing companies against top AI companies for training their models on copyrighted works. 

In 2023, the New York Times filed a copyright infringement lawsuit against Microsoft and OpenAI in the US. The bone of contention for the leading global news organisation was OpenAI’s “unlawful use” of its work to create artificial intelligence products. 

Similarly, Getty Images started legal proceedings against Stability AI last year. In addition, comedian Sarah Silverman and a few other authors sued Meta and OpenAI for using their books without permission to train AI models.

According to IndiaAI Mission’s latest report, “Given that the copyright law grants the copyright holder an exclusive right to store, copy etc., creation of datasets using copyrighted works for training foundation models, without the approval of the right holder can lead to infringement.”

However, the report also points out that since copyright protection requires ‘human authorship,’ it is unclear whether works created using foundation models are eligible for copyright under current laws.

“By proactively creating appropriate guidance, the relevant authorities (Copyright Office and the Ministry of Commerce & Industry) can provide certainty and clarity to the users as well as to other government authorities who may otherwise adopt inconsistent practices. A consultation of what would be appropriate guidance to clarify whether and to what extent creative works generated by using foundation models can be eligible for copyright protection might be useful,” the report said.

The Challenges Ahead

From identifying the issues faced by each stakeholder to keeping regulations relevant in a fast-evolving space like AI, the challenges in implementing a regulatory framework for this technology are multifaceted. However, existing IT laws can help set a few fundamental grounds to fight the malicious uses of the tech.

Per the report, there are existing legal safeguards/instruments to protect against the misuse of foundation models for creating malicious synthetic media. 

Depending upon the context and negative effect of the malicious synthetic media in question, multiple laws, such as the IT Act, the Indian Penal Code (IPC), the Protection of Children from Sexual Offences Act, 2012 (POCSO), the Juvenile Justice (Care and Protection of Children) Act, 2015, the Digital Personal Data Protection Act (DPDPA), and others, can apply.

Khaitan & Co’s Walia said that one of the primary challenges in developing and deploying AI is the risk of compromising user privacy and data confidentiality, which the DPDPA addresses through a consent-based model for personal data processing. 

The act’s technology-neutral design makes it applicable to all emerging technologies, including AI-driven innovations, he said.

However, Walia said, “While the report acknowledges that the existing legal framework is adequate, it does not specify the measures needed to enhance its implementation. From a business perspective, although self-regulation is appreciated, businesses would benefit from clearer guidance on the practical application of regulatory norms.”

Meanwhile, the founder and CEO of GenAI startup Avaamo, Ram Menon, believes that while the initiatives by the working committee of the Indian government are noble, trying to mandate how a fast-moving technology like AI is built, created, and implemented is the wrong approach. 

“The guidelines could prove obsolete in a couple of months. The public, however, could be well served if the government focussed on how AI will impact consumers and the public,” Menon added.

He has suggested a more nuanced approach, for instance, how to handle bias in AI models when disbursing loans or other financial products. According to him, the focus should be on controlling and normalising outcomes that affect consumers due to the deployment of AI technology rather than over-regulating its development processes.

Be that as it may, as the country tries to balance innovation and regulation, these guidelines mark the essential first steps in ensuring that AI’s transformative potential is harnessed responsibly and sustainably.

[Edited By Shishir Parasher]
