Decoding The Right To Explanation In Artificial Intelligence


SUMMARY

  • One of the most important policy developments to regulate the application of AI was included in the GDPR in 2018
  • Much like the variety of internal combustion engines that exist today, AI models and algorithms come in different types with varying levels of complexity
  • When making decisions, AI does not attach meaning to and categorize new information the way humans do

For most people, Artificial Intelligence is a technology that powers chatbots or, at best, image recognition – essentially, software that can tell images of cats from images of dogs. Others view it as a serious threat to their day jobs. Regardless of its impact on their lives, people view AI as a technology with tremendous future potential. While the future of AI elicits awe and fear, its impact on the present remains largely unacknowledged. From shortlisting resumes to spreading propaganda, AI is working harder on us than most of us know. The effects are significant, and leaders around the world are fast waking up to them.

Batting for a regulatory framework at MIT’s AeroAstro Centennial Symposium, Elon Musk opined, “I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish. I mean with artificial intelligence we’re summoning the demon.”

One of the most important policy developments to regulate the application of AI was included in the GDPR in 2018. Article 22, under Section 4 of the GDPR, in essence states that if your application for a job, a loan or citizenship is rejected based on the scores of automated intelligent processing software, you have the right to demand an explanation. Non-compliance could invite a fine of up to €20 Mn or 4% of the company’s global annual turnover, whichever is higher. The idea is to eliminate discriminatory behaviour predictions and stereotyping based on data. And that, in a nutshell, is the right to explanation.

Why Is Right To Explanation Necessary?

The scores used for making predictions come from a set of algorithms evaluating several seemingly unrelated variables and the relationships between them. Without human intervention, the results can be erratic at times. Left unchecked, they can set the stage for new-age stereotypes and fuel existing biases. While AI works with data, the data itself can breed bias, failing even the most robust AI systems.

For example, the rejection of a mortgage application by an AI-based system can have unintended fallout. A self-learning algorithm may, based on historical data, match the age and zip code of an applicant to a group of people who defaulted on their loans in the last quarter. In doing so, it may overlook favourable criteria, like asset quality, that are absent from the historical data.
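A minimal sketch of this failure mode (the data, feature encoding and model below are hypothetical, invented purely for illustration): a model trained only on the variables recorded in historical data cannot credit an applicant for a strength it has never seen.

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Hypothetical historical data: only age and a zip-code risk encoding
# were recorded. Labels: 1 = defaulted, 0 = repaid.
X_train = np.array([[23, 0.90], [25, 0.80], [24, 0.85],
                    [45, 0.20], [50, 0.10], [48, 0.15]])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# A young applicant from the same neighbourhood as past defaulters,
# but with excellent asset quality -- a feature the model never learned.
applicant = np.array([[24, 0.88]])
print(model.predict(applicant))  # likely [1]: rejected
# Asset quality cannot influence the score because it isn't an input;
# the model can only reinforce the historical age/zip-code pattern.
```

The point is not this toy model but the pattern: whatever is missing from the training data is invisible to the score, no matter how relevant it is to the applicant.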

Without a valid explanation, the rejection could invite legal action for stereotyping and discrimination, particularly if the neighbourhood houses people mostly belonging to a minority group. Therefore, as a technology with the potential to make decisions on behalf of humans, AI needs to deliver on ethics, fairness and justice in human interactions. At a bare minimum, it needs to satisfy the following types of justice:

  • Distributive – socially just allocation of resources, opportunities and rewards
  • Procedural – a fair and transparent process to arrive at an outcome
  • Interactional – both the process and the outcome need to treat the affected people with dignity and respect

The right to explanation closes this all-important loop of justice in the use of AI.

AI And Challenges To Right To Explanation

Much like the variety of internal combustion engines that exist today, AI models and algorithms come in different types with varying levels of complexity. The outcome of simpler models, like linear regression, is relatively easy to explain: the variables involved, their weights, and how they combine to produce the output score are all known.
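A minimal sketch of why linear models are considered self-explanatory (the data and feature names are invented for illustration): each fitted coefficient directly states how much a one-unit change in a variable moves the score.

```python
from sklearn.linear_model import LinearRegression
import numpy as np

# Hypothetical data: predicting a credit score from income and existing debt.
X = np.array([[50, 10], [80, 5], [30, 20], [60, 8]])  # [income, debt], thousands
y = np.array([650, 780, 520, 700])

model = LinearRegression().fit(X, y)

# The model is a transparent formula: score = intercept + w1*income + w2*debt.
# Each weight is a direct, human-readable explanation of the outcome.
print("intercept:", model.intercept_)
print("weights:", model.coef_)  # e.g. "+w1 points per extra thousand of income"
```

A loan officer could read the fitted weights aloud to an applicant; that is precisely the property deeper models give up.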

Complex algorithms such as deep learning, while striving for greater accuracy, act as a black box – what goes on within, stays within. With algorithms that self-learn and construct their own patterns, the cause of a certain outcome is difficult to explain (see the sketch after this list), because:

  • The variables the algorithm actually relies on aren’t known
  • The importance/weight attached to the variables cannot be back-calculated
  • Several intermediate constructs and relationships between variables remain unknown
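A minimal sketch of this opacity (the data and network sizes are invented for illustration): even a tiny neural network stores what it has learned as matrices of interacting numbers, none of which maps back to a single variable the way a linear model’s weights do.

```python
from sklearn.neural_network import MLPClassifier
import numpy as np

# Hypothetical training data with four input variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)  # outcome mixes variables

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000).fit(X, y)

# The "explanation" of any prediction is buried in these weight matrices.
for i, layer in enumerate(model.coefs_):
    print(f"layer {i} weights: shape {layer.shape}")
# Shapes like (4, 16), (16, 16) and (16, 1): hundreds of interacting numbers,
# none of which answers "how much did variable 3 matter?" on its own.
```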

If university admission processes were powered wholly by neural networks, the process would be even more opaque than it is today. Denied a seat at a leading university because its algorithm finds a certain “background” to be less of a right fit, you would be left wondering which part of your “background” worked against you. Even worse, the admissions committee would be unable to explain it to you. In a society where social inequities abound, an opaque AI is the last thing universities should ask for.

On the other hand, a completely transparent AI would leave the algorithm vulnerable to being gamed and could lead to the hijacking of the entire admission process. The right to explanation, therefore, is about AI achieving the right degree of translucency; it can be neither completely transparent nor completely opaque.

The Way Forward

When making decisions, AI does not attach meaning to and categorize new information the way humans do. It reinforces the most common patterns and excludes cases that aren’t in the majority. One possible technical solution being actively explored is making AI explainable. Explainable AI (XAI) is indispensable in high-risk, high-stakes use cases, like medical diagnosis, where trust is integral to the solution. Without enough transparency into their internal processing, black-box algorithms fail to offer the level of trust required to save a life.
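Permutation importance is one widely used, model-agnostic route to such explanations (a minimal sketch; the data and model here are invented, and real XAI pipelines involve far more than this): shuffle one input at a time and measure how much the model’s score drops.

```python
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier
import numpy as np

# Hypothetical diagnostic data: four anonymised clinical measurements.
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

black_box = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

# Shuffle each feature in turn and measure the drop in score: a
# model-agnostic, human-readable account of which inputs drove predictions.
result = permutation_importance(black_box, X, y, n_repeats=20, random_state=1)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
# Features 0 and 3 should dominate, matching how y was constructed.
```

More sophisticated tools in the same spirit, such as LIME and SHAP, attribute individual predictions to input features rather than scoring the model as a whole.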

With fragility so entrenched in its fundamental architecture – both technological and statistical – AI needs regulation. As Sundar Pichai wrote in the Financial Times earlier this year, “Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”

The legal framework regulating AI is still evolving and remains in a state of flux across different parts of the world.

In India, with the Right to Privacy taking center stage in the national debate a few months ago, we are not far from a comprehensive law regulating AI taking shape. Notably, a discussion paper published by NITI Aayog in June 2018 broaches the subject in considerable detail. Over time, as AI’s sphere of influence expands, the laws will respond with more stringent and extensive provisions.

As the technology unfolds and new applications are discovered, there is a need for self-regulation by the industry. Organizations need to proactively focus on implementing XAI that preserves the human nature of interactions, which is built on trust and understanding. If nothing else, it will prevent potentially life-changing innovations from being stifled by what may well be well-intentioned protective laws. As with most things in life, the solution lies in striking the right balance.


Note: The views and opinions expressed are solely those of the author and do not necessarily reflect the views held by Inc42, its creators or employees. Inc42 is not responsible for the accuracy of any of the information supplied by guest bloggers.
