A framework to audit AI for trust, ethics, and bias

Disclaimer: This is a personal blog. Any views or opinions represented in this blog are personal and belong solely to the blog owner and do not represent those of people, institutions or organizations that the owner may or may not be associated with in professional or personal capacity, unless explicitly stated.

Note: I can talk about this topic all day, but I have kept the scope at the highest level so that the ideas in this article are easier to grasp. I will be posting more articles that delve into the details of how organizations can audit for trust, ethics, and bias.

Artificial intelligence (AI) requires humans to set up rules that will be coded by programmers. These rules dictate how the various AI systems operate within society and drive value for organizations. Many people tend to think AI is very sophisticated and beyond comprehension, and they throw around buzzwords such as “machine learning,” “natural language processing (NLP),” and “deep learning.” Let me try to explain the most popular AI systems:

  • Machine learning relies on programmers defining an objective and the constraints of a process, then feeding the system data; rather than executing only hand-coded rules, the machine learns patterns from that data and keeps learning new things along the way. What such a program learns can be endless if proper constraints aren’t set.
  • Neural networks figure out the rules themselves, much as an infant does while growing up and being exposed to the surrounding environment. Training neural networks this way, through many stacked layers, is called “deep learning.”
  • NLP and image & speech recognition take the messy, unstructured world around us and feed that data, with many dimensions, to a computer. The computer then tries to find patterns we could not easily see, or to make sense of events that were not previously explainable, and, based on the data and dimensions programmed, provides decisions in real time and makes predictions about the future.

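To make the distinction between hand-coded rules and learned rules concrete, here is a minimal sketch in Python. All of the data, the income variable, and the cutoff values are hypothetical, and the one-variable "learner" is deliberately simplistic; it only illustrates the idea that the machine derives its rule from labeled examples rather than from a programmer.

```python
# Hypothetical lending history: (annual_income_in_thousands, loan_was_repaid)
history = [(20, False), (35, False), (50, True), (80, True), (120, True)]

# Hand-coded rule: a programmer fixes the cutoff up front and it never changes.
def approve_hand_coded(income):
    return income >= 100  # chosen by a human, not by the data

# "Learned" rule: derive the cutoff that best separates repaid from defaulted
# loans in the historical data.
def learn_cutoff(data):
    candidates = sorted(income for income, _ in data)
    best_cutoff, best_correct = candidates[0], -1
    for cutoff in candidates:
        # Count how many historical outcomes this cutoff predicts correctly.
        correct = sum((income >= cutoff) == repaid for income, repaid in data)
        if correct > best_correct:
            best_cutoff, best_correct = cutoff, correct
    return best_cutoff

cutoff = learn_cutoff(history)
print(cutoff)        # the cutoff inferred from the data, not set by a human
print(50 >= cutoff)  # whether the learned rule approves a $50k applicant
```

The hand-coded cutoff stays fixed until a programmer changes it; the learned cutoff moves whenever the training data does. That is exactly why the data fed to such a system, and the constraints placed on it, matter so much for the questions below.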
With all that said, I’d like to get to the point of my post. AI is doing amazing things, and compelling use cases are being identified. However, using AI requires trust. Trust is essential when technology makes decisions that have an impact on people’s lives. And with technology being given this power to make decisions, companies and governments need to demonstrate that their technology is free from bias and has sound ethics embedded in it.

It is challenging to demonstrate trust, ethics, and bias when it comes to using the various types of AI, but we will have to start somewhere. Over the last year, there have been numerous reports of incidents and fines related to the use of AI.

Based on the growing number of incidents, it has become imperative that leaders on boards of major organizations be able to provide answers to regulators, customers, and the general public on the following questions:

  • What is your company using AI for, and what was it optimized to do? Leaders need to be able to clearly explain what their AI is optimized for. AI used to grant credit card approvals and set interest rates, for example, will be optimized to ensure that the company manages its risk effectively by offering appropriate credit limits and interest rates to individuals considered low-risk. This sounds fine on the surface, but a closer look shows that the optimization would grant higher credit limits and favorable interest rates to wealthy individuals, because they have multiple assets, low risk ratings, and wide access to credit. Individuals considered vulnerable (e.g., immigrants, women, people of color, low-income individuals) would receive higher interest rates, lower credit limits, or an outright denial simply because they do not have wide access to credit. This demonstrates bias in how the AI was programmed, through variables that favor applicants of a higher social class; an ethical problem, because the system discriminates by class, gender, and social standing; and a conflict with the company’s stated values (diversity, inclusion, etc.), all of which diminish trust. Organizations need to clearly understand what their algorithms are optimized for and be ready to accept the tradeoffs.
  • What were your tradeoffs? If you designed an algorithm like the one above, what did you sacrifice to make the AI work at full efficiency? Did you sacrifice customer privacy by feeding the AI system numerous variables about the user across multiple product lines? Was the data source reliable, and can its lineage be traced? An AI system has no human emotions and will make decisions strictly based on the logic programmed. You will also need to understand the scenarios your AI system may face that involve variables not considered during programming, and the tradeoffs you are willing to accept should those scenarios occur.
  • Were there any biases, and how can you demonstrate this? Bias is everywhere, and we all have them. Behavioral economists have confirmed that humans are susceptible to several cognitive biases; take the time to observe the countless instances of irrational human behavior around you and you will see them everywhere. Leaders should be ready to explain how their system is free from bias and whether the AI system’s decision variables were challenged and assessed by independent parties, both for bias and for conformity to design.

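Questions like these can be backed by simple quantitative checks. One common first screen for the credit-approval scenario above is the “four-fifths rule” from US employment-discrimination guidance: compare favorable-outcome rates across groups and flag any ratio below 0.8 for closer review. Here is a minimal sketch, using made-up decision data and hypothetical group labels:

```python
# Hypothetical audit data: (group_label, application_was_approved)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rate(decisions, group):
    """Share of applicants in `group` who were approved."""
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate(decisions, "group_a")  # 3 of 4 approved -> 0.75
rate_b = approval_rate(decisions, "group_b")  # 1 of 4 approved -> 0.25
impact_ratio = rate_b / rate_a

# Under the four-fifths rule, a ratio below 0.8 flags potential disparate impact.
print(f"impact ratio: {impact_ratio:.2f}, flagged: {impact_ratio < 0.8}")
```

A failed check like this does not prove discrimination on its own, but it is exactly the kind of evidence an independent party would produce when challenging an AI system’s decision variables.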
A framework to audit AI

Based on some research I conducted, I have developed a high-level framework on how to audit AI for trust, ethics, and bias. This framework is meant to serve as a starting point and can be adapted as needed.

[Framework diagram — developed and created by Lorenzo Nagreadie]

In short, using AI has a lot of merit and will allow people to do more value-added work, but AI will only succeed on a foundation of trust. The framework above gives leaders an approach to understanding their exposure as it relates to AI and its use. The time has come for leaders in major organizations to be aware of:

  1. Their exposure if their AI algorithms fail, and their obligations to fulfill their duties.
  2. Whether their use of AI algorithms is legal, ethical, and responsible.
