Putting the “AI” in fAIrness

Kerryn Gammie
6 min read · Apr 20, 2021

As the world’s data grows, so does our propensity for AI-solutioning. This, like many tech-things, offers the promise of hope, growth, and excitement. But that well-intentioned inclination does not shield AI-solutions from the ills that come with doing anything human: the possibility of harm, intentional or otherwise. Before we get into the consequences of AI, let’s spend some time understanding what we’re talking about.

What is AI? Generally (that is, philosophically), it is a computer/machine’s proximity to being human. There is a lot to unpack there, so we won’t do it right now; instead, let’s accept that definition and add that AI is a school of thought that uses technology, maths, and stats to harness the power of data by creating insights and making decisions.

What is the problem with AI-solutioning? Through a specifically human and social lens, it is not directly interpretable, and it can propagate harmful social bias at scale. If we are not able to understand how a machine works, how it comes to “think”, we cannot begin to understand how it makes decisions about us, and for us. Our tendency as humans to shift the responsibility of decision-making to de facto versions of authority is not new, but the brand of authority is.

One doesn’t need to exhaust the search engine to find examples of AI technology being discriminatory; from cases of racism in facial recognition to sexist systems, it is a wonder Viola Davis hasn’t been cast to play the lead in our communal, and proverbial, biopic. Before we can get into problem solving, though, we need to do some problem stating: bias can occur at each of the three main stages of modelling:

  1. Pre-processing → the data may be problematic for one of two reasons: (i) it is not representative of the population in question, in which case people who are facing the problem are necessarily excluded from the solutioning, or (ii) everyone who should be included is, but the data reflects the bias that exists in the real world.
  2. In-processing → the model we are using, often a classifier, is problematic not because it inherently espouses discriminatory views, but because the algorithm learns patterns that amplify the pre-existing biases found in the data.
  3. Post-processing → the results/predictions are troublesome; for instance, the Type I (false positive) and Type II (false negative) errors are disproportionately skewed towards a particular group of people.

If bias occurs at even one of these stages, it will grow: the model is deployed into the real world, where its decisions are likely to affect the very data it continues to learn from, thereby exacerbating the issue.
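To make stage one concrete, here is a minimal sketch of a representativeness check: compare how a protected attribute is distributed in the training data against the population the data is meant to describe. The DataFrame, column name, and reference proportions below are hypothetical placeholders, not taken from any real dataset.

```python
import pandas as pd

# Hypothetical training data with a protected attribute column.
train = pd.DataFrame({
    "gender":   ["female", "male", "male", "male", "female", "male", "male", "male"],
    "approved": [1, 0, 1, 1, 0, 1, 1, 0],
})

# Assumed population proportions, e.g. from census figures (placeholder values).
population = pd.Series({"female": 0.51, "male": 0.49})

# Proportion of each group actually present in the training data.
sample = train["gender"].value_counts(normalize=True)

# A large gap flags stage-one bias: the data under-represents a group.
gap = (sample - population).abs().sort_values(ascending=False)
print(gap)
```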

Whew. Bleak. But now that we’ve defined the problem, we can begin by defining the solution, in part. I want to start by saying that fairness, while important, is not easy; it is not easy to define and understand, and it is not easy to solve for. Unfairness is the result of years of missteps and calculated harms, and it will not be undone by a few algorithms, even the fancy ones. Fairness requires systemic change, interdisciplinary skill sets and skill-sharing, collaboration, and stamina. With that in mind, let’s take a deep breath and start by trying to understand fairness. I posit two definitions:

Philosophically: Donning the “veil of ignorance”, as Professor John Rawls suggests: what system would one consider fair if one didn’t know whether one belonged to the discriminated or the discriminating group? VERY loosely: how to treat everyone equally, without discrimination.

Algorithmically: Similarly distributed errors across protected groups.
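That algorithmic definition can be checked directly. Here is a minimal sketch that compares false positive (Type I) and false negative (Type II) rates across two groups; the labels, predictions, and group memberships are made-up arrays, and a large gap between groups is exactly the “dissimilarly distributed errors” we want to avoid.

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false positive rate, false negative rate) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fpr = np.mean(y_pred[y_true == 0] == 1)  # Type I errors among actual negatives
    fnr = np.mean(y_pred[y_true == 1] == 0)  # Type II errors among actual positives
    return fpr, fnr

# Hypothetical model outputs and a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

# Compare error rates group by group; similar numbers = fairer errors.
for g in np.unique(group):
    mask = group == g
    fpr, fnr = error_rates(y_true[mask], y_pred[mask])
    print(f"group {g}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```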

This post will not focus on the philosophy of fairness; however, I want to emphasise its necessity and gravity in coming to identify, create, and disseminate tools for algorithmic fairness in AI systems. Instead, I will spend some time talking about something a lot simpler, though still non-trivial: algorithmic fairness. If you, like me, have succumbed to the numbing distraction of social media (LinkedIn and Twitter), you will likely have come across posts expressing outrage at big tech companies and their AI solutions, and posts about how machines are going to kill us. Both feel like an episode of Black Mirror, minus the great cinematography. You’ve likely also encountered posts that talk about fairness and how to do AI fairly using some set of tools; these usually fall into two distinct toolboxes: explainability/interpretability and bias mitigation. The former is concerned with making black-box algorithms interpretable for humans, while the latter is concerned with preventing the black box from discriminating against protected groups. I want to focus on the latter, but will briefly outline the mechanics behind the former.

Explainable/Interpretable AI

Goal: look inside the black-box to see what it’s doing.

One way of doing this is to use a simple, often linear, model to approximate the more complex AI model around a specific instance of the data. If you randomly perturb (change) the instance and have the black-box model make predictions on those perturbed copies, you can fit the simple model to the results and see how the black box responds to each of the variables. This lets you see which variables are especially predictive and act as the defining features for that prediction. Tools like LIME make use of this methodology.
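As a rough sketch of that idea (not LIME’s actual implementation), assuming scikit-learn is available, with a random forest standing in as the black box and purely synthetic data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-in "black box": a random forest trained on synthetic data.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# The single instance we want to explain.
instance = X[0]

# 1. Randomly perturb the instance many times.
perturbed = instance + rng.normal(scale=0.5, size=(1000, 4))

# 2. Ask the black box for its predictions on the perturbed copies.
preds = black_box.predict_proba(perturbed)[:, 1]

# 3. Weight copies by how close they are to the original instance.
weights = np.exp(-np.linalg.norm(perturbed - instance, axis=1))

# 4. Fit a simple linear model as a local approximation.
surrogate = Ridge().fit(perturbed, preds, sample_weight=weights)

# The coefficients suggest which variables drive this particular prediction.
print(dict(enumerate(surrogate.coef_.round(3))))
```

LIME itself layers careful sampling, feature selection, and interpretable representations on top of this basic recipe, but the perturb-predict-fit loop is the core of it.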

Bias Mitigation in AI

Goal: keep the level of discrimination (unfairness) low while maintaining a high level of accuracy in the predictions.

IBM’s AIF360 team has done excellent work in conducting research and packaging it into usable libraries (in both Python and R) for developers. I encourage everyone to go through their web demo to get an intuition for their de-biasing algorithms and how to decide which is the most appropriate given the context.
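As a taste of what that looks like in practice, here is a minimal sketch using AIF360’s Reweighing pre-processor on a toy DataFrame; the columns and values are invented for illustration, and exact argument names may differ slightly between library versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute, 'label' the favourable outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
    "score": [0.2, 0.4, 0.6, 0.3, 0.7, 0.8, 0.5, 0.9],
    "label": [0, 0, 1, 0, 1, 1, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"],
    favorable_label=1, unfavorable_label=0,
)

privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# How skewed are favourable outcomes before any intervention?
before = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact before:", before.disparate_impact())

# Reweighing adjusts instance weights so outcomes balance across groups.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
reweighted = rw.fit_transform(dataset)

after = BinaryLabelDatasetMetric(
    reweighted, unprivileged_groups=unprivileged, privileged_groups=privileged)
print("disparate impact after:", after.disparate_impact())
```

Reweighing is a pre-processing intervention; AIF360 also ships in-processing and post-processing algorithms, which maps neatly onto the three stages of bias outlined earlier.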

If you are anything like me and spend too much time doing Buzzfeed quizzes, then the Aequitas fairness tree is a good starting point for figuring out the route you could pursue given your objectives.

What AI-practitioners can do

Now that we’ve gotten a sense of what the algorithmic landscape looks like, we can consider what that actually means for AI-practitioners. Again, it needs to be made clear that safe and reliable AI tech is a shared responsibility; that is why ethics needs to be part of an organisational culture. Here’s a non-exhaustive list of things we can do:

  1. Establish a framework and philosophy → this is important for understanding what is wrong and how we want to go about solving it. It cannot be static, and it cannot be without widespread engagement.
  2. Contextualising the toolset → are we focusing our efforts on interpretability or on bias mitigation, and who is it for?
  3. Keeping abreast of the (social) theory → the sociology that underpins the society we as AI-practitioners are trying to solve for is just as important as the algorithms we build and implement. It’s important to understand the social landscape we’re in, and also to engage experts in the humanities.
  4. Communities → it’s important to be a part of communities doing meaningful work because the work is meaningful, of course, but also because it gives you a chance to commiserate, connect, and decompress with people who are safe and committed to similar work.

Originally published via Finchatbot

