Putting the “AI” in fAIrness

  1. Pre-processing → the data may be problematic for one of two reasons: (i) it is not representative of the population in question, in which case the people facing the problem are necessarily excluded from the solution, or (ii) everyone who should be included is included, but the data reflects the bias that exists in the real world.
  2. In-processing → the model we are using, often a classifier, is problematic not because it inherently espouses discriminatory views, but because the algorithm learns patterns that amplify pre-existing biases found in the data.
  3. Post-processing → the results/predictions are troublesome; for instance, the Type I and Type II error rates are disproportionately skewed towards a particular group of people.
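The post-processing point above can be made concrete with a simple audit: compare Type I (false positive) and Type II (false negative) error rates across groups, and treat large gaps as a red flag. Below is a minimal sketch in plain NumPy; the function name and example data are illustrative, not from the original article.

```python
import numpy as np

def error_rates_by_group(y_true, y_pred, group):
    """Compare Type I (false positive) and Type II (false negative)
    error rates across groups for binary predictions.

    Large gaps between groups suggest the kind of post-processing
    bias described above."""
    rates = {}
    for g in np.unique(group):
        mask = group == g
        yt, yp = y_true[mask], y_pred[mask]
        fp = np.sum((yp == 1) & (yt == 0))  # false positives
        fn = np.sum((yp == 0) & (yt == 1))  # false negatives
        neg = np.sum(yt == 0)               # actual negatives
        pos = np.sum(yt == 1)               # actual positives
        rates[g] = {
            "fpr": fp / neg if neg else 0.0,  # Type I error rate
            "fnr": fn / pos if pos else 0.0,  # Type II error rate
        }
    return rates

# Illustrative data: group "a" receives every positive prediction it
# shouldn't, group "b" receives none.
rates = error_rates_by_group(
    y_true=np.array([0, 0, 1, 1, 0, 0, 1, 1]),
    y_pred=np.array([1, 1, 1, 1, 0, 0, 1, 1]),
    group=np.array(["a", "a", "a", "a", "b", "b", "b", "b"]),
)
print(rates)  # group "a" has fpr 1.0, group "b" has fpr 0.0
```

An audit like this doesn't fix anything on its own, but it tells you whether a post-processing intervention (say, per-group decision thresholds) is even needed.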

Explainable/Interpretable AI

Bias Mitigation in AI

What AI practitioners can do

  1. Establish a framework and philosophy → this is important for understanding what is wrong and how we want to go about solving it. It cannot be static, and it cannot be without widespread engagement.
  2. Contextualising the toolset → do we want to focus our efforts on interpretability or on bias mitigation, and who is it for?
  3. Keeping abreast of social theory → the sociology that underpins the society we as AI practitioners are trying to solve for is just as important as the algorithms we build and implement. It's important to understand the social landscape we're in, and also to engage experts in the humanities.
  4. Communities → it's important to be a part of communities doing meaningful work because the work is meaningful, of course, but also because it gives you a chance to commiserate, connect, and decompress with people who are safe and committed to similar work.


Data scientist, aspirant AI ethicist interested in responsible AI-solutioning for human problems, big fan of cat videos

Kerryn Gammie