November 25, 2021

The problem of machine bias


A few weeks back I wrote about machine bias. I went over how machine learning works and the three ways bias can be introduced into algorithms and artificial intelligences. But even after all of that, we might still be tempted to think that machine bias is not really a big problem. So what if predictive text assumes my pilot is a man and my nurse is a woman?

Cathy O’Neil, a mathematician and the author of the book Weapons of Math Destruction, highlights the risk of machine bias in many contexts. People are often too willing to trust mathematical models because they assume machines are immune to human biases, and so they don’t hold them to the same standards. What many scientists and policymakers find concerning is that these algorithms are being used in increasingly consequential areas such as medicine, human resources and even the penal system.

“I want a machine-learning algorithm to learn what tumors looked like in the past, and I want it to become biased toward selecting those kinds of tumors in the future, but I don’t want a machine-learning algorithm to learn what successful engineers and doctors looked like in the past and then become biased toward selecting those kinds of people when sorting and ranking resumes.”

Shannon Vallor, Santa Clara University

Machine learning programs are bound to encounter historical patterns that reflect racial or gender bias. It can be difficult to tell what’s bias and what’s just a fact of life. People who use these programs should at least be aware that they’re going to be biased, and shouldn’t assume that they are completely objective.
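To make this concrete, here is a minimal sketch of how those historical patterns show up in practice, using word embeddings of the kind that sit behind predictive text. It assumes the gensim library and downloads a small pre-trained GloVe model on first run; the specific word pairs are illustrative choices on my part, not a rigorous bias test.

```python
# Minimal sketch: historical bias surfacing in learned word representations.
# Assumes the gensim library; api.load() downloads a small pre-trained
# GloVe model (~66 MB) the first time it runs.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # embeddings trained on web/news text

for occupation in ("nurse", "pilot", "engineer"):
    print(
        occupation,
        "-> man: %.3f" % vectors.similarity(occupation, "man"),
        "| woman: %.3f" % vectors.similarity(occupation, "woman"),
    )
# Occupations that history has associated with one gender tend to sit
# measurably closer to that gender's words in the embedding space.
```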

What can we do about it?

For starters, it’s important that we talk about the problem, because this is not something that algorithms can fix themselves. The solution needs to happen within the organizations and among the people who build the AI, at a societal and legal level, and at a consumer level.

In terms of the organizations that design and develop these algorithms, a move towards diversity and inclusion is essential: given that it is mostly men who create algorithms, the bias found in those algorithms skews heavily toward men. Furthermore, I would venture to argue that hiring algorithm developers in the future will require testing them for bias before they even begin, and that new positions for “bias checkers” or “algorithm testers”, people who examine the output of algorithms, will need to be created. That way organizations might be able to predict and correct the biases that appear in their data.

In order to prevent a machine learning system from exploiting known biases, developers could pre-process the training data to explicitly remove them. They could also test the output of machine learning systems against known biases and then post-process the results to remove them. But the consequences of these interventions could be detrimental in a whole new way, which is why more research into the extent of the problem and potential solutions is needed.
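As a rough illustration of both ideas, here is a minimal sketch of pre-processing and output auditing on a hypothetical hiring dataset. It assumes pandas and scikit-learn; the applicants.csv file, its column names, and the 0.8 disparity threshold are illustrative assumptions, not a vetted methodology.

```python
# Sketch: pre-processing (remove a sensitive attribute before training)
# and post-hoc auditing (compare selection rates across groups).
# The CSV file, column names, and 0.8 "four-fifths" threshold are
# hypothetical, chosen only to illustrate the two interventions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("applicants.csv")          # hypothetical historical hiring data
X = df.drop(columns=["hired", "gender"])    # pre-process: strip label + sensitive attribute
y = df["hired"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Post-hoc audit: even with "gender" removed, proxy features can
# reintroduce the same bias, so check outcomes per group anyway.
df["predicted"] = model.predict(X)
rates = df.groupby("gender")["predicted"].mean()
print(rates)
if rates.min() / rates.max() < 0.8:         # crude disparate-impact ratio
    print("Warning: selection rates differ across groups; review model.")
```

Note that dropping the sensitive column is rarely enough on its own, since proxies such as zip code or hobbies can encode the same information, which is exactly why the audit step matters.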

“The best practices of how to combat bias in AI is still being worked out. It requires a long-term research agenda for computer scientists, ethicists, sociologists, and psychologists,”

Aylin Caliskan, Princeton computer scientist

Another important part of solving this problem is the role played by policymakers and watchdog organizations, who can advocate and legislate for more transparency in algorithmic systems. There is still a lot of misunderstanding, among the general public and even among the people who design these AIs, about how exactly they work. Right now, we can increase transparency by publishing disclaimers and using simple language, so people can truly understand the impact of these algorithms.

From a personal perspective, we need to become more aware of the impact these algorithms have on our lives, and be more skeptical of the results we receive from machines. We need to ask ourselves: why are we seeing the results we are seeing? What are the underlying mechanisms at play? What part of the story are we missing? The question isn’t simply a matter of which Facebook posts we see every day. For many of us, it concerns which job we get, how much we make, and even whether we keep our freedom. I think those decisions shouldn’t be left to machines.

It’s unlikely we will ever be able to create fully objective algorithms; as long as humans are involved in the process, some bias is inevitable. But it is important to remember that AI learns how the world has been. It’s a great tool for telling us about our past, but it doesn’t know how the world should be. That is for us to decide.