Unfortunate as it may seem, bias can creep in at many stages of the deep-learning process. But how does AI bias happen? AI is built on big data and on the patterns it learns from that data automatically, in unsupervised ways. Machine learning has helped us move beyond mechanization towards automation, but the price we are paying is that computer science, as we currently use it, is not designed to detect bias.

It may look like just another consumer complaint, but Apple’s credit card is under investigation following complaints of algorithmic bias. How did the complaint come about? Tech entrepreneur David Heinemeier Hansson revealed on Twitter that he had received a credit limit 20 times higher than his wife’s. Soon afterwards, the New York Department of Financial Services launched a probe into Apple’s credit card [run by Goldman Sachs].

We’ll never know what Apple’s founder Steve Jobs would have thought about it, but his engineering co-founder Steve Wozniak backed the complaint. Fintech has been an investors’ priority as banks worldwide have increasingly turned to machine-learning technology to save time and cut costs.

Today’s AI applications are based on deep learning. Whereas mechanization was historically based on rules-based computing (that is, humans writing ‘if-then-do’ code), the beauty of deep-learning algorithms is that, given enough data, they find the patterns in it by themselves. The problem is that if enough evidence in the data points one way, the results can be biased towards what the algorithm has learnt as ‘likely results’. As a consequence, people’s lives can be affected. Unknowingly, these systems can perpetuate law enforcement bias or gender discrimination in employment. Such cases abound in the media as ‘AI failures’.
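To make the contrast concrete, here is a minimal sketch in Python. The credit-limit scenario, the numbers and the thresholds are invented for illustration only; the learned model simply replays whatever pattern sits in its historical examples.

```python
# Rule-based ("if-then-do") computing: a human writes the decision logic explicitly.
def rule_based_credit_limit(income_k, existing_debt_k):
    # Hypothetical thresholds, in thousands.
    if income_k > 100 and existing_debt_k < 10:
        return 20
    elif income_k > 50:
        return 10
    return 2

# Learning-based approach: the model infers its own "rules" from historical examples.
# If those examples encode past discrimination, the learned rules reproduce it.
from sklearn.neural_network import MLPRegressor

historical_applications = [[120, 5], [45, 20], [80, 2]]   # [income, debt], in thousands
historical_limits = [20, 2, 12]                           # limits granted in the past

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(historical_applications, historical_limits)
print(model.predict([[90, 3]]))  # whatever pattern the history held, it is replayed here
```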

How AI bias happens

Is there any way to remedy this learning bias? If we want to fix it, or at least avoid the worst results of unsupervised AI, we need to understand the internal mechanics of how the bias arises in the first place.

As mentioned above, the most frequent explanation for AI bias points to training data that contains too many examples of a single trait and not enough variety. However, experts know the reality is more complex: bias can creep in long before the data is even made available to the algorithm, and it can also appear at many other stages of the deep-learning process.


Setting limits and goals

The first thing a computer scientist does when creating a deep-learning model is to decide what they want to achieve. An HR firm or a law enforcement aid company, for example, might set as its goal selecting the best candidates for a particular job, or predicting the likelihood of crimes being committed at a certain time of day or night in certain city areas (these are real-life examples).

However, a candidate’s “fit” or the concept of “crime” may be rather ambiguous in some cases. In order to translate those concepts into something tangible a computer can understand, the HR company or the law enforcement aid program has to decide whether to look at MBA-qualified personnel, people who have never had a break in their careers (motherhood is still the cause of gap years in many women’s careers), or the physical attributes of those committing crimes in a certain area (age, gender, weight, height, race, etc.).

That set of criteria becomes a first-pass filter for potentially employable candidates, or for certain areas or people detected by cameras. Within the context of our goal, we will select highly qualified, hard-working people who have never had a break in their careers, and certain areas within a city with high crime rates and potential criminals. The problem is that “those decisions are made for various reasons other than fairness or discrimination,” explain Manuel Herranz and Mercedes Garcia, PangeaMT’s CEO and Chief Scientist, who specialize in what they call “the fairness factor in machine learning”.

If the algorithm finds that males hardly ever take a break in their careers to raise a child, it will prioritize male over female candidates; and if an area has a high crime rate and suspects have historically belonged to a particular social class, gender or race, it will prioritize police patrols in those neighborhoods even though drug consumption and gun crime may be happening in other areas. Both companies will end up engaging in discriminatory behavior even if that was never their intention, as the toy sketch below illustrates.
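Here is a hypothetical sketch of the hiring example, using scikit-learn and invented toy data. The model is never shown gender at all, yet because past hires rarely had career breaks, two equally experienced candidates get very different scores depending only on that flag.

```python
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, had_career_break (1 = yes)] -- invented toy history.
past_candidates = [
    [8, 0], [6, 0], [7, 0], [9, 0],   # historically hired: no career break
    [8, 1], [7, 1], [9, 1],           # equally experienced, but not hired
]
was_hired = [1, 1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(past_candidates, was_hired)

# Two candidates with identical experience; only the career-break flag differs.
print(model.predict_proba([[8, 0]])[0][1])  # high "hire" probability
print(model.predict_proba([[8, 1]])[0][1])  # low "hire" probability: the break is penalized
```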

Data collection – more bias

Data samples can add bias to the learning process typically in two ways: by not having enough variety or by having too much of a certain trait.

Let’s imagine we want to create a machine translation engine to translate general texts, but we lack sports, medical, history, politics and dialogue vocabulary and expressions. The engine will struggle to translate medical reports or simple dialogues simply because it has never seen similar data.

Likewise, imagine a translation engine where data with vocabulary and expressions from those domains is 100 times more abundant than the rest. Faced with the task of translating a P&L sheet, the engine would likely reach for sports, dialogue or political terminology when translating ‘expenses’, ‘entertainment’ or ‘personnel’. Clearly, the same would happen if we only fed it pictures of certain types of trees or plants and asked it to recognize others, or only male or only female candidates. The algorithm will learn just a small representation of a wider spectrum, or just one trait.
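A quick way to see this kind of imbalance is simply to count how the training corpus breaks down by domain. The labels and figures below are invented for illustration; the point is that a domain making up well under 1% of the data will rarely win when the engine chooses the “likely” sense of an ambiguous word.

```python
from collections import Counter

# Hypothetical domain labels for the sentence pairs in a training corpus.
corpus_domains = ["sports"] * 100_000 + ["dialogue"] * 80_000 + ["finance"] * 800

counts = Counter(corpus_domains)
total = sum(counts.values())
for domain, n in counts.items():
    print(f"{domain}: {n} sentences ({n / total:.1%} of the corpus)")

# With finance so scarce, words such as "entertainment" or "personnel" are far more
# likely to be rendered with their sports/dialogue sense than their financial one.
```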

Data preparation

Finally, bias can be introduced during the data preparation stage, that is, when we select the attributes we want the algorithm to take into consideration. (This should not be confused with the setting-limits-and-goals stage: you can use the same attributes to train a model for very different purposes and goals, and also use very different attributes to train a model for the same goal.)
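A minimal sketch of that preparation step, with invented column names: the attributes we choose to keep shape what the model can base its decisions on, and dropping a sensitive column is no guarantee of fairness if another column acts as a proxy for it.

```python
import pandas as pd

# Hypothetical candidate table assembled during data preparation.
candidates = pd.DataFrame({
    "years_experience": [8, 6, 9, 7],
    "has_mba":          [1, 0, 1, 0],
    "career_break":     [0, 1, 0, 1],
    "gender":           ["M", "F", "M", "F"],
})

# Dropping "gender" does not remove the bias if "career_break" correlates with it.
features_for_training = candidates[["years_experience", "has_mba", "career_break"]]
print(features_for_training)
```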

Can we fix this AI bias? The answer is that we [and many scientists] are working on it… and we will disclose more next week.