Are fair algorithms a reality? Making Artificial Intelligence work better

Artificial intelligence (AI) is already part of the public discussion in Estonia. Marten Kaevats, national digital advisor to the Estonian Government, announced in September that institutions have begun concrete talks on a legal framework for the implementation of Artificial Intelligence-based solutions.

One of the threats mentioned is that the algorithms driving AI systems may carry biases. As Kaevats underlined, “all these kinds of algorithms will be coming to our lives from various sides. Starting from the Facebook algorithm, which chooses which kind of content we have to see, to different smartphone devices that allow us to use services in a more user-friendly way. But this means that algorithms have certain biases, they decide upon the data that has been collected before on humans”.

In an ideal world, intelligent systems and their algorithms would be objective, cold and unbiased; but these systems are built by humans and shaped by the social, historical and political context in which they are created. An AI system learns from the data it receives. German philosopher Immanuel Kant was convinced that injustice was a state of nature: in The Metaphysical Elements of Justice, he wrote that even if we were all perfectly moral, there would still be no justice, because each of us would stick to his or her own subjective interpretation of reality.

Within the domain of self-driving vehicles, for example, we assume that automated cars will have no preference in life-or-death decisions between the driver and a random pedestrian. But computers are not blank pages: they make decisions depending on how they are trained. So what are the main kinds of bias, and why do they arise?

Interaction bias

People can bias an algorithm through the way they interact, or fail to interact, with it. Algorithms are often “black boxed”, meaning only the owner, and to a limited extent the purchaser, can see how the software makes decisions. From a governmental point of view, basic machine-learning techniques are already being used in the justice system, as in the Wisconsin case of State v. Loomis in the US, in which Eric Loomis was found guilty for his role in a drive-by shooting. Mr. Loomis had to answer a series of questions that were then entered into Compas, a risk-assessment tool developed by a privately held company. The trial judge gave Loomis a long sentence partly because of the “high risk” score the defendant received from this black-box algorithm. Although Loomis challenged the sentence because he was not allowed to assess the algorithm, the state Supreme Court ruled against him, explaining that knowledge of the algorithm’s output was a sufficient level of transparency.

Latent bias

An algorithm may incorrectly correlate concepts and words with gender, race, sexuality, income or any other category. In job-recruiting systems, for example, algorithms tend to match the word “doctor” with men. Likewise, if we type the word “nurse” into any image-search bar, most of the pictures returned will show women; we will not see an equal distribution of men and women.
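This kind of latent association can be measured. The sketch below, with entirely made-up toy vectors (real systems would load pretrained word embeddings such as word2vec or GloVe), shows the standard technique: compare how close a word like “doctor” sits to “he” versus “she” in the embedding space.

```python
# Toy sketch of measuring latent (association) bias in word embeddings.
# The 3-dimensional vectors below are invented for illustration only;
# their skew mimics patterns reported in embeddings trained on large
# text corpora, where "doctor" leans male and "nurse" leans female.

import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

vectors = {
    "he":     [0.9, 0.1, 0.2],
    "she":    [0.1, 0.9, 0.2],
    "doctor": [0.8, 0.3, 0.5],
    "nurse":  [0.2, 0.8, 0.5],
}

for word in ("doctor", "nurse"):
    # Positive bias = closer to "he"; negative = closer to "she".
    bias = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: he-vs-she bias = {bias:+.2f}")
```

With these toy numbers, “doctor” comes out with a positive (male-leaning) score and “nurse” with a negative one; the same measurement applied to real embeddings is how such associations were first documented.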

Selection bias

The data used to train an algorithm may over-represent one population, making the system work better for that group at the expense of others. An example is the image recognition used in the Artificial Intelligence beauty contest run by Youth Laboratories: if the algorithm is trained mostly on white people, white contestants will find it easier to win AI-judged beauty contests. The answer to this problem is better data. If algorithms are shown a more diverse set of people, they’ll be better equipped to recognize them later and give better outputs.
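The effect is easy to reproduce with synthetic data. The sketch below, assuming an invented one-feature classification task, trains a simple threshold classifier on a set that is 95% group A and 5% group B, then measures accuracy per group; the under-represented group systematically fares worse.

```python
# Toy illustration of selection bias: a model trained on data that
# over-represents one group performs worse on the other group.
# All numbers are synthetic; this is a sketch, not a real vision system.

import random

random.seed(0)

def sample(group, label, n):
    # Group B's feature distribution is shifted relative to group A's,
    # standing in for systematic differences the model must learn.
    shift = 0.0 if group == "A" else -1.5
    center = 2.0 if label == 1 else -2.0
    return [(random.gauss(center + shift, 1.0), label) for _ in range(n)]

# Training set: 95% group A, 5% group B.
train = (sample("A", 1, 475) + sample("A", 0, 475)
         + sample("B", 1, 25) + sample("B", 0, 25))

# "Learn" a decision threshold by minimizing training error.
candidates = [t / 10 for t in range(-40, 41)]
threshold = min(candidates,
                key=lambda t: sum((x > t) != bool(y) for x, y in train))

def accuracy(data):
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

test_a = sample("A", 1, 500) + sample("A", 0, 500)
test_b = sample("B", 1, 500) + sample("B", 0, 500)
print(f"learned threshold: {threshold:.1f}")
print(f"accuracy on group A: {accuracy(test_a):.2%}")
print(f"accuracy on group B: {accuracy(test_b):.2%}")
```

The threshold settles where it serves the majority group, so group A scores noticeably higher; rebalancing the training data (“better data”, as above) closes the gap.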

From a governmental perspective, Kaevats has remarked that “Each legal framework and cultural background of people who make these things [i.e. algorithms] have certain biases, but our goals are to diminish the possibility of bias and to enforce certain inclusive growth and prosperity in the legal framework we are thinking of”.
