
Are fair algorithms a reality? Making Artificial Intelligence work better


Artificial intelligence (AI) is already part of the public discussion in Estonia. Marten Kaevats, national digital advisor of the Estonian Government, announced in September that institutions have started concrete talks on a legal framework for implementing Artificial Intelligence-based solutions.

One of the threats mentioned is that the algorithms driving AI systems may embed biases. As Kaevats underlined, “all these kinds of algorithms will be coming to our lives from various sides. Starting from the Facebook algorithm, which chooses which kind of content we have to see, to different smartphone devices that allow us to use services in a more user-friendly way. But this means that algorithms have certain biases, they decide upon the data that has been collected before on humans”.

In an ideal world, intelligent systems and their algorithms would be objective, cold and unbiased; but these systems are built by humans and shaped, as the data have shown, by the social, historical and political context in which they were created. An AI system’s learning process is based on the data it receives. The German philosopher Immanuel Kant was convinced that injustice was the state of nature: in The Metaphysical Elements of Justice, he wrote that even if we were all perfectly moral, there would still be no justice, because each of us would hold to his or her own subjective interpretation of reality.

Within the domain of self-driving vehicles, for example, we like to believe that autonomous cars will have no preference in life-or-death decisions between the driver and a random pedestrian. But again, computers are not blank pages; they make decisions depending on how they are trained. So what are the main kinds of bias, and why do they emerge?

Interaction bias

People can bias an algorithm by the way they interact, or fail to interact, with it. Algorithms are often “black boxes”, meaning that only the owner, and to a limited extent the purchaser, can see how the software makes decisions. From a governmental point of view, basic machine-learning techniques are already being used in the justice system, as in the US case of Wisconsin State v. Loomis, in which Eric Loomis was found guilty for his role in a drive-by shooting. Mr. Loomis had to answer a series of questions that were then entered into COMPAS, a risk-assessment tool developed by a privately held company. The trial judge gave Loomis a long sentence partially because of the “high risk” score the defendant received from this black-box algorithm. Although Loomis challenged the sentence because he was not allowed to assess the algorithm, the state Supreme Court ruled against him, explaining that knowledge of the algorithm’s output was a sufficient level of transparency.
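To see how such a feedback loop can arise, consider a minimal sketch in Python; all item names and numbers here are invented for illustration. A system that learns only from clicks will show popular items more often, which in turn earns those items more clicks, even when every item is equally appealing.

import random

random.seed(42)
items = ["A", "B", "C"]
clicks = {item: 1 for item in items}          # start with no real preference
true_appeal = {"A": 0.5, "B": 0.5, "C": 0.5}  # every item is equally good

for step in range(5000):
    # The system shows items in proportion to past clicks (its learned model).
    total = sum(clicks.values())
    shown = random.choices(items, weights=[clicks[i] / total for i in items])[0]
    # Users can only click on what they are shown.
    if random.random() < true_appeal[shown]:
        clicks[shown] += 1

print(clicks)  # early random clicks snowball into a skewed model

Despite identical appeal, whichever item happens to be clicked early gets shown, and therefore clicked, more and more: the users’ interactions, not the items’ quality, end up driving the algorithm’s behaviour.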

Latent bias

The algorithm might incorrectly correlate concepts and words with gender, race, sexuality, income or any other category. In job-recruiting systems, for example, algorithms tend to match the word “doctor” with men. This happens because if we type the word “nurse” into any image search bar, most of the pictures returned show women; we do not see an equal distribution of men and women.
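A toy example makes this concrete. The sketch below uses invented three-dimensional word vectors; in a real system these would come from embeddings trained on large text corpora (such as word2vec or GloVe), where similar gender skews have been measured.

import numpy as np

# Hypothetical embeddings in which occupation words have absorbed a
# gender direction from the training data.
vectors = {
    "man":    np.array([ 1.0, 0.2, 0.1]),
    "woman":  np.array([-1.0, 0.2, 0.1]),
    "doctor": np.array([ 0.7, 0.9, 0.3]),   # skewed toward "man"
    "nurse":  np.array([-0.8, 0.8, 0.3]),   # skewed toward "woman"
}

def cosine(a, b):
    """Cosine similarity: 1.0 means the same direction."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# An unbiased embedding would score these pairs roughly equally.
print("doctor ~ man:  ", cosine(vectors["doctor"], vectors["man"]))
print("doctor ~ woman:", cosine(vectors["doctor"], vectors["woman"]))
print("nurse  ~ man:  ", cosine(vectors["nurse"],  vectors["man"]))
print("nurse  ~ woman:", cosine(vectors["nurse"],  vectors["woman"]))

A recruiting system that ranks candidates by such similarities would quietly inherit the stereotypes of the text it was trained on.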

Selection bias

When the data used to train an algorithm over-represents one population, the algorithm operates better for that population at the expense of others. This was the case with the image recognition used in the AI-judged beauty contest run by Youth Laboratories: if the algorithm is trained mostly on white faces, it will be easier for white entrants to win. The answer to this problem is better data. If algorithms are shown a more diverse set of people, they will be better equipped to recognise them later and give better outputs.
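The effect is easy to reproduce. The following sketch uses entirely synthetic data and illustrative group names: it trains a simple classifier on a sample in which group A outnumbers group B nineteen to one, then measures accuracy separately for each group.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic two-class data whose decision boundary depends on the group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely represented.
Xa, ya = make_group(950, shift=0.0)   # over-represented group
Xb, yb = make_group(50, shift=3.0)    # under-represented group
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=3.0)
print("accuracy for group A:", model.score(Xa_test, ya_test))
print("accuracy for group B:", model.score(Xb_test, yb_test))

The model fits the majority group well and performs close to chance on the minority group; rebalancing the training data is exactly the “better data” remedy described above.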

From a governmental perspective, Kaevats has remarked that “Each legal framework and cultural background of people who make these things [i.e. algorithms] have certain biases, but our goals are to diminish the possibility of bias and to enforce certain inclusive growth and prosperity in the legal framework we are thinking of”.
