Estonia has started a public discussion on how to legislate for artificial intelligence (AI).
When the self-driving cars task force convened last November, one of its main objectives was to define a legal framework for putting self-driving vehicles on the streets.
Now the group of experts has acknowledged that working on traffic laws alone is not sensible, because the question of AI is much wider: its scope extends far beyond traffic law, although self-driving cars are a useful way to communicate the issue to society.
The question of AI is complex and wide-ranging. It covers financial bots making deals on the stock exchange, smart refrigerators that might buy food for you, self-driving vehicles, or a Siri that can buy flight tickets for your vacation.
The main aim of this regulation is to define AI liability in a user-friendly way, so that the average citizen can actually understand who exactly is liable in a particular incident.
The question of liability is far more complex than just pointing fingers at who is to blame after an incident. In many cases, AI is built in a way that makes its decision-making process opaque to humans. From a legal perspective, this means it is rather difficult to assign blame, because even the creators of certain algorithms do not know exactly why the algorithm made a particular decision at a particular point. These black-box algorithms are therefore rather complex from a legal point of view, and to define liability it is really important to have the public discussion now, because algorithms of this kind will be entering our lives from all sides: from the Facebook algorithm that chooses which content you see, to the smartphone features that let us use services in a more user-friendly way. This also means that algorithms carry certain biases, since they make decisions based on the data that has been collected about you. So it is important to write laws in a way that people will understand.
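To make the "black box" point concrete, here is a minimal, purely illustrative sketch (not from the article): a toy decision system built as a majority vote over many randomly generated rules. No single rule explains the outcome, so even the person who wrote the code cannot point to *the* reason a given applicant was approved or rejected — which is the attribution problem the interview describes.

```python
import random

random.seed(42)  # fixed seed so the "black box" is at least reproducible

FEATURES = 5
N_RULES = 100

# Each "rule" is a random weight vector plus a random threshold.
# The ensemble as a whole decides; no individual rule is decisive.
rules = [
    ([random.uniform(-1, 1) for _ in range(FEATURES)], random.uniform(-1, 1))
    for _ in range(N_RULES)
]

def black_box_decision(x):
    """Majority vote of all rules: True means 'approve', False 'reject'."""
    votes = sum(
        1 for weights, threshold in rules
        if sum(w * xi for w, xi in zip(weights, x)) > threshold
    )
    return votes > N_RULES / 2

# A hypothetical applicant described by five numeric features.
applicant = [0.3, -0.7, 0.9, 0.1, -0.2]
print(black_box_decision(applicant))  # a yes/no with no single traceable reason
```

Real systems (deep neural networks, large ensembles) are far more complex, but the legal difficulty is the same: the decision emerges from the aggregate, not from any inspectable rule.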
What is the state of the debate right now in the Estonian public sector on the implementation of AI solutions?
We in Estonia are looking very intensively at machine intelligence and at deploying such solutions within public administration as well. We have acknowledged that there are several ways this technology can make our systems more efficient: small cases within the domain of police work, a lot of legal work that can be done automatically, and the simplest court cases, which could be decided in an automated way. We are not currently building any such system, but we see great potential. As the Estonian e-government experience has shown, there is great potential in terms of efficiency and in making the systems cheaper.
AI is the next step for e-governance, and we are investigating the possibilities for using it; those possibilities are almost limitless. Estonia is already working on proactive services, meaning services delivered in such a way that citizens do not have to interact with the government at all. We see many places where these algorithms can help, and this is surely the next step of e-governance.
Could you give three examples of how artificial intelligence can actually improve the quality of life for citizens?
For citizens' everyday life, the legal framework we are now considering is the simplest way to give robots the right of legal representation. If you have Siri on your iPhone, you can mandate Siri to buy and sell services on your behalf; Siri can buy the flight tickets for you. Another example is the smart refrigerator. Within the domain of self-driving vehicles, while you are at work you could send your car to pick up groceries or to operate as an Uber-style taxi service. There are a lot of possibilities through which this technology can make everyday life easier, but it is also important to understand that this [Estonia] is the first country in the world to open a public discussion from a legal perspective. We are trying to experiment, to see how this framework could actually work, and to provide vital content and contributions to the global discussion on legislating for AI.
I think the need is imminent: the technology is already here, already part of our everyday lives, and many people simply don't notice it. Having a concrete case study for regulating AI, rather than only being restrictive and drawing borders around it, opens up a lot of new possibilities that our citizens and companies can benefit from.
If you combine this idea [AI] with the idea of e-Residency, for example, then the possibilities are again limitless: if you had a financial bot that operates within the Estonian legal framework while doing deals globally, that is something that could give potential investors legal certainty within this domain.
We are proposing three scenarios:
- The most radical one is giving the AI legal subjectivity of its own. Currently, legal subjectivity is divided between two kinds of entity: the natural person (the private citizen) and the legal person (the company). We are proposing the creation of a third one, the AI, but maybe this is too optimistic.
- The second proposal is a separate robotics act defining the limits and the rules within that framework.
- The third version is essentially changing what "will" means in a legal sense, and also making a separate robotics act. "Will", in Estonian law, is a very simple thing. In the case of AI, the question becomes much broader: if I give my refrigerator a mandate to buy me food, I do not specify whether I want milk, diapers, dog food, or cheese; the decision about exactly which product I would want is made by the algorithm. So the definitions of "what will is" and "what I would want" become much wider and more abstract in that case. We are still not sure which is the right way to go, but we are starting the public discussion so that the whole of society is involved. It is important to get everyone on board, because this is going to be a radical and big change to the legal framework that will affect the everyday life of every citizen.
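The refrigerator example above can be sketched in a few lines. This is a hypothetical illustration, not anything Estonia has specified: the owner's "will" is expressed only as a mandate (a budget and allowed categories), and the algorithm decides which exact products to buy. All item names and prices are invented.

```python
# Hypothetical "mandate" given to a smart fridge: the owner defines only
# the scope; the algorithm chooses the concrete purchases within it.
MANDATE = {"budget": 20.0, "categories": {"dairy", "bread"}}

# Invented shop inventory: (name, category, price).
STOCK = [
    ("milk", "dairy", 1.5),
    ("cheese", "dairy", 4.0),
    ("dog food", "pet", 6.0),
    ("bread", "bread", 2.0),
]

def shop(mandate, stock):
    """Buy every in-scope item that still fits the remaining budget."""
    basket, spent = [], 0.0
    for name, category, price in stock:
        if category in mandate["categories"] and spent + price <= mandate["budget"]:
            basket.append(name)
            spent += price
    return basket, spent

basket, spent = shop(MANDATE, STOCK)
print(basket, spent)  # → ['milk', 'cheese', 'bread'] 7.5
```

The legal question the interview raises is exactly this gap: the owner never willed "milk" or "cheese" individually, only the mandate, yet the purchases are made on the owner's behalf.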
From a technical point of view, the liability issue is easy. Whether it is the producer's liability or a human mistake, it is always possible to point fingers, and it is also possible to set up some kind of insurance scheme or to establish state responsibility within the system. That decision is up to the public discussion now under way. The most difficult part of the liability issue, however, is the emotional aspect. Take self-driving vehicles: if my child were killed by a self-driving vehicle, I would want to see who is to blame and who goes to prison. The most difficult thing society has to discuss is that, in these instances, sometimes there may be nobody to blame. We have to acknowledge this. A present-day analogy is a train accident: when someone is walking on the railway, the train has speed and inertia, so it cannot stop. A self-driving vehicle also has speed and inertia, and if a reindeer runs out of the forest at full speed, an incident can still happen. We have to go through this discussion emotionally, so that everybody understands what will happen in these cases.
How could we deal in the future with bias created by already existing data?
It is important to understand the scope of this legal proposal. We are not working with ideas of superintelligence; we are working on narrow, general-purpose AI. The current problem surrounding these AI issues is that algorithms have biases built into them. During the Trump and Brexit campaigns, the Facebook algorithms essentially created bubbles of different ideologies: this is an issue, and it is the biggest problem for AI right now. How to diminish bias is a very difficult question. I am pretty sure it is not fully possible: every legal framework, and the cultural background of the people who build these systems, carries certain biases. But our goals are to diminish the possibility of bias and to promote certain [concepts of] inclusive growth and prosperity.
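The mechanism behind bias from existing data can be shown with a deliberately crude sketch (invented data, not from the interview): a "model" that learns hiring rates from historical records. Because the history approved group A and rejected group B regardless of skill, the learned rule reproduces that pattern exactly.

```python
from collections import defaultdict

# Invented "historical" hiring records: (group, skill, hired).
# Group A was always hired, group B never, irrespective of skill.
history = [
    ("A", "high", True), ("A", "high", True), ("A", "low", True),
    ("B", "high", False), ("B", "high", False), ("B", "low", False),
]

# "Training": record the hiring rate per group. Note that skill is
# never even consulted; the past outcomes alone drive the model.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, skill, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict(group):
    """Hire if the historical hiring rate for this group exceeds 50%."""
    hired, total = counts[group]
    return hired / total > 0.5

print(predict("A"), predict("B"))  # → True False: the bias is learned, not removed
```

Real machine-learning pipelines are far more sophisticated, but the principle scales: a model optimized to match past decisions will faithfully encode whatever bias those decisions contained, which is why the interviewee argues bias can be diminished but probably never fully eliminated.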