Governments in Europe and beyond are interested in deploying artificial intelligence. However, uptake of AI in the public sector is likely to be more gradual, and it will require rules and regulations that have not constrained AI’s rapid recent adoption in the private sector.
At least these were the take-home messages I came away with following a panel discussion at the Tallinn Digital Summit. The session, entitled “AI’s Impact on the Policy Making and State Governance: Future Challenges and Opportunities,” was one of the final panels of the one-day summit held at Tallinn’s Kultuurikatel, or Creative Hub.
The summit featured talks by some rather impressive figures, including sitting prime ministers, former presidents, high-ranking EU officials, and representatives of digitisation efforts from Japan and Ukraine to North Macedonia and Cabo Verde. The speakers on the AI panel included Keith Strier, Nvidia’s vice president for worldwide AI initiatives; Benjamin Brake, director general of the German Federal Ministry for Digital and Transport; and Elsa Pilichowski, director of the OECD’s directorate for public governance. Florian Marcus, a project manager at the Estonian IT company Proud Engineers, moderated the panel.
73% of countries deploy AI
According to Brake, governments have to contend with public scepticism around innovation, so efforts to introduce AI-supported services in the public sector are hindered by people’s attitudes toward new technologies, as well as by uncertainty about what these technologies actually do.
Larger, federated states such as Germany nevertheless believe they need to use AI to continue offering services to their citizens, Brake said. He noted that German delegations have been the most frequent visitors to the e-Estonia Briefing Centre to see how smaller, innovative states have begun to integrate AI into their public services, and he argued that governments need to push ahead with their plans. “Most governments need to become more resilient in testing and sandboxing these new technologies,” Brake said during the discussion.
He noted that the German Federal Ministry for Digital and Transport has already rolled out some initiatives around AI; for example, the ministry announced an €8.3 million investment in intelligent mobility and logistics projects last year.

Elsa Pilichowski of the Organisation for Economic Co-operation and Development said that despite public scepticism toward AI, governments are slowly adopting it in pilot projects. Seventy-three per cent of the countries that report to her directorate have started using AI in their services, she said, to improve service delivery based on user experiences. But integrating AI more deeply into governance will require data ethics and data management rules to be in place, “because we know that AI can lead to biased decisions,” Pilichowski remarked. Governments also need to strive to include data from citizens who are less present in the digital world, such as pensioners, she noted, and should make investments to do so.
“It’s very important for governments to put these conditions in place so that when AI is put in place, we don’t have situations where failures could lead to less trust in government,” said Pilichowski. “Government is not a private company,” she added.
To regulate or not to regulate?
Brake pointed out that the OECD issued its AI Principles back in May 2019 and said the principles could be used to inform the deployment of AI-supported public services. Still, he said it would take “a bit of courage” to implement such services, and that decisions on where to apply AI solutions in state services should be made carefully. Initial AI services, he maintained, could handle automatic decision-making based on standard protocols, freeing public employees to focus on more complicated cases.
“We’re doing this for many reasons, but first of all, we can’t afford not to do so because there will be a lack of employees,” said Brake. “We need to use AI for automatic decision-making.”
For his part, Nvidia’s Strier offered a historical perspective on AI adoption. He agreed with the other panellists that the deployment of AI in governance is still “extremely limited,” with a few early-adopter countries testing the technology’s potential. However, he noted that the current dialogue around AI is barely a decade old, and that countries only began formulating AI strategies after the World Economic Forum published its Fourth Industrial Revolution white paper.
“If you go back to 2017, there wasn’t a single country in the world with an AI strategy, and presidents and prime ministers were not discussing this topic,” Strier said. “So I think that within five years, we have made progress in terms of collaboration and policymaking and towards regulation.” Strier compared AI to the regulation of the automotive industry, pointing out that from the invention of the automobile in the 1880s, it took about 80 years before manufacturers were obliged to fit safety belts in their cars, and a century before governments began mandating their use.
Brake also noted that any regulations formulated at the moment would likely have to evolve as AI continues to develop. “We are talking about a bunch of questions to which we might not have the answer,” he said. Policymakers need to discuss what regulation is needed and whether efforts at the EU level to regulate AI, such as the proposed AI Act, are the right approach.
Nothing is stopping AI
There is also the question of how to address failures of AI implementation. Florian Marcus noted that Australia has tried multiple times to roll out a national electronic identity card, and that the projects have failed each time. “People are not ready to forgive the government,” he said. “If there is a government hack, too many politicians will be tempted to say, you know what, scrap the whole thing,” said Marcus. “There is a lot of scepticism regarding technology and perhaps an unrealistic expectation of perfection toward technologies.”
Strier agreed. He noted that accidents involving autonomous vehicles are often highlighted in the media, while people accept car accidents by human drivers as a part of life. “About 1.3 million people die each year in car accidents. That has become a standard, accepted statistic.”
But here, private sector adoption of AI might help to eventually drive public acceptance of the technology, according to the OECD’s Pilichowski. “If the rest of society is moving with AI, citizens will have a requirement that the public sector uses AI to improve services where AI can be useful,” Pilichowski said. She noted that governments have to put all the necessary safeguards around AI going forward, but that there is no stopping AI’s eventual adoption.
“It’s inconceivable that we can continue to deliver services without using AI in the future,” she said. “How we deliver these public services is crucial for trust.”
If you missed the event, you can watch all the Digital Summit’s panels on YouTube.