
Estonia and automated decision-making: challenges for public administration


Estonia is celebrated as a global pioneer in digital governance, having spearheaded transformative initiatives such as e-residency, secure data exchange, and AI-driven public services. The country is now pushing further with automated decision-making (ADM) systems. These AI-powered tools promise to streamline administrative processes and enhance public-sector efficiency. Yet as Estonia embraces AI-driven governance, a critical question emerges: can automation uphold fairness, transparency, and accountability in public decision-making?

The transformation of public administration in Estonia

ADM systems are at the forefront of Estonia’s administrative evolution. Notable examples include:

Bürokratt, a network of virtual assistants developed by the Ministry of Economic Affairs and Communications to improve communication between citizens and the public sector.

OTT, a system that optimises the processing of unemployment insurance applications.

ABC Gates, which use biometric data to automate border and passport control.

While these advancements showcase the efficiency of ADM systems, concerns persist about their transparency, impartiality, and adherence to ethical standards.

Risks and challenges: algorithmic bias and transparency

One of the foremost risks in ADM systems is algorithmic bias, where AI models unintentionally perpetuate discrimination. In public administration, biased algorithms can result in inequities such as unequal access to social benefits or discriminatory hiring practices. To address this:

– Estonia is experimenting with fairness-aware AI models, algorithmic audits, and data quality checks.

– However, ensuring fairness demands continuous oversight and policy adaptation.

Issues such as how citizens can appeal automated decisions and who oversees algorithmic processes remain unresolved.
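In practice, the algorithmic audits mentioned above often start with simple statistical checks. The sketch below illustrates one such check, demographic parity, which compares positive-decision rates across groups; the data, group labels, and audit threshold are entirely hypothetical and do not describe any system Estonia actually operates.

```python
# Minimal sketch of one check an algorithmic audit might run:
# demographic parity, i.e. comparing approval rates across groups.
# All data and the 0.2 threshold are hypothetical, for illustration only.

def approval_rate(decisions, groups, label):
    """Share of positive decisions for applicants in the given group."""
    subset = [d for d, g in zip(decisions, groups) if g == label]
    return sum(subset) / len(subset)

def demographic_parity_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = [approval_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Hypothetical ADM outputs: 1 = benefit granted, 0 = denied.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
if gap > 0.2:  # audit threshold, an assumed policy choice
    print(f"Flag for human review: parity gap {gap:.2f}")
```

A real audit would test many such metrics (equalised odds, calibration, and so on) over far larger samples, and, as the article notes, would still require continuous oversight rather than a one-off check.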

Cases such as the Dutch childcare benefits scandal, known as the “toeslagenaffaire,” highlight the grave consequences of unchecked algorithms. A self-learning system used by the Dutch tax authority wrongfully penalised thousands of people, predominantly from marginalised groups, driving many families into severe financial hardship; some even lost custody of their children. The scandal underscores the need for safeguards against the potential harm of automated systems and for stronger regulation to protect citizens’ rights.[1]

Similarly, in December 2024, the UK’s Department for Work and Pensions (DWP) uncovered biases in its welfare fraud detection AI, which disproportionately targeted individuals based on age, disability, marital status, and nationality. Despite previous assurances of non-discrimination, these findings drew criticism of a “hurt first, fix later” approach. The DWP maintains that human caseworkers make the final decisions, but the incident has intensified calls for greater transparency in governmental AI applications.[2] Both cases show why safeguards, transparency, and robust regulation matter: as the EU Artificial Intelligence Act outlines, transparency and accountability should be fundamental principles for high-risk ADM systems.

Moreover, algorithmic transparency — the ability to understand and explain AI decisions — remains a pressing challenge and a significant risk. Many AI models operate as “black boxes,” meaning that even their developers cannot fully explain their decision-making process. This is particularly problematic in the public sector, where decisions about social benefits, taxation, or law enforcement must be legally justified and contestable. For instance, Amnesty International has criticised a French social security algorithm for disproportionately targeting marginalised groups, leading to unfair treatment in the distribution of social security benefits, and has called for its immediate suspension.[3]
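To make the contrast with “black boxes” concrete, the sketch below shows what a contestable decision could look like: a deliberately simple scoring model whose output decomposes into per-feature contributions that an official could cite, and a citizen could dispute. The features, weights, and threshold are all invented for illustration and do not reflect any real benefits system.

```python
# Sketch of an "explainable by construction" decision: a linear score
# whose result decomposes into named per-feature contributions.
# Features, weights, and the threshold are hypothetical.

WEIGHTS = {"income": -0.4, "dependants": 0.9, "months_unemployed": 0.7}
THRESHOLD = 1.0  # score at or above which a benefit is granted (assumed)

def explain_decision(applicant):
    """Return (granted, score, per-feature contributions) for one case."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    return score >= THRESHOLD, score, contributions

applicant = {"income": 2.0, "dependants": 2, "months_unemployed": 3}
granted, score, parts = explain_decision(applicant)

print("granted" if granted else "refused", f"(score {score:.2f})")
for feature, contribution in parts.items():
    print(f"  {feature}: {contribution:+.2f}")  # each factor's weight in the outcome
```

A deep neural network offers no such decomposition out of the box, which is precisely why legally contestable domains often favour simpler, inherently interpretable models or require post-hoc explanation tooling.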

Regulatory alignment: the role of GDPR and the EU AI Act

Estonia’s approach to ADM systems must align with European regulatory frameworks, particularly the General Data Protection Regulation (GDPR). Key principles outlined by the GDPR include:

– Transparency

– Accountability

– Human oversight

A recent European Parliamentary Research Service analysis emphasised that adherence to these principles is vital to prevent data misuse and protect citizens’ rights.[4] The EU AI Act also underscores the importance of transparency and accountability for high-risk ADM systems. Non-compliance could erode public trust in Estonia’s AI governance model.

A vision for ethical AI governance

The future of AI-driven governance isn’t just about efficiency; it’s about trust. If Estonia successfully balances automation with ethical safeguards, it could become a global model for responsible AI governance. However, the risks could outweigh the benefits if biases go unchecked and transparency remains an afterthought. Estonia’s digital leadership inspires the world, but it must now prove that efficiency does not come at the expense of justice. Otherwise, ADM systems will not be the tools of progress they promise to be but rather barriers to the public they are meant to serve.

As Estonia continues to lead in digital transformation, its ADM experience provides valuable lessons for other countries looking to integrate AI into public administration. Estonia can balance technological progress with ethical AI governance by maintaining a focus on transparency, accountability, and inclusivity, ensuring that ADM systems serve the public interest while upholding legal and human rights standards.

Learning for the future: collaborative projects and conferences

Several collaborative initiatives are putting these lessons into practice. For example:

EquiTech, a European Commission-funded project (2024–2026) led by TalTech Law School, aims to strengthen responses to discrimination and bias in ADM systems. Key collaborators include Tallinn University of Technology, the Gender Equality and Equal Treatment Commissioner’s Office of Estonia, the Office of the Equal Opportunities Ombudsperson of Lithuania, the Ministry of Economic Affairs and Communications of Estonia, and the Ministry of Justice of Estonia.

The FutureLaw Conference, Northern Europe’s largest legal innovation conference, will take place on May 29–30, 2025, in Tallinn, Estonia. Hosted by LEGID in partnership with TalTech Law School, the event will feature 50+ world-class speakers, 25+ hours of sessions, and eight workshops on topics such as AI, automation, and regulatory compliance.

A significant focus will be artificial intelligence’s transformative impact on justice and regulation. Discussions will feature a challenging dialogue with supreme court judges and legal practitioners, covering ethical considerations, potential biases, and the implications for access to justice and regulatory compliance.

The conference will also examine how organisations can effectively manage key risks in the digital era, focusing on leadership, budget constraints, legal and compliance strategy, and navigating organisational politics.

Key speakers include:

– Brian Liu, Founder of LegalZoom

– Tara L. Waters, Project Lead of Vals Legal AI Report

– Kaisa Kromhof, CEO & Founder of Ment

– Pēteris Zilgalvis, Judge at the General Court of the European Union

– Valerie Saintot, Former Head of Legislative Division at the European Central Bank

– Max Junestrand, CEO & Co-Founder of Legora (formerly Leya)

– Brian W Tang, Founding Executive Director of LITE Lab at Hong Kong University

Additionally, prominent Estonian figures such as Villu Kõve, Tanel Kerikmäe, Imbi Jürgen, Pekka Puolakka, Carri Ginter and Astrid Asi will participate. For more details and registration, visit the official website: https://futurelaw.ee.

This article was written by Tanel Kerikmäe and co-authored by Valentin Feklistov.


[1] Politico. Dutch scandal serves as a warning for Europe over risks of using algorithms. Retrieved from https://www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

[2] The Guardian. (2024, December 6). Revealed: bias found in AI system used to detect UK benefits fraud. Retrieved from https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits

[3] Amnesty International. (2024, October). France: Discriminatory algorithm used by the Social Security Agency must be stopped. Retrieved from https://www.amnesty.org/en/latest/news/2024/10/france-discriminatory-algorithm-used-by-the-social-security-agency-must-be-stopped/

[4] European Parliament. (2025). Think Tank Report: EPRS_ATA(2025)769509. Retrieved from https://www.europarl.europa.eu/thinktank/en/document/EPRS_ATA(2025)769509
