The Tallinn Digital Summit is an annual congress of world leaders, IT ministers, experts, and tech communities from digital-minded nations who gather to discuss technology and digital transformation.
Held on October 15–16 in Tallinn, the capital of Estonia, this year’s summit focused on the potential and challenges of artificial intelligence (AI) in government, the economy, and society. It brought together influential figures and delegates from around 20 countries, including keynote speakers from Google and OpenAI and representatives of think tanks such as the McKinsey Global Institute (MGI), the Lisbon Council, the Center for Public Impact (CPI), Boston Consulting Group (BCG), and the European Center for International Political Economy (ECIPE).
The Promise and Ambivalence of AI
“Companies, sectors, and even economies that make bold investments in AI will likely see significant gains, while those slow to adopt risk falling behind.” — James Manyika, McKinsey Global Institute
Our mobile devices alone pack a collection of applications that harness the capabilities of AI, such as facial recognition, language translation, and assistants like Siri and Google Assistant.
On a larger scale, for companies, industries, and even economies, early adopters are already reaping the benefits through increased productivity and the ability to capitalize on new innovations. According to ECIPE’s briefing notes on AI & Trade Policy, for instance, data-driven commerce exceeded US$2 trillion in 2017, growing at a rate of 25%.
AI also takes us closer to conquering societal problems like climate change and life-threatening diseases. On the other hand, it remains flawed and prone to errors of judgement, may act unpredictably, and can be manipulated in ways that could pose a danger to humanity.
Laying the Foundation for the Future
“Governments — and governance — need to transition to an adaptive and resilient architecture. Something that grows in tune with the rest of the society” — Jüri Ratas, Prime Minister of Estonia
Given the potential of AI, policymakers and business leaders must recognize and embrace the challenges it presents by emphasizing responsible use and legislating measures to facilitate adoption, improve AI-readiness, and manage negative disruptive effects.
These concerns range from privacy and cybersecurity threats to the profound impact of automation on work. According to MGI, while 50% of work tasks will be supported or completed by AI, new jobs and demand for AI-relevant competencies will rise over the next decade. Its research estimates that 15% of the global workforce will be displaced by automation between 2016 and 2030, but that the same shift could create up to 890 million new jobs, or 33% of total jobs, offsetting the loss.
Some initiatives are already in place, such as the European Union’s General Data Protection Regulation, which lays the foundation for the storage, processing, and exchange of personal data. In Estonia, the proposed Kratt Law, often described as an AI law, opens the discussion on the unintended consequences of AI and on who bears legal liability when it fails.
Making AI Safe, Secure, and Ethical for People
“AI is a strategic capability. Apply it ‘for’ the people, not to the people. We must organize data right, create value for citizens, earn public trust” — Kok Ping Soon, GovTech Singapore
Engagement with citizens helps establish confidence and dispel fears. Most best practices point to placing the needs and voices of end-users at the center of development and decision-making.
Strict scrutiny by public sector professionals, technology experts, and institutions can help ensure the soundness of any AI system, especially on matters with moral and ethical implications. In CPI and BCG research on AI in government, moral and ethical issues top citizens’ concerns, followed by lack of transparency.
The way to create safe, secure, and ethical AI is to support the development and sharing of techniques and practices, a point emphasized in the Lisbon Council’s report and reflected in the spirit of the Tallinn Digital Summit.