
Is superintelligence a threat to human decision-making?


Even though the current level of artificial intelligence does not pose any real threat to humankind, we need to prepare ourselves for the time when superintelligence takes decision-making away from us, says engineer, investor and futurist Jaan Tallinn. One of the founding engineers of Skype and the legendary file-sharing program Kazaa talks about the existential threat of superintelligence, the blockchain boom and its real prospects, and the business opportunities behind it.

I feel that there was a sort of explosion a couple of years ago, after which the whole topic of artificial intelligence (AI) suddenly entered a wider audience’s consciousness. All of a sudden we had Siri and Amazon’s Alexa, and we started talking about self-driving cars. Jaan Tallinn, how did that happen?

There were two different explosions: one of AI itself and another of awareness of AI’s risks. I believe the latter had a lot to do with the public statements of Elon Musk and Stephen Hawking. The former, most importantly, was the deep learning revolution: all of a sudden we had graphics processing units that turned out to be very well suited to deep learning.


Can you explain what AI actually means and how it will affect our everyday lives?

The problem with AI is that it means so many different things to different people. Often it makes more sense to talk about it without ever mentioning the term itself. There is a huge difference between talking about AI that is less intelligent than humans and a superintelligence that outsmarts humans. Then there is narrow AI, which is smarter than us, but only in one narrow field. Some AI practitioners never look further than what already exists.


We still haven’t reached superintelligent AI. When could that happen?

It depends. As I said, in some narrow fields it already exists, for example in certain games. But that is not generally superintelligent AI. One of the most impressive pieces of news last year was the creation of AlphaZero, which is able to learn to play different games on its own and defeated the world-champion programs after a very short training time.


Let’s talk about the threats of AI. It is a field where science fiction meets real life, and as a result a lot of myths are created. What should we really worry about and prepare for?

If we talk about short-term threats, one of the most important questions is what will happen to democracy. We have a situation where a small group of actors can manipulate the majority as they please. It is a huge issue. Having said that, personally I am more involved with long-term problems.

For the last 100,000 years we have lived in an environment where the main force defining the future of the living world was humankind. As a result, we have developed lots of habits and intuitions that we are stuck with. But once we reach superintelligence, it will no longer be humans who are in control. Hopefully, we are still far away from that point. We need to make it the goal of AI to sustain the narrow range of environmental parameters that enable continued biological life on the planet, because by default that would not be in AI’s interest. It is not probable that the existence of an atmosphere would suit the needs of an AI.

There are two tasks that an AI would plausibly pursue: first, to maximize its computing power; second, to colonize the galaxy, because that is where the majority of computing resources are. If the AI’s goal is to get access to the rest of the universe as quickly and easily as possible, it would make sense for it to manipulate the environment of the Earth in a way that makes the planet uninhabitable for all biological life. It might sound absurd and like science fiction, but if you go through the steps of the argument one by one, it is difficult to counter.


Does this mean that we are trying to build AI to assist us, but in fact it may destroy the physical environment we need to live?

Yes, at least we can’t rule it out. Of course, it makes sense to develop AI that helps us, and we need to act so that it will be useful for us. The question is: what happens when our goals and the goals of a superintelligence do not align?


There are so many stakeholders in AI. For one, states such as the US, China or Russia; then there are giant corporations like Amazon or Google; and finally non-state actors, even groups like ISIS. What can be done to counter these existential threats?

This is exactly the essence of the discussions now going on in the AI safety community. But there are no answers yet. A friend counted that currently there are nine people with a PhD employed full time to work on AI safety. That is not a lot. We certainly need more top scientists dealing with an issue on which the future of humankind might depend.


What are the possible solutions? It is hard to see how any regulation or agreement would help. It only takes one bad actor, a kind of Blofeld from a James Bond movie…

My approach is kind of on a meta-level. I am trying to grow and support the ecosystem of organizations that tackle existential risks. Currently, I support around ten different organizations that take different approaches to minimizing long-term risks. They also sort of absorb people who are interested in the topic and want to contribute.


Seems like an “end of the world” talk we’re having.

I don’t want to say that such an end is inevitable. My good colleague Max Tegmark, with whom I established the Future of Life Institute, compares it with buying home insurance. When you buy a house, it makes sense to also sign up for insurance, even though you don’t expect the house to burn down. As long as you can’t rule it out completely, it is responsible to prepare.


What you are saying is that we have reached a critical stage in the development of civilization.

Potentially, yes. But there is also a chance that nothing will happen: another century will pass and one generation will follow another, just as they have for thousands of years. The thing is that we can’t be sure of that anymore.


At the same time, there is huge economic and business potential in AI.

I invested in Amazon some years ago, before they announced their AWS (Amazon Web Services, cloud computing) business. When they eventually published the revenue numbers from that part of the business, the stock price jumped, because there is such strong demand for extra computing power. There is a lot of financial value in technology that helps grow computing power. But this is short-term talk again. More interesting is what will happen in the long term. Someone once said that an AI that can rearrange atoms at will is not going to sit there and wait for you to buy things from it. The moment superintelligent AI emerges, the human economy will end. I don’t see a reason why it should continue.


You are an active investor yourself. Not long ago an AI startup you had invested in, Geometric Intelligence, was bought by Uber. The AI startup scene must be overcrowded?

There are many more startups claiming to be in AI than there are reasonable ideas. AI is a buzzword. Most of the companies that say they are in AI don’t, in reality, need AI at all. All they need are some simple statistical methods, such as linear regression. Virtually all companies that use or develop AI commercially deal with it in an applied way, not a fundamental one: they take fundamental AI research and adapt it to solve one specific problem. In the context of such AI, I am not worried about safety at all. Applied AI is always one or two generations behind the fundamental research. If we know that there is no big mess yet on the far frontier, it makes sense to presume that AI based on technology a generation or two behind doesn’t pose a threat either.


The military must be a huge exception?

That is true. The Future of Life Institute published the Slaughterbots video some months ago. It demonstrates how cheap, highly autonomous and very small killer machines could be developed. As a result, nation-states would no longer have a monopoly on power; a lot of players would be able to buy cheap, small killer drones. It creates a problem similar to what we see now in cyber defence: a huge attack will occur, bringing a lot of casualties, and we won’t know who was behind it, just as we still don’t know who was behind the cyber-attacks that took down British hospitals. States need to give a lot of thought to avoiding such a scenario, as well as to developing a defence system, as a priority. It is an incredibly difficult riddle to solve.


You said that there are far too many companies that merely claim to use AI. What advice would you give investors to help them understand which businesses are bluffing and which are not?

They use AI only nominally. Investors should learn the difference between simple statistics and deep learning. If they can do that, there is a good chance they can also evaluate why a company needs AI in the first place. There is no other general recipe. In a way, it is a race between two counterparts, because the interests of investors and those of the companies clearly do not match. Investors need to separate the wheat from the chaff; the companies, on the other hand, want to appear more like the wheat.
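
To illustrate the distinction: the “simple statistics” that many self-described AI companies actually need fits in a few lines of code. A minimal sketch, using made-up data and NumPy’s ordinary least-squares solver, with no deep learning anywhere in sight:

```python
# Linear regression, the "simple statistics" baseline: fit y = a*x + b
# to synthetic data with NumPy's closed-form least-squares solver.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)             # made-up feature
y = 3.0 * x + 2.0 + rng.normal(0, 1, 100)    # made-up target with noise

# Design matrix with an intercept column.
A = np.column_stack([x, np.ones_like(x)])
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"slope={slope:.2f}, intercept={intercept:.2f}")
# Recovers roughly slope=3.00, intercept=2.00 -- no GPUs required.
```

If a pitch deck’s “AI” reduces to something like this, that is Tallinn’s point: the product may still be useful, but the AI label is marketing.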


Several years ago you predicted that the market cap of cryptocurrencies would reach a trillion dollars. It was well on its way to that milestone last year before falling back again. Leaving market value aside, what are the prospects of blockchain?

It reached somewhere around 800 billion USD last year; I predicted it would reach a trillion by 2019, and I still think it can. The thing is that there is a lot of crap among the gems in cryptocurrency. The comparison with the dotcom boom around the year 2000 is completely fair: at that time, whenever a business mentioned a web portal or something similar in a press release, its stock went up. Blockchain has now similarly been turned into a marketing tool. But just as businesses eventually found good uses for the internet, I am sure they will sooner or later do the same with blockchain.


Correct me if I’m wrong, but Bitcoin was created on a wave of distrust: distrust of national currencies, state institutions, banks and corporations. Bitcoin was supposed to be bulletproof against trust issues. Is it?

[Bitcoin’s creator] Satoshi Nakamoto embedded a newspaper headline about the bailout of banks in the genesis block of Bitcoin. But the fact that he released Bitcoin at exactly that time is a coincidence, because he had surely been developing it for some time before the financial crash. Even if the crash hadn’t happened, he would have released it. It was a computer science innovation rather than an act of protest.
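
The headline in question is permanently readable from the Bitcoin blockchain itself. A minimal sketch that decodes the well-known text portion of the genesis block’s coinbase script:

```python
# The genesis block's coinbase script embeds a headline from The Times
# of 3 January 2009; below is the text portion of that script in hex.
GENESIS_TEXT_HEX = (
    "5468652054696d65732030332f4a616e2f32303039"
    "204368616e63656c6c6f72206f6e206272696e6b206f66"
    "207365636f6e64206261696c6f757420666f722062616e6b73"
)

print(bytes.fromhex(GENESIS_TEXT_HEX).decode("ascii"))
# -> The Times 03/Jan/2009 Chancellor on brink of second bailout for banks
```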


The issue of trust and technology is really interesting. You said that the greatest short-term risk is to democracy, which also relates directly to trust. Is it because of technology that democracy has become so fragile?

I’m not sure if technology has made democracy fragile, but it definitely has changed the context in which democracy functions.


Isn’t the real underlying problem the growing knowledge gap? The number of people who understand the basics of how critically important new technology works is small, and the gap seems to be growing. People don’t understand AI, they don’t understand blockchain, but they still go along with the investment rush.

Welcome to the world of exponential growth. If you look at the history of world GDP over hundreds of years, you see that for a long time nothing happens, and then suddenly everything happens. In many spheres growth is accelerating, and it will become increasingly difficult for individuals to keep up with the pace.
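
That “nothing happens, then everything happens” shape falls straight out of compounding. A minimal sketch with an illustrative, made-up constant growth rate of 2% per year:

```python
# Why exponential curves look flat for centuries and then explode:
# constant 2% annual growth, sampled every hundred years.
growth_rate = 0.02
level = 1.0  # arbitrary starting level

for year in range(0, 501, 100):
    print(f"year {year:3d}: level {level:10.1f}")
    level *= (1 + growth_rate) ** 100  # advance one century

# Prints roughly 1, 7, 52, 380, 2755, 19956: the final century adds
# far more than all the previous centuries combined.
```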
