Tackling humanitarian crises with AI: interview with Dr Julien Cornebise


The third Tallinn Digital Summit was recently held in Tallinn with a focus on AI for public value. To mark the occasion we spoke to Dr Julien Cornebise about the hype around AI, but also about all the good it could be used for with the right incentives. He is an award-winning scientist who has worked with Amnesty International and was an early employee at DeepMind. As of 1 October 2019, Julien has moved on from Element AI following a reorganisation, and he is now setting up his own separate organisation to further this philosophy, while also serving as an Honorary Associate Professor at University College London.

At Element AI you work with various external stakeholders to solve humanitarian issues – has any government so far approached you as well to tackle similar problems?

We have a government team within Element AI and we’ve had some requests, but with every contact we get – whether it’s from NGOs, agencies or governments – we make very sure to help the people who reach out separate the hype from reality, because there is a huge amount of hype around AI. In some cases we’ve said that something is not feasible now, but might be after a few more years of research. More generally, yes, we want to work with governments on AI for good, because the sustainable development goals are not just for NGOs. If we leave them only to NGOs, we’ll never reach them, because sadly NGOs are direly underfunded.

So, how do you solve a humanitarian crisis with AI?

You don’t solve it with AI, that’s the trick! You solve it with people. AI is only here to magnify the effect and the intent of the people. No technology is a magical solution – except maybe vaccines. Kentaro Toyama used to say that technology is only a magnifier of human intent, not the solution itself. It’s exactly that with AI – you can use it for terrible surveillance, or you can use the exact same tool to detect tear gas canisters used against civilians in online videos. It’s the exact same tool, but you put it in the hands of people with the right motivation. That’s what we do: we find partners who work towards solving humanitarian problems or any of the SDGs, and we work with them to find out where we can help scale up – whether that’s working with satellite imagery, finding certain types of weapons in big video archives, or analysing Twitter abuse against women. These are all projects we have taken on, where we used machine learning simply to magnify. AI is like an exoskeleton for humans or, as I think Steve Jobs used to say, a bicycle for the mind. The next step is to move past scaling projects linearly with the number of people you put on them. When you look at how tech has an impact at scale, it’s through a product or a tool that you reach a hundred times more people. So that’s what we’re trying to do with our partners – to help develop a tool that is usable for all their needs, so we can take ourselves out of the loop as much as possible. This will take time, of course, but this is what we’re aiming for.

Within the context of the UN SDGs, which of them in your opinion would be the easiest and quickest to achieve – the lowest hanging fruit so to speak – through applying AI?

It’s exactly like asking which SDG you can solve with electricity. It’s just a tool that you can apply in many different places. Some applications are easier than others, but none of the SDGs are quick or easy wins, whatever tool you throw at them. They are massively difficult, and as an individual I’m scared of how little progress there seems to be on many of them while the clock is ticking. Maybe with AI we can accelerate, and if we do that I would already be happy with our contribution, but it’s going to take much more than AI. If you look at AI right now in terms of low-hanging fruit, it’s computer vision: analysing images and videos with algorithms. That’s why you see face recognition appearing everywhere. The SDGs operate at scale – they touch the whole world – so where can we apply computer vision at the planet’s scale? Satellite imagery, remote observation, Earth observation. That’s where we’re putting a lot of effort: helping NGOs, agencies and governments use AI to make sense of satellite imagery in line with the SDGs. That can align with climate action, zero hunger or even peace – for example, detecting mass graves as soon as they appear, before they spread across the whole region. It can also serve gender equality, if you can identify online memes that are offensive to women and targeted at them. These are all low-hanging fruits that apply to multiple SDGs. After that comes natural language processing – it’s not yet at the level of computer vision. It’s hard, and a lot of what’s out there is not really as good as it appears in the newspapers. But real progress is being made: 2018 brought quite a few real algorithmic innovations, and things are certainly working that didn’t before. In our work we focus about 70% on computer vision and 30% on natural language processing. The nice part is that both are underpinned by some similar problems and some similar solutions.

What are the biggest challenges in data accessibility related to achieving the UN SDGs?

Data access – I see two big problems there. Let’s make it three; actually, if we talk longer, I could get to twenty. The first one is that the data right now is not necessarily where we need it. There’s a huge concentration of data in the hands of massive tech actors, and while some of them might have started with good intentions, it still makes it hard to leverage that data for public good. So that’s one challenge: the know-how in collecting and storing data does not necessarily sit with bodies whose incentives are to solve the SDGs. This might be solved with some partnerships, or with the second point, which is data licenses. Right now, if I want to do any project as a for-profit – and Element AI is for-profit – I must be extremely careful about the terms of the data’s license. For example, some of the things I did as a volunteer for Amnesty International I would not be able to do at Element AI. This penalizes those who are trying to do right, because none of the existing licenses – or very few of them – are clear about whether you can train AI on the data or not. They say no derivative work, but is training a neural network a derivative work? It’s not just about adding something to a picture, it’s about combining it with tons of other pictures. The licenses are very ambiguous about that; they don’t say anything. As someone trying to do the right thing, we’re going to err on the side of caution and not use the image, while someone less scrupulous, with other incentives, might just use it.

There is the Montreal Data License – that was also the first time we had lawyers presenting something at a machine learning conference – and it’s open source. You go to the site, answer a few questions about your dataset and the uses you want, and it generates license terms for your data. The more people use it, the better, because suddenly we’ll all be better able to know whether we can use a dataset or not. So that’s one legal challenge around terms and conditions: there are things that we can technically do, but cannot do because of the licenses and because of who owns the intellectual property of the resulting models.

Finally, the third point is that with data you provide a lot of information and potentially a lot of control. Say that for one SDG we want to collect tons of information on where every village in Darfur is situated. That’s the work we did with Amnesty International – trying to find the villages in Darfur, because there’s no map, in order to monitor them and be able to report on their destruction. The exact same data could also be used for destroying the villages. This is the most clear-cut example, but take the fight against terrorism, where we are told we need face recognition. London, for example, is now starting to deploy face recognition in public places like King’s Cross to fight terrorism. Great, but there are other uses there as well. Same for monitoring our internet access: we want to prevent child pornography, so we’d like every adult website to verify age with ID. It’s a slippery slope from there – balancing the data collection with its use. That makes us very careful about who we work with and how much control there is. The thing is that even with the best contracts and the best intentions in mind, you can always have a nefarious actor who hacks the data. Suddenly, oops! It’s often not a matter of whether the data will leak, but WHEN it will leak – as many of the big leaks have proven, even for corporations, and with very little consequence for these companies.

You recently participated in ESA’s phi-week, can you tell me more about the problems/solutions related to space that we are currently ready to tackle through AI?

There are many, but being able to observe our Earth from the sky and analyse it at scale is mind-blowing. In the past the examples have been along the lines of “look, we can count the cars in a parking lot” and estimate the local economy based on that. Or you get hedge fund managers who can be very excited at the prospect of counting the number of containers in a harbour to estimate the economic activity of the port. That’s good, but not the most exciting. What is exciting is when you start to put all of this together – now you’ve got a complete picture of the local economy.
I mentioned monitoring humanitarian crises before they develop. A lot of the data-sharing models from satellite companies are about disaster relief: there is an earthquake somewhere, so we open up the imagery right there, right now. Yes, but by then it’s too late! You want access to the archive long before anything happens anywhere, so people can train their neural networks in advance. Then, when something does happen, you click and enable the algorithm. Now you’re actually going from reacting to preventing and monitoring. There’s also, of course, land management and monitoring crops. The UN Environment Programme just put out a fantastic Medium post on the digital ecosystem for the planet. They are looking into how the impact of climate on food production and other societal factors will create the conflicts of tomorrow. We’re seeing that in Syria: a series of droughts has been credited with driving farmers towards the cities, which were overpopulated and rife with clashes, and that led to the terrible outcomes we know.

Being able to anticipate all that by seeing everything at scale is just mind-blowing – all the potential there! There are quite a few things that we can directly tackle, and all of them are within our reach. What stands in our way is access to the data, funding, and the companies that are putting the satellites into orbit. Buying a single snapshot of Darfur, even with the discount for NGOs, costs around four million dollars! That’s not the kind of money you have lying around as an NGO, but it’s peanuts for bigger initiatives and partnerships. We’ve been working ourselves on satellite super-resolution, which means using cheap low-resolution images instead of the expensive high-resolution ones. You get the low-res images more often – one every day – while a high-res image needs a very expensive satellite the size of a bus. Instead, you have a few satellites take several of these low-res snapshots, you combine the information, and you get high-res information out of that cheaply and at a higher frequency. That’s something you can use right there.
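The idea of combining several low-res snapshots into one high-res image can be illustrated with a classic shift-and-add scheme. This is only a minimal sketch of the general technique, not Element AI’s actual pipeline: it assumes the sub-pixel shift of each frame is already known (real systems estimate it by image registration, or learn the whole reconstruction with a neural network), and all function and variable names here are illustrative.

```python
import numpy as np

def shift_and_add(frames, shifts, scale):
    """Naive multi-frame super-resolution by shift-and-add.

    frames: list of low-res 2D arrays, all of shape (h, w)
    shifts: per-frame (dy, dx) sub-pixel offsets, in high-res pixel units
    scale:  upsampling factor of the high-res grid
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Place each low-res sample on the high-res grid,
        # offset by that frame's known sub-pixel shift.
        ys = (np.arange(h) * scale + dy).astype(int) % (h * scale)
        xs = (np.arange(w) * scale + dx).astype(int) % (w * scale)
        acc[np.ix_(ys, xs)] += frame
        weight[np.ix_(ys, xs)] += 1.0
    # Average wherever one or more samples landed.
    hit = weight > 0
    acc[hit] /= weight[hit]
    return acc

# Toy usage: four frames whose shifts tile the 2x grid reconstruct
# the scene exactly; real imagery has noise and irregular shifts.
scene = np.arange(64, dtype=float).reshape(8, 8)
shifts = [(0, 0), (0, 1), (1, 0), (1, 1)]
frames = [scene[dy::2, dx::2] for dy, dx in shifts]
recon = shift_and_add(frames, shifts, scale=2)
```

Each low-res frame contributes its samples at a slightly different position on the fine grid, which is why more frames at higher frequency can substitute for one expensive high-res acquisition.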

What do you think are the biggest risks in developing AI in general? Do you think the conversation will shift from “robots are taking our jobs” to something else anytime soon?

If you look at the risks, many are overblown, and some go beyond AI, back to the incentives that we have set as a society. AI just accelerates how we feed those incentives, their bad design and their ugly side effects. Actually, saying that robots are going to take our jobs is already more discerning than many of the other things being hyped: Terminator, Skynet, the singularity taking over humanity… There’s quite a lot of talk like that, but if you look at autonomous weapon systems, they do not take the shape of the Terminator. Look at the Slaughterbots campaign by Stuart Russell against the development of lethal autonomous weapon systems, which shows what you can already do with a small flying drone – and that’s quite scary.
Wasn’t it predicted that by now we would be working a four-hour week? That’s not what happened, even though we have computers everywhere and so many tasks have been made so much easier. In terms of jobs there is a way forward; it just takes massive political courage and a lot of fighting against misaligned incentives.

The main worry I have, and it is immediate, is mass surveillance. In a way the autonomous robots, the slaughterbots, are empowered by that same technology to recognise a face and its political affiliations. Privacy is a basic human need, even in a well-meaning environment. In Hong Kong we’re seeing protesters try all kinds of tricks to evade surveillance. They pointed lasers at cameras to defeat face recognition; that was banned, so they put on laser shows to avoid being accused of evading surveillance. This is deeply worrying to me, because with AI you can now easily scale the tracking of specific individuals. That is absolutely chilling, even if it starts with the best intentions of protecting against some terrible danger. Was it Franklin who said that those who are willing to trade privacy for security deserve neither? And it worries me because, you know, 1984 was supposed to be a warning, not a manual.

Who should have the ownership of AI solutions – companies or governments?

For all the reasons I mentioned above, it’s neither and both. I think the same applies to any technology. I’m also careful with the term AI, because it tickles people’s fantasies – the need to create in your own image, which you also find in every religion and every myth; there is a lot that we are projecting as a species. Let’s look at it more as machine learning, or the algorithmic interpretation of data. Artificial intelligence might not deliver – and that is very likely to be the case. For decades we said that once a machine beats a human at chess, we’ll have solved intelligence. Then Deep Blue did beat Garry Kasparov, and we said “ah, but that’s a dumb algorithm”. The goalposts are always moving around AI, and in the future some of these things will really become standard tools. With that in mind, going back to the question – should it be government, corporations or the third sector – it’s going to be all of the above, really, and it has to be, especially if we want to work on solving the SDGs. It takes a village to solve any single part of these problems. It will probably take the scale of corporations and some of their agility and capital, along with the drive towards public good by governments. And I think we also need regulation, which is something we still do not have.

You can watch the summit talks on their YouTube channel.

Written by
Mari Krusten

Communication Manager at e-Estonia Briefing Centre

