
Read Yuval Harari's blistering warning to Davos in full


Harari speaking at the 'How to Survive the 21st Century' session. Image: Boris Baldinger

Yuval Harari
Professor, Department of History, Hebrew University of Jerusalem
This article is part of: World Economic Forum Annual Meeting
  • Humanity faces three existential threats this century, warned historian Yuval Harari at Davos 2020.
  • Technology risks dividing the world into wealthy elites and exploited "data colonies," he explained.
  • "If you like the World Cup - you are already a globalist," he said, making the case for better cooperation to tackle the challenges.
How to survive the 21st century. Image: All images by Arturo Rago, World Economic Forum

As we enter the third decade of the twenty-first century, humanity faces so many issues and questions that it is really hard to know what to focus on. So I would like to use the next twenty minutes to help us focus. Of all the different issues we face, three problems pose existential challenges to our species.

These three existential challenges are nuclear war, ecological collapse and technological disruption. We should focus on them.

The challenges we should focus on.

Now nuclear war and ecological collapse are already familiar threats, so let me spend some time explaining the less familiar threat posed by technological disruption.


In Davos we hear so much about the enormous promises of technology – and these promises are certainly real. But technology might also disrupt human society and the very meaning of human life in numerous ways, ranging from the creation of a global useless class to the rise of data colonialism and of digital dictatorships.

Technology has the potential to be highly disruptive.

First, we might face upheavals on the social and economic level.

Automation will soon eliminate millions upon millions of jobs, and while new jobs will certainly be created, it is unclear whether people will be able to learn the necessary new skills fast enough. Suppose you are a fifty-year-old truck driver, and you just lost your job to a self-driving vehicle. Now there are new jobs in designing software or in teaching yoga to engineers – but how does a fifty-year-old truck driver reinvent himself or herself as a software engineer or as a yoga teacher? And people will have to do it not just once but again and again throughout their lives, because the automation revolution will not be a single watershed event, following which the job market will settle down into a new equilibrium. Rather, it will be a cascade of ever bigger disruptions, because AI is nowhere near its full potential.

Old jobs will disappear, new jobs will emerge, but then the new jobs will rapidly change and vanish. Whereas in the past humans had to struggle against exploitation, in the twenty-first century the really big struggle will be against irrelevance. And it is much worse to be irrelevant than exploited.

Could automation create a 'useless class'?

Those who fail in the struggle against irrelevance would constitute a new “useless class” – people who are useless not from the viewpoint of their friends and family, but useless from the viewpoint of the economic and political system. And this useless class will be separated by an ever-growing gap from the ever more powerful elite.

The AI revolution might create unprecedented inequality not just between classes but also between countries.

In the nineteenth century, a few countries like Britain and Japan industrialized first, and they went on to conquer and exploit most of the world. If we aren’t careful, the same thing will happen in the twenty-first century with AI.

We are already in the midst of an AI arms race, with China and the USA leading the race, and most countries being left far, far behind. Unless we take action to distribute the benefits and power of AI among all humans, AI will likely create immense wealth in a few high-tech hubs, while other countries will either go bankrupt or become exploited data colonies.


Now we aren’t talking here about a science fiction scenario of robots rebelling against humans. We are talking about far more primitive AI, which is nevertheless enough to disrupt the global balance.

Just think what will happen to developing economies once it is cheaper to produce textiles or cars in California than in Mexico. And what will happen to politics in your country in twenty years, when somebody in San Francisco or Beijing knows the entire medical and personal history of every politician, every judge and every journalist in your country, including all their sexual escapades, all their mental weaknesses and all their corrupt dealings? Will it still be an independent country, or will it become a data colony?

When you have enough data, you don't need to send soldiers in order to control a country.

Alongside inequality, the other major danger we face is the rise of digital dictatorships that will monitor everyone all the time.

Does the future hold a digital dictatorship?

This danger can be stated in the form of a simple equation, which I think might be the defining equation of life in the twenty-first century:

B x C x D = AHH!

Which means? Biological knowledge multiplied by computing power multiplied by data equals the ability to hack humans, ahh.

A dangerous equation.

If you know enough biology and have enough computing power and data, you can hack my body and my brain and my life, and you can understand me better than I understand myself. You can know my personality type, my political views, my sexual preferences, my mental weaknesses, my deepest fears and hopes. You know more about me than I know about myself. And you can do that not just to me, but to everyone.

A system that understands us better than we understand ourselves can predict our feelings and decisions, can manipulate our feelings and decisions, and can ultimately make decisions for us.

Now in the past, many governments and tyrants wanted to do it, but nobody understood biology well enough and nobody had enough computing power and data to hack millions of people. Neither the Gestapo nor the KGB could do it. But soon at least some corporations and governments will be able to systematically hack all the people. We humans should get used to the idea that we are no longer mysterious souls – we are now hackable animals. That's what we are.

The power to hack humans can be used for good purposes – like providing much better healthcare. But if this power falls into the hands of a twenty-first-century Stalin, the result will be the worst totalitarian regime in human history. And we already have a number of applicants for the job of twenty-first-century Stalin.

Just imagine North Korea in twenty years, when everybody has to wear a biometric bracelet which constantly monitors your blood pressure, your heart rate and your brain activity twenty-four hours a day. You listen to a speech on the radio by the great leader, and they know what you actually feel. You can clap your hands and smile, but if you're angry, they know – and you'll be in the gulag tomorrow.

And if we allow the emergence of such total surveillance regimes, don’t think that the rich and powerful in places like Davos will be safe, just ask Jeff Bezos. In Stalin’s USSR, the state monitored members of the communist elite more than anyone else. The same will be true of future total surveillance regimes. The higher you are in the hierarchy – the more closely you’ll be watched.

Do you want your CEO or your president to know what you really think about them?

So it is in the interest of all humans, including the elites, to prevent the rise of such digital dictatorships. And in the meantime, if you get a suspicious WhatsApp message, from some Prince, don't open it.

Now if we indeed prevent the establishment of digital dictatorships, the ability to hack humans might still undermine the very meaning of human freedom. Because as humans rely on AI to make more and more decisions for us, authority will shift from humans to algorithms – and this is already happening.

Already today, billions of people trust the Facebook algorithm to tell us what is new, the Google algorithm to tell us what is true, Netflix to tell us what to watch, and the Amazon and Alibaba algorithms to tell us what to buy.

In the not-so-distant future, similar algorithms might tell us where to work and who to marry, and also decide whether to hire us for a job, whether to give us a loan, and whether the central bank should raise the interest rate.

And if you ask why you were not given a loan, or why the bank didn't raise the interest rate, the answer will always be the same – because the computer says no. And since the limited human brain lacks sufficient biological knowledge, computing power and data – humans will simply not be able to understand the computer’s decisions.

So even in supposedly free countries, humans are likely to lose control over our own lives and also lose the ability to understand public policy.

Already now how many humans understand the financial system? Maybe one percent to be very generous. In a couple of decades, the number of humans capable of understanding the financial system will be exactly zero.

Now we humans are used to thinking about life as a drama of decision-making. What will be the meaning of human life, when most decisions are taken by algorithms? We don’t even have philosophical models to understand such an existence.

For better or for worse?

The usual bargain between philosophers and politicians is that philosophers have a lot of fanciful ideas, and politicians basically explain that they lack the means to implement these ideas. Now we are in an opposite situation. We are facing philosophical bankruptcy.

The twin revolutions of infotech and biotech are now giving politicians the means to create heaven or hell, but the philosophers are having trouble conceptualizing what the new heaven and the new hell will look like. And that’s a very dangerous situation.

If we fail to conceptualize the new heaven quickly enough, we might be easily misled by naïve utopias. And if we fail to conceptualize the new hell quickly enough, we might find ourselves entrapped there with no way out.

Will philosophy be able to keep up with machines?

Finally, technology might disrupt not just our economy, politics and philosophy – but also our biology.

In the coming decades, AI and biotechnology will give us godlike abilities to reengineer life, and even to create completely new life-forms. After four billion years of organic life shaped by natural selection, we are about to enter a new era of inorganic life shaped by intelligent design.

Our intelligent design is going to be the new driving force of the evolution of life, and in using our new divine powers of creation we might make mistakes on a cosmic scale. In particular, governments, corporations and armies are likely to use technology to enhance human skills that they need – like intelligence and discipline – while neglecting other human skills – like compassion, artistic sensitivity and spirituality.

The result might be a race of humans who are very intelligent and very disciplined but lack compassion, lack artistic sensitivity and lack spiritual depth. Of course, this is not a prophecy. These are just possibilities. Technology is never deterministic.

The future isn't set in stone.

In the twentieth century, people used the same industrial technology to build very different kinds of societies: fascist dictatorships, communist regimes, liberal democracies. The same thing will happen in the twenty-first century.

AI and biotech will certainly transform the world, but we can use them to create very different kinds of societies. And if you're afraid of some of the possibilities I’ve mentioned, you can still do something about it. But to do something effective, we need global cooperation.

All three existential challenges we face are global problems that demand global solutions.

Whenever a leader says something like “My Country First!” we should remind that leader that no nation can prevent nuclear war or stop ecological collapse by itself, and no nation can regulate AI and bioengineering by itself.

Play at your own risk.

Almost every country will say: “Hey, we don’t want to develop killer robots or to genetically engineer human babies. We are the good guys. But we can't trust our rivals not to do it. So we must do it first.”

If we allow such an arms race to develop in fields like AI and bioengineering, it doesn’t really matter who wins the arms race – the loser will be humanity.

Game over.

Unfortunately, just when global cooperation is more needed than ever before, some of the most powerful leaders and countries in the world are now deliberately undermining global cooperation. Leaders like the US president tell us that there is an inherent contradiction between nationalism and globalism, and that we should choose nationalism and reject globalism.

But this is a dangerous mistake. There is no contradiction between nationalism and globalism. Because nationalism isn’t about hating foreigners. Nationalism is about loving your compatriots. And in the twenty-first century, in order to protect the safety and the future of your compatriots, you must cooperate with foreigners.

Nationalism and globalism aren't mutually exclusive.

So in the twenty-first century, good nationalists must also be globalists. Now globalism doesn’t mean establishing a global government, abandoning all national traditions, or opening the border to unlimited immigration. Rather, globalism means a commitment to some global rules.

Rules that don’t deny the uniqueness of each nation, but only regulate the relations between nations.

And a good model is the Football World Cup.

The World Cup is a competition between nations, and people often show fierce loyalty to their national team. But at the same time the World Cup is also an amazing display of global harmony. France can't play football against Croatia unless the French and the Croatians agree on the same rules for the game. And that’s globalism in action.

Global solutions for global problems.

If you like the World Cup – you are already a globalist.

Now hopefully, nations could agree on global rules not just for football, but also for how to prevent ecological collapse, how to regulate dangerous technologies, and how to reduce global inequality. How to make sure, for example, that AI benefits Mexican textile workers and not only American software engineers. Of course, this is going to be much more difficult than football – but not impossible. Because we have already accomplished the impossible.

We have already escaped the violent jungle in which we humans have lived throughout history. For thousands of years, humans lived under the law of the jungle in a condition of omnipresent war. The law of the jungle said that for every two nearby countries, there is a plausible scenario that they will go to war against each other next year. Under this law, peace meant only “the temporary absence of war”.

When there was “peace” between – say – Athens and Sparta, or France and Germany, it meant that now they are not at war, but next year they might be. And for thousands of years, people had assumed that it was impossible to escape this law.

Have we broken the law of the jungle?

But in the last few decades, humanity has managed to do the impossible, to break the law, and to escape the jungle. We have built the rule-based liberal global order, which, despite many imperfections, has nevertheless created the most prosperous and most peaceful era in human history.

The very meaning of the word “peace” has changed.

“Peace” no longer means just the temporary absence of war. Peace now means the implausibility of war.

There are many countries which you simply cannot imagine going to war against each other next year – like France and Germany. There are still wars in some parts of the world. I come from the Middle East, so believe me, I know this perfectly well. But it shouldn't blind us to the overall global picture.

Causes of Death in 2016 - obesity, diabetes and more

We are now living in a world in which war kills fewer people than suicide, and gunpowder is far less dangerous to your life than sugar. Most countries – with some notable exceptions like Russia – don’t even fantasize about conquering and annexing their neighbors. Which is why most countries can afford to spend maybe just about two percent of their GDP on defense, while spending far, far more on education and healthcare. This is not a jungle.

Unfortunately, we have gotten so used to this wonderful situation that we take it for granted, and we are therefore becoming extremely careless. Instead of doing everything we can to strengthen the fragile global order, countries neglect it and even deliberately undermine it.

The global order is now like a house that everybody inhabits and nobody repairs. It can hold on for a few more years, but if we continue like this, it will collapse – and we will find ourselves back in the jungle of omnipresent war.

We have forgotten what it's like, but believe me as a historian – you don’t want to be back there. It is far, far worse than you imagine.

Yes, our species has evolved in that jungle and lived and even prospered there for thousands of years, but if we return there now, with the powerful new technologies of the twenty-first century, our species will probably annihilate itself.

What will be left?

Of course, even if we disappear, it will not be the end of the world. Something will survive us. Perhaps the rats will eventually take over and rebuild civilization. Perhaps, then, the rats will learn from our mistakes.

But I very much hope we can rely on the leaders assembled here, and not on the rats.

Thank you.
