The new geopolitics of artificial intelligence

Sophia is a humanoid robot that can converse and make realistic facial movements. Image: REUTERS/Valentyn Ogirenko

Eleonore Pauwels
Senior Fellow, Global Center on Cooperative Security

The multilateral system urgently needs to help build a new social contract to ensure that technological innovation, in particular artificial intelligence (AI), is deployed safely and aligned with the ethical needs of a globalizing world.

Swarms of bots, Facebook dark posts and fake news websites have claimed online territory, with significant repercussions globally. Just consider a few recent events: in the 2016 US presidential elections, Russia empowered one candidate over another through a massive campaign that included paid ads, fake social media accounts and polarizing content.

In China, tech giants Alibaba and Tencent have deployed millions of cameras equipped with facial recognition to commodify continuous streams of intimate data about citizens. In Myanmar, a UN report confirmed that Facebook posts have fuelled virulent hate speech directed at Rohingya Muslims.

The powerful and lucrative alliance between AI and a data-driven society has made social networks the architects of our exchanges, the new masters reshaping the very fabric of reality.

In this context, public anxiety is rising about the loss of control to an algorithmic revolution, which seems to escape our modes of understanding and accountability. Trust in national and global governance is at breaking point.

Concurrently, AI-driven technologies will tend to undermine, rather than reinforce, global governance mechanisms. The UN faces a sweeping set of interrelated challenges. Let’s look at three.

AI and degradation of truth

First, AI is inherently a dual-use technology whose powerful implications (both positive and negative) will be increasingly difficult to anticipate, contain and mitigate.

Take deepfakes as an example. Sophisticated AI programmes can now manipulate sounds, images and videos, creating impersonations that are often impossible to distinguish from the original. Deep-learning algorithms can, with surprising accuracy, read human lips, synthesize speech and, to some extent, simulate facial expressions.

Once released outside the lab, such simulations could easily be misused, with wide-ranging impacts (indeed, this is already happening at a low level). On the eve of an election, deepfake videos could falsely portray public officials as being involved in money-laundering; public panic could be sown by videos warning of non-existent epidemics or cyberattacks; and forged incidents could potentially lead to international escalation.

The capacity of a range of actors to influence public opinion with misleading simulations could have powerful long-term implications for the UN’s role in peace and security. By eroding the sense of trust and truth between citizens and the state - and indeed among states - truly fake news could be deeply corrosive to our global governance system.

AI and precision surveillance

Second, AI is already connecting and converging with a range of other technologies, including biotech, with significant implications for global security. AI systems around the world are trained to predict various aspects of our daily lives by making sense of massive data sets, such as cities’ traffic patterns, financial markets, consumer behaviour trend data, health records and even our genomes.

These AI technologies are increasingly able to harness our behavioural and biological data in innovative and often manipulative ways, with implications for all of us. For example, the My Friend Cayla smart doll sends voice and emotion data from the children who play with it to the cloud, which led to a US Federal Trade Commission complaint and to the doll being banned in Germany. In the US, emotional analysis is already being used in the courtroom to detect remorse in deposition videos. It could soon be part of job interviews, used to assess candidates’ responses and their fitness for a role.

The ability of AI to intrude upon - and potentially control - private human behaviour has direct implications for the UN’s human rights agenda. New forms of social and bio-control could in fact require a reimagining of the framework currently in place to monitor and implement the Universal Declaration of Human Rights, and will certainly require the multilateral system to better anticipate and understand this rapidly emerging field.

Battlefield AI

Finally, the ability of AI-driven technologies to influence large populations is of such immediate and overriding value that it is almost certain to be the theatre for future conflicts. There is a very real prospect of a “cyber race”, in which powerful nations and large technology platforms enter into open competition for our collective data, as fuel to generate economic, medical and security supremacy across the globe. Forms of “cyber-colonization” are increasingly likely, as powerful states are able to harness AI and biotech to understand and potentially control other countries’ populations and ecosystems.

Towards global governance of AI

Politically, legally and ethically, our societies are not prepared for the deployment of AI. And the UN, established many decades before the emergence of these technologies, is in many ways poorly placed to develop the kind of responsible governance that will channel AI’s potential away from these risks, and towards our collective safety and well-being.

In fact, the resurgence of nationalist agendas across the world may point to a dwindling capacity of the multilateral system to play a meaningful role in the global governance of AI. Major corporations and powerful member states may see little value in bringing multilateral approaches to bear on what they consider lucrative and proprietary technologies.

But there are some innovative ways in which the UN can help build the kind of collaborative, transparent networks that may begin to treat our “trust-deficit disorder”.

Spurred on by a mandate given to the United Nations University (UNU) in the Secretary-General’s Strategy on New Technologies, the Centre for Policy Research at UNU has created an “AI and Global Governance Platform” as an inclusive space for researchers, policy actors, corporate leaders and thought leaders to explore the global policy challenges raised by artificial intelligence.

Drawing on submissions from leaders in the field around the world, the platform aims to foster unique cross-disciplinary insights that inform existing debates through the lens of multilateralism, coupled with lessons learned on the ground. These insights will support UN member states, multilateral agencies, funds, programmes and other stakeholders as they consider both their own and their collective roles in shaping the governance of AI.

Perhaps the most important challenge for the UN in this context is one of relevance, of re-establishing a sense of trust in the multilateral system. But if the above trends tell us anything, it is that AI-driven technologies are an issue for every individual and every state, and that without collective, collaborative forms of governance, there is a real risk that they will undermine global stability.
