How we can ensure AI develops as a force for good rather than harm

AI development is unfolding against the backdrop of an increasingly contested geopolitical and geotechnological environment. Image: REUTERS/Michael Buholzer

A convergence of the informational, kinetic and biological worlds, artificial intelligence (AI) animates the imagination for many reasons. Its simulation of human-learning processes, including self-correction and refinement of reasoning, offers quicker, tailored possibilities beyond simple automation. In a greying world, where life expectancies are lengthening and fertility rates are falling, smarter machines in the service of (wo)man make sense.

AI will only grow in appeal as economies and militaries seek enhanced operational and industrial efficiencies; as connectivity improves with lower-latency capabilities; and as the ‘Internet of Things’ becomes more ubiquitous.

There are two matters of digital disjuncture related to AI that warrant closer treatment: the first concerns the evolving global order; the second, inclusivity in governance.

Worldwide spending on AI systems is estimated to reach US$79.2 billion by 2022, more than double the US$38.8 billion forecast for this year. Although much of the investment in AI hardware and software is being driven by the private sector in industries such as retail, banking and manufacturing, AI development is unfolding against the backdrop of an increasingly contested geopolitical and geotechnological environment.

The United States and Western Europe currently lead spending on AI systems, yet the strongest growth is forecast for Asia. China is expected to account for nearly two-thirds of regional spending on AI systems in the next few years, and the rest of the Asia-Pacific will demonstrate the fastest AI adoption rate in the world. This is hardly surprising given the region’s developing base and relatively steady economic growth rates. In the next few years, most AI investment in the Asia-Pacific will be focused on building infrastructure – both hardware and software.

5G connectivity

In order to leverage AI to its full potential, deployment must be underpinned by sufficient bandwidth and low-latency connectivity. In the near future, the ‘Fourth Industrial Revolution’ will surf the waves of fifth-generation (5G) connectivity. The question that follows, then, is who will provide that 5G support in the core and on the edge? If national power is determined by industrial and economic competitiveness, and technology holds the key to both, then how will technology intersect with the global order? Indeed, what will a global rules-based order look like 10 to 50 years from now? Are there alternatives beyond existing reductive narratives of a liberal, rules-based order versus authoritarianism?

If the cyber-focused discussions in the United Nations on ‘developments in the field of information and telecommunications in the context of international security’ are any indication, debates on the governance of AI may similarly – and regrettably – fall into the binary pitfalls of ‘the West (plus the like-minded)’ and ‘the rest’. The reality is that there are distinct outlooks within and between these two camps on how the application of technology should be governed across borders. It makes more sense to identify and connect the commonalities that do exist than to further fragment the emergent technological space. The latter course flies in the face of the reality of interdependence.

Beyond the developed world

There are already efforts to outline a shared understanding on responsible behaviour with regard to AI. The Organisation for Economic Co-operation and Development recently unveiled a non-binding set of principles for AI adopted by 42 countries. Likewise, the European Commission, G20, and Nordic and Baltic states have all published documents related to the development, deployment and uptake of AI. These are encouraging steps in the right direction. Yet, for there to be meaningful international cooperation and partnerships on AI, there must correspondingly be greater, more active participation in conversations on the governance frameworks of AI from beyond the developed world. Demographic trends underscore this point.

Between now and 2050, more than half of the projected growth in the global population is expected to take place in sub-Saharan Africa. By the turn of the century, the United Nations estimates that Africa’s population will grow to 4.3 billion, just under the 4.7 billion projected for Asia. India is due to overtake China as the world’s most populous country around 2027. If one of the promises of AI is to positively impact the lives of those beyond the developed world, then those beneficiaries should have agency and representation in determining how AI should and will change their lives.

If the world’s largest future populations are to be concentrated in Africa and Asia, then it seems reasonable to expect governance structures of AI to also reflect the standpoints, expectations and value systems that may be unique to those regions.

Inclusivity by design

Like security, inclusivity in AI systems and processes should be built in rather than patched on at a later date. Given industry leadership in some countries and prominent government guidance in others, cooperation in AI development and deployment will necessitate a multi-stakeholder effort at the national, regional and international levels.

Firstly, there must be conversations between the technological haves and the have-nots. These exchanges should take place on a public–private–people plane: between governments and their populations; between the public and private sectors; and between and among governments. These consultations may occur at the formal, official level. But they should be supported and complemented by candid, informed deliberations at the Track 1.5 level where ideas can be socialised, perspectives can be challenged, and positions can be refined. Think tanks, in particular, can offer a valuable arena for open minds to meet behind closed doors.

Secondly, it is imperative that countries in the middle be part of these conversations, either as active participants or proactive initiators. Engaging among themselves and with the major AI players can help keep communication channels open as sharpening political divides spill over into the technological realm.

Thirdly, these conversations should take place now, rather than later when governance models have already begun to crystallise and there is less incentive for the accommodation of divergent views.

Strength in numbers

A country’s interest and participation in AI discussions will naturally be a function of its priorities and resources. For most, interest and resources rarely match. Nevertheless, this should not deter less technologically mature countries from helping to shape the international governance structures of AI, especially if they are to be its largest consumers in the long run. As smaller countries often rely on strength in numbers, regional organisations such as the Association of Southeast Asian Nations can play a key role in compensating for size.

Fourthly, discussions are most constructive when they are informed. Government policies are usually strengthened by domestic consultations, which, in turn, are most useful when stakeholders are aware of the issues and understand their impact and implications.

Deliberations on technology can often seem daunting, but this is where capacity-building fits in. In the same way that the tech sector currently assists with digital-literacy and cyber-hygiene campaigns, industry leaders can help demystify AI and provoke deliberation on its way forward. The state of the international cyber-norms discussion indicates that industry, by virtue of its role as innovator, service provider and first-line crisis responder, will similarly provide AI thought leadership on key issues in concert with, parallel to or ahead of government. This is, in fact, already happening among major individual players such as Google, Intel, Microsoft and Telefónica.

Fifthly, partnerships are most meaningful when they are open and collaborative. There are two aspects to this, the first of which is that expert partnerships should cut across different backgrounds. The Partnership on Artificial Intelligence to Benefit People and Society – a largely American grouping that includes think tanks, civil society, academia, international organisations and even the Chinese tech giant Baidu – shows that cooperation on AI can and must bridge geotechnological lines.

This becomes even more important against the backdrop of worsening political and trade relations between the world’s two major powers. The second aspect is that capacity-building relationships should be viewed beyond a purely transactional lens. It would be naive to think of capacity-building as purely magnanimous, of course. However, with AI, a genuine two-way partnership can build capacity on both sides even if one party is more technologically mature than the other. For example, a provider’s own technical capacity could be contextually enhanced by a better understanding of the recipient’s cultural nuances. This would, in turn, improve machine learning while empowering both parties in different but mutually advantageous ways.

National power

As technology looks set to increasingly become a determinant of national power, political and strategic tensions will coalesce and intensify around developments such as AI. Correspondingly, market access and dominance, as well as technical standards-setting for the next generation of technological infrastructure, will become greater points of contention. This will have significant implications for multinational providers servicing publics around the world.

In the long view, these developments could reshape political conventions, economic and trade regulations, and the application of international law. In other words, they could disrupt the current global order. As in the digital world, and setting aside the hype, this may not necessarily be a bad thing. Context, communication and cooperation are therefore critical to ensuring that AI develops as a force for good rather than harm. The case for inclusive, transparent and representative AI governance structures could not be more important.

Artificial intelligence: the case for international cooperation, Elina Noor, the International Institute for Strategic Studies
