Emerging Technologies

COVID-19: Why we need to have tough conversations on the future of AI


We need a deeper conversation on what we need from AI in order to respond to future crises. Image: REUTERS/Daniel Becerril

Mark Esposito
Chief Learning Officer, Nexus FrontierTech, Professor at Hult International Business School
Terence Tse
Executive Director, Nexus FrontierTech, Professor of Finance, Hult International Business School
Josh Entsminger
Applied Researcher, Nexus FrontierTech

  • Our adoption of AI technology has accelerated during the pandemic.
  • Applications and experimentation have ranged from patient scanning to global case tracking and prediction, write three experts for MIT Technology Review.
  • We need a deeper conversation on what we need from AI in order to respond to future crises, without creating a deeper vacuum of rights.

These months have proven emblematic of the dangers of a hyperconnected world. Despite efforts to restrict international and domestic travel, coronavirus cases continue to grow, and to grow fast, and asymmetries are rising around the world at a pace we could scarcely have imagined when 2020 began.

Yet the digital nature of our hyperconnected world may hold some of the critical solutions needed to scale novel approaches to the wide array of problems directly and indirectly associated with the pandemic, solutions that increasingly come in digital packaging. The issue, of course, is not the virus alone: it is just as much about our reactions to it, such as knowing which resources to allocate or understanding the wider consequences for businesses trying to respond.

Among these digital systems, few are more heralded or considered with more promise than AI-powered solutions. Despite its current prominence, AI itself is anything but new. Its recent emergence stems from a series of converging trends over the last few decades: most notably, the rise of graphics processing units uniquely suited to the computational tasks of neural networks and deep learning, the rise of mass open datasets from the internet, and the expansion of new algorithms from advances in statistical learning techniques, among others. The marked increase in experimentation during the pandemic, and the ensuing interest from governments and corporations alike, represents a new state of affairs in the global conversation on AI. There is indeed new oxygen for the industry's incumbents to breathe.

Novel cases of AI use are quickly spreading across international media: rapid assessment of patient scans at scale for improved COVID-19 detection, improved accuracy in global case tracking and prediction, wide review and collection of online articles relevant for awareness and assessment, and advanced chemical analysis to assist vaccine creation. Want some examples?

From the BlueDot’s predictive awareness to Alibaba’s AI diagnostics ranging to transportation with the Hong Kong Mass Transit’s autonomous robotic cleaners and the herald of health-care AI with Boston Children’s Hospital’s HealthMap program, these programs have demonstrated a superior form of utilization of machine learning for the purpose of some form of public health imperative. But examples are also geographically diffused, and this is where examples from Chinese city Shenzhen’s MicroMultiCopter are worth a mention. And this is not all. Also noteworthy are DeepMind’s AlphaFold as well as the Center for Disease Control and Prevention’s assessment bot to finish with Facebook’s social network safety moderating. The icing on the cake comes from application with inherent ethical norms, such as BenevolentAI’s drug screening program.

Overwhelming as this list of applications may be, it demonstrates a broader public hope for, and commercial awareness of, the growing potential of AI as a fundamental piece of the modern technology landscape.

That said, and regardless of the hype, a dose of reality is needed as the demand for experimentation grows into a demand for scaling. Not all problems demand AI solutions, nor are all existing AI solutions up to the task of highly uncertain problems; above all, not all organizations are advanced enough to deploy and leverage such solutions effectively without creating second-order effects. While solutions at scale are needed, and new practices and means to experiment are in place, we need to be sure that organizations looking to put these experiments into play have a thorough understanding of what the "job to be done" really is. As with most digital transformations, such agendas are often less about the technology than about the culture, ways of working, and mental models that must change if new productivity, new opportunities, and new social advancement are actually to be achieved and sustained.

This concern extends to the question of how national governments and municipal actors look to leverage a new generation of emerging technologies to improve the speed, scale, and sophistication of responses to high-impact, low-probability events like large-scale systemic shocks. Indeed, whether for governments looking to define new sectors of strategic investment in AI competency or for firms scouring the market for proven AI applications, similar concerns need to emerge. For a more mature conversation, we need to move from what we want AI to do towards a deeper conversation on what we need from AI in order to respond to crises without creating a more fundamental vacuum of rights.

We need to go further still: despite the innovativeness of the cases already mentioned, broader strategies are needed for engaging with effective foresight the principle- and value-driven challenges brought on by AI. We need to create the means for effective conversations on whether to sacrifice privacy to ensure health-care capacity (or indeed whether that is a false dilemma), whether data ownership should be private or publicly managed, whether the potential inequality from some AI applications outweighs the benefits for those who can get access, and so on.

As states look to AI to reshape their post-pandemic response, and indeed as a core element of the future of their public health responses and, more broadly, their competitiveness agendas, we need to have hard conversations about what the value of AI really is. All of this begins with a real appreciation of what AI can and cannot do when subjected to the demands of operational improvement at scale. These conversations need to happen together, and they need to happen now, to build better frameworks of use. Otherwise the huge potential of these technologies will be of no avail for the betterment of society when we need it the most.
