
Chatbots are on the rise. This approach accounts for their risks. 

Firms lack the necessary understanding of this enormously useful technology and its implications to be able to develop the governance actions that will soon be needed for compliance. Image: FLY:D/Unsplash

Venkataraman Sundareswaran
  • Regulations for AI are falling behind the exponential growth of its use.
  • Developing an internal governance framework for "high-risk" chatbots would take a company considerable time and resources.
  • A governance framework for chatbots now exists, giving small and medium enterprises the opportunity to demonstrate leadership in the responsible use of AI.

Artificial Intelligence (AI) is expanding at breakneck pace into products and services that we use daily. There are, however, well-known ethical concerns about its use. Perhaps the most widely discussed is facial recognition in public places and its impact on privacy.

A wide array of AI applications raises ethical concerns, including bias, transparency and explainability. Despite this, regulation of AI use is seriously lacking: the incremental, deliberate processes behind regulation are falling behind the exponential growth in the use of AI.

The European Commission stepped up to this challenge with its release, in April 2021, of proposed new actions and rules for trustworthy AI. It was arguably the first comprehensive examination of the new norms needed to ensure that AI can be trusted.

How chatbots are set to chat even louder

Conversational AI is one of the most popular uses of the technology. It has found its way into smart speakers from Amazon and Google, and into smartphones through voice assistants such as Siri and Google Assistant. The number of smart speaker owners in the US was estimated to reach nearly 94 million in 2021.

Conversational AI also powers chatbots, which have been widely deployed on websites, social media platforms and smartphone apps. The COVID-19 pandemic accelerated the use of chatbots in the healthcare sector.

The EU proposal takes a risk-based approach, sorting AI applications into four categories: unacceptable risk, high risk, limited risk and minimal risk. For example, remote biometric identification (such as facial recognition) is classified as high-risk. This makes sense, as concerns about facial recognition can escalate quickly into population-scale crises.
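To make the tiered structure concrete, here is a minimal Python sketch of the four categories. The example applications and one-line obligation summaries are an illustrative reading of the proposal, not a legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers in the EU proposal, from most to least restricted."""
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright
    HIGH = "high risk"                  # strict obligations before deployment
    LIMITED = "limited risk"            # transparency duties only
    MINIMAL = "minimal risk"            # no new obligations

# Illustrative mapping of example applications to tiers, following the
# article's reading of the proposal (not a legal classification).
EXAMPLES = {
    "remote biometric identification": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """One-line summary of what each tier demands of a provider."""
    return {
        RiskTier.UNACCEPTABLE: "banned from the EU market",
        RiskTier.HIGH: "risk management, conformity assessment, human oversight",
        RiskTier.LIMITED: "disclose to users that they are interacting with AI",
        RiskTier.MINIMAL: "no additional obligations",
    }[tier]

for app, tier in EXAMPLES.items():
    print(f"{app}: {tier.value} -> {obligations(tier)}")
```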


However, the proposal cites chatbots as limited-risk AI systems and cautions that “users should be aware that they are interacting with a machine so they can take an informed decision to continue or step back.”
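That transparency duty is straightforward to operationalize. The sketch below shows one way a chatbot could disclose its non-human identity and let users take that informed decision to continue or step back; the wording and the HUMAN/STOP commands are hypothetical, since the proposal specifies the outcome, not the mechanism.

```python
def start_session(user_name: str) -> str:
    """Open a session with the machine disclosure the EU proposal expects.

    The wording and the HUMAN/STOP commands are hypothetical; the proposal
    requires the outcome (an informed user), not any particular mechanism.
    """
    return (
        f"Hi {user_name}, I'm an automated assistant, not a human. "
        "Reply HUMAN to be transferred to a person, or STOP to end the chat."
    )

def handle_message(text: str) -> str:
    """Route the user's informed decision to continue or step back."""
    command = text.strip().upper()
    if command == "STOP":
        return "Conversation ended."
    if command == "HUMAN":
        return "Transferring you to a human agent."
    # Placeholder: a real bot would call its language model here.
    return "Noted. (Automated answer would follow here.)"

print(start_session("Sam"))
print(handle_message("HUMAN"))
```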

This binary decision (to use or not to use), while easy to suggest, is not always the best approach to regulation, because the use of chatbots in society-critical areas such as healthcare brings many benefits. These include ease of use, 24/7 access, low cost, reusability, consistency and ease of deployment.

Figure 1: Benefits of chatbots in healthcare. Image: World Economic Forum

With all this in mind, the acceleration in the use of chatbots in healthcare will most likely continue beyond the pandemic, providing useful healthcare access and related services to vast populations. For example, patients of the National Health Service in the UK have access to a chatbot for the triage of “urgent but non-life-threatening conditions.”

Countries like Rwanda are investing seriously in this technology to extend healthcare access to remote populations. Such widespread availability and ease of use are only going to increase the uptake of this technology, rather than force people into the kind of binary choice suggested in the EU proposal.

Bad chat happens

While there are many benefits to using chatbots in healthcare, there are risks as well. For example, when a chatbot is used to triage patients, extreme care must be taken to ensure that the AI behind the triage decision has been trained on data that accurately represents the population using the chatbot.

This is not easy to do, because securing historical data from the target population is not always possible. Reasons include local laws on the use of healthcare data, the rigor of past data collection and the cost of converting data into usable form.
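As a minimal illustration of what "representative" can mean in practice, the sketch below compares the demographic mix of a hypothetical training set against the target population and flags groups that are over- or under-represented. The groups, numbers and tolerance are invented for illustration; real representativeness checks would be far more thorough.

```python
def representation_gaps(train_counts: dict[str, int],
                        population_share: dict[str, float],
                        tolerance: float = 0.05) -> dict[str, float]:
    """Flag groups whose share of the training data deviates from their
    share of the target population by more than `tolerance`.

    train_counts: group -> number of training examples
    population_share: group -> expected fraction (should sum to ~1)
    """
    total = sum(train_counts.values())
    gaps = {}
    for group, expected in population_share.items():
        observed = train_counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = round(observed - expected, 3)
    return gaps

# Invented numbers for illustration only -- not real healthcare data.
# Younger patients are over-represented; older patients are nearly missing.
print(representation_gaps(
    train_counts={"under_40": 7000, "40_to_65": 2500, "over_65": 500},
    population_share={"under_40": 0.45, "40_to_65": 0.35, "over_65": 0.20},
))  # {'under_40': 0.25, '40_to_65': -0.1, 'over_65': -0.15}
```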

Other concerns include the efficacy of chatbots in the target healthcare application, the lack of room for humans to intervene in or override a chatbot's suggestions, insufficient transparency around the decisions chatbots make, and questions about how well the data collected by chatbots is secured. For a more detailed discussion, see this report.
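One common mitigation for the override and transparency concerns is a human-in-the-loop gate. The pattern sketched below is a hypothetical design, not drawn from the report: low-confidence or high-urgency suggestions are escalated to a clinician, and every suggestion is written to an audit log.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("triage-audit")

@dataclass
class TriageSuggestion:
    urgency: str       # e.g. "self-care", "GP visit", "urgent care"
    confidence: float  # model confidence in [0, 1]
    rationale: str     # plain-language explanation, shown to the reviewer

def route(s: TriageSuggestion, threshold: float = 0.85) -> str:
    """Escalate low-confidence or high-urgency suggestions to a clinician.

    The threshold and escalation rule are illustrative policy choices.
    """
    # Every suggestion is logged, so decisions can be audited later.
    audit_log.info("urgency=%s confidence=%.2f", s.urgency, s.confidence)
    if s.confidence < threshold or s.urgency == "urgent care":
        return f"ESCALATE to clinician: {s.rationale}"
    return f"Auto-reply to patient: {s.urgency} ({s.rationale})"

print(route(TriageSuggestion("self-care", 0.65, "mild symptoms, low risk")))
```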

Admittedly, the EU proposal does not go into the kind of detail presented here, and it leaves room to “reclassify” applications into higher-risk categories. Under the proposal, chatbots can be classified as high-risk if they target “essential private and public services”, which presumably includes healthcare. But chatbots used in healthcare must be classified as high-risk if concerns with this use, such as those mentioned above, are to be adequately addressed.

How to get a running start

While we suggest categorizing chatbot use in healthcare as high-risk, we also recognise that most companies innovating in this space are small and medium enterprises that may not have the bandwidth or resources to develop the internal governance needed.

Even larger enterprises involved in the use of this technology, such as hospitals, insurance companies and government regulators, often lack the understanding of the technology and its implications needed to develop governance actions at speed. Indeed, developing an internal governance framework takes considerable time and resources.

To address this challenge, we brought together stakeholders from all parts of the ecosystem – including chatbot developers, platform providers, medical professionals, academia, civil society and governments – to develop a governance framework, which is available for anyone to download and use.

The framework contains 10 principles (Figure 2 below), carefully curated by the multi-stakeholder participants. More importantly, for each of the 10 principles the authors provide a set of actions that can be used to “operationalize” it.

Figure 2: Out-of-the-box principles available in the Chatbots RESET framework. Image: World Economic Forum
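To illustrate what "operationalizing" a principle might look like inside a development plan, here is a hypothetical sketch of principles mapped to concrete actions with a simple compliance check. The two principles and their actions below are abbreviated paraphrases for illustration only; the actual 10 principles and their actions are in the downloadable framework.

```python
# Abbreviated, paraphrased entries for illustration only; the real
# principles and actions are in the Chatbots RESET framework report.
FRAMEWORK = {
    "Disclose non-human identity": [
        "Show a machine disclosure at session start",
        "Offer a route to a human at any point",
    ],
    "Protect user data": [
        "Encrypt conversation logs at rest and in transit",
        "Define and enforce a data-retention period",
    ],
}

def compliance_report(completed: set[str]) -> None:
    """Print which operationalizing actions are still open per principle."""
    for principle, actions in FRAMEWORK.items():
        open_actions = [a for a in actions if a not in completed]
        status = "OK" if not open_actions else f"{len(open_actions)} open"
        print(f"{principle}: {status}")
        for action in open_actions:
            print(f"  - TODO: {action}")

compliance_report({"Show a machine disclosure at session start"})
```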

By incorporating these actions into their chatbot development or deployment plans, the various players in the ecosystem (developers, providers and regulators) can quickly set themselves on a path to compliance with upcoming regulations, such as the EU's.

Finally, using the framework gives small and medium enterprises the opportunity to demonstrate leadership in the responsible use of AI chatbots in healthcare, in addition to giving them a running start.
