It’s time to change the debate around AI ethics. Here's how

Misinformation about the development, complexity and riskiness of AI technology is preventing proper debate. Image: Unsplash / @thisisengineering

Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
Edward Kwartler
VP, Trusted AI, DataRobot
Sarah Khatry
Managing Director, AI Ethics, DataRobot
  • Heated debate about the development of Artificial Intelligence (AI) is often shaped by ethical concerns that can create fear of the technology.
  • Misinformation about the development, complexity and riskiness of AI technology is preventing proper debate on the issue.
  • This technology can and should be under our control, however, and this realisation will help highlight the benefits of continued AI development.

The current conversation around AI, ethics and the benefits for our global community is a heated one. The combination of high stakes and a complex, rapidly adopted technology has lent this discussion a very real urgency and intensity.

Promoters of the technology love to position AI as a welcome disruptor that could bring about a global revolution. Meanwhile, detractors lean into the potential for disaster: the possibility of AI super-intelligence, thorny ethical questions like the classic trolley problem, and the very real consequences of algorithmic bias.

It’s all too easy to get caught up in the hype and create a situation in which the world does not fully benefit from the development of AI technology. Instead, we should take a moment to assemble a critical perspective on the many voices fighting for our attention on AI ethics.

Some of these voices belong to businesses that know they have been too slow to adopt AI. Others come from businesses that dove into AI early and have exploited the confusion and lack of regulation to pursue bad practices. Finally, in this age of influencers, there are those who broach the subject of AI ethics to grow their personal brand, sometimes without the required expertise.

Establishing the facts

Clearly, it’s a minefield out there, but we must brave it. This conversation is too important to neglect. With that in mind, here are some key facts to help inform the debate:

1. Reports about the dawn of Artificial General Intelligence (AGI) have been grossly exaggerated.

AGI refers to a broader form of machine intelligence than standard AI. It covers machines with a range of cognitive capabilities and the ability to learn and plan for the future. It’s the real-life realisation of the technology of science fiction books and movies, where computers rival humans in terms of intelligence and reason.

The more we have learned about AI over the decades, however, the less optimistic our estimates of AGI’s arrival have become. Instead, almost all AI systems in our modern world belong to a subcategory called machine learning (ML), which is extremely narrow and learns only through example. These machines do not think independently.
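
To make that narrowness concrete, here is a minimal sketch of what "learning through example" means in practice, using scikit-learn and its bundled iris dataset purely as an illustration: the model learns one mapping from labelled examples, and that is the full extent of its competence.

# A minimal, illustrative sketch of narrow machine learning:
# the classifier learns a single mapping (flower measurements -> species)
# from labelled examples. It cannot reason, plan or do anything else.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)                          # labelled examples
model = DecisionTreeClassifier(random_state=0).fit(X, y)   # learn by example

print(model.predict(X[:1]))  # outputs a species label, nothing more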

In fact, some of the tools that currently market themselves as AI are actually far older than ML. They are based on simpler statistical, expert or logic-based algorithms. However, our cultural overemphasis on intelligence promotes the personification of AI, diminishing human accountability.
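
As a toy illustration of such a logic-based "expert system" (the rules below are entirely hypothetical), the "expertise" is nothing more than hand-written if/then rules. No learning from data is involved, yet systems like this are still sometimes marketed as AI.

def loan_decision(income: float, debt: float, years_employed: int) -> str:
    # Hand-authored rules encode the "expertise"; nothing is learned.
    if years_employed < 1:
        return "decline"
    if debt / max(income, 1.0) > 0.4:
        return "refer to human underwriter"
    return "approve"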

2. The concept of "AI as a black box" is a myth.

ML algorithms can certainly vary in complexity, with some lending themselves more readily to human interpretation than others. That said, a variety of tools and techniques have been developed to probe even the most opaque algorithms and quantify how they respond to different inputs. The catch is that these tools can be too technical for some of an algorithm’s stakeholders.
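
One widely used probing technique is permutation importance: shuffle each input in turn and measure how much the model’s performance degrades. Here is a minimal sketch using scikit-learn; the synthetic data and choice of model are assumptions made purely for illustration.

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real, sensitive dataset.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A relatively opaque ensemble model.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the drop in held-out accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"± {result.importances_std[i]:.3f}")

Partial dependence plots and SHAP values serve a similar purpose. The point is that opacity is a manageable cost, not an immovable fact.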

When an AI system is too poorly understood to be relied upon, it should probably not be deployed in sensitive situations. In such circumstances, further vetting should be performed or behavioural guardrails implemented to ensure the system can be deployed in a way that is clear and safe for users and other stakeholders. "AI as a black box" should never be used as an excuse to absolve human decision-makers of responsibility.
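
A behavioural guardrail can be as simple as refusing to automate low-confidence decisions. Here is a minimal sketch; the function name and threshold are illustrative assumptions, and in practice the threshold would be set per use case with the relevant stakeholders.

CONFIDENCE_THRESHOLD = 0.9  # assumed value; tune per use case and risk appetite

def guarded_decision(model, x):
    """Act on the model's prediction only when it is confident enough;
    otherwise escalate to a human reviewer."""
    proba = model.predict_proba([x])[0]
    confidence = float(proba.max())
    if confidence < CONFIDENCE_THRESHOLD:
        return {"action": "escalate_to_human", "confidence": confidence}
    return {"action": "automate", "decision": int(proba.argmax()),
            "confidence": confidence}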

3. AI is not the first technology to promise both great risk and great opportunity.

Beyond hot-button moral quandaries such as trolley problems, AI now faces another class of ethical questions. These issues are perhaps quieter, or less flashy, but they will ultimately have a broader human impact.

Properly addressing these questions will require lucid, calm and holistic evaluations of AI using the methodology of systems safety, which identifies safety-related risks and uses design or process controls to manage them. Nuclear power, aviation and biomedicine, among many other fields, have become safe and reliable in large part due to the rigorous implementation of such risk-based systems safety frameworks.

Maintaining control of AI’s development

We need to see and analyse this technology as it really is. The simple truth remains that all AI in our current and foreseeable future is composed of ML-based systems, that is, advanced statistical algorithms governed by code and people. These systems can and should be under our control. Risks can be enumerated, mitigated and monitored, transforming a crisis of confusion into a mature technology in service of our shared advancement.
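
Monitoring, in particular, can be made routine. Here is a minimal sketch of input-drift monitoring, using a per-feature two-sample Kolmogorov–Smirnov test from SciPy; the significance level is an assumption, and real monitoring would also track the model’s outputs and downstream outcomes.

import numpy as np
from scipy.stats import ks_2samp

def drift_report(reference: np.ndarray, live: np.ndarray, alpha: float = 0.01):
    """Flag features (columns of 2-D arrays) whose live distribution
    has shifted away from the training (reference) distribution."""
    flagged = []
    for i in range(reference.shape[1]):
        statistic, p_value = ks_2samp(reference[:, i], live[:, i])
        if p_value < alpha:
            flagged.append((i, statistic, p_value))
    return flagged  # non-empty => investigate before trusting the model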

Increasingly, this message is finding a platform, and it’s beginning to shape AI’s development meaningfully. The latest proposed regulations from the European Union, for example, take significant steps in the right direction by defining high-risk use cases. The data science community wants to build models that align with societal values and improve outcomes. This well-thought-out proposal from the EU will enable innovation and industry growth by standardising expectations among practitioners.

Those who pretend we are not capable of governing and responsibly deploying this technology promote a falsehood. The AI industry certainly faces a major challenge in pushing the boundaries of this technology now and in the future. Through partnership, clarity, and pragmatism, we can be ready to face this challenge.
