We’re failing at the ethics of AI. Here’s how we make real impact
The rollout of AI technology is full steam ahead, and its ethics urgently need to catch up.
- The global COVID-19 crisis has acted as a worldwide accelerator for the rollout of artificial intelligence (AI) initiatives.
- The ethics and governance of AI systems are unclear.
- We need to make progress on three main issues to have real impact.
The global COVID-19 crisis has acted as a worldwide accelerator for the rollout of AI initiatives. Deployments that would otherwise have taken five years have happened in six months.
While the pandemic has embedded AI into our lives at lightspeed, it has also amplified the urgency of understanding the rules and ethics that govern it. For instance, technology companies tasked with biometric tracking and tracing applications now hold massive amounts of our personal biodata, with no clear rules on what to do with it or how to protect it.
As a result, companies and stakeholders find themselves putting out fires, such as cybersecurity failures, prolific dis- and misinformation and indiscriminate data sales, all of which were largely preventable.
With such high stakes, why aren’t the rules and governance of AI systems clearer?
It’s not for lack of effort. Over the last few years, a surge of principles and guidance to support the responsible development and use of AI has emerged, yet it has produced little significant change.
For impact, we must drill down on three main issues:
1. Broaden the existing dialogues around the ethics and rules of the road for AI.
The conversation about AI and ethics needs to be cracked wide open to understand the subtleties and life cycle of AI systems and their impacts at each stage.
Too often, these discussions do not reach far enough, focusing solely on the development and deployment stages of the life cycle, even though many of the problems originate in the earlier stages of conceptualization, research and design.
Or they fail to consider whether and when an AI system will reach the maturity required to avoid failure within complex adaptive systems.
Another problem is that companies and stakeholders may focus on the theater of ethics, seeming to promote AI for good while ignoring aspects that are more fundamental and problematic. This is known as "ethics washing," or creating a superficially reassuring but illusory sense that ethical issues are being addressed to justify pressing forward with systems that end up deepening problematic patterns.
Let transparency dictate ethics. There are many tradeoffs and grey areas in this conversation; let’s lean into that complexity.
While "ethics talk" is often about underscoring the differing tradeoffs that correspond with various courses of action, true ethical oversight rests on addressing what’s not being accommodated by the options selected.
This vital – and often overlooked – part of the process is a stumbling block for those trying to address the ethics of AI.
2. The talk about AI ethics is not being translated into meaningful action.
Too often, those in charge of developing, embedding and deploying AI systems fail to understand how they work or what potential they might have to shift power, perpetuate existing inequalities and create new ones.
Overstating the capabilities of AI is a well-known problem in AI research and machine learning, and it has bred complacency about understanding the actual problems these systems are designed to solve and about identifying potential problems downstream. The belief that an incompetent or immature AI system, once deployed, can be remedied by a human on the loop, or the assumption that some antidote exists, particularly in cybersecurity contexts, is an erroneous and potentially dangerous illusion.
We see this lack of comprehension demonstrated in our decision-makers, who fall for a myopic, tech-determinist narrative and apply tech-solutionist and optimization approaches to global, industry and societal challenges. They’re often blinded by what's on offer rather than focused on what the problem actually requires.
To truly understand the ethics of AI, we need to listen to a much more inclusive cast of experts and stakeholders, including those who grasp the potential downstream consequences and limitations of AI: the environmental impact of the resources required to build, train and run an AI system; its interoperability with other systems; and the feasibility of safely and securely interrupting it.
3. The dialogue about AI and ethics is confined to the ivory tower.
Concepts such as ethics, equality and governance can seem lofty and abstract. We need to ground the AI conversation in questions of meaningful responsibility and culpability.
It is a mistake to assume that AI systems are apolitical by nature, especially when they are embedded in situations or confronted with tasks they were not created or trained for.
Structural inequalities are commonplace: the predictive algorithms used in policing, for example, are often demonstrably biased. What’s more, the people most vulnerable to negative impacts are often not empowered to engage.
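To make that feedback dynamic concrete, here is a minimal, hypothetical sketch, not drawn from this article or from any real policing system, of how allocating patrols according to historical records can entrench an initial recording disparity between two districts with identical underlying incident rates. Every name and number in it (TRUE_RATE, the starting record counts) is an illustrative assumption.

```python
# Hypothetical toy model: two districts with the SAME true incident rate,
# where district A merely starts with more historical records. Patrols are
# allocated where past records are densest, and patrols generate new records,
# so the initial imbalance tends to persist rather than correct itself.
import random

TRUE_RATE = 0.1               # assumed identical incident rate in both districts
records = {"A": 20, "B": 10}  # assumed historical counts: A is over-recorded

random.seed(0)
for day in range(1000):
    # "Predictive" step: patrol a district with probability proportional
    # to its share of all past records.
    total = records["A"] + records["B"]
    patrolled = "A" if random.random() < records["A"] / total else "B"
    # Feedback step: incidents are only recorded where patrols are sent.
    if random.random() < TRUE_RATE:
        records[patrolled] += 1

print(records)  # A's initial over-recording typically persists and grows
```

The point of the toy model is that the system confirms its own prior: more patrols produce more records, which in turn justify more patrols, regardless of the underlying reality.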
Part of the solution and the challenge is finding a shared language for re-conceptualizing ethics for unfamiliar and implicit tensions and tradeoffs. Large-scale technological transformations have always led to deep societal, economic and political change, and it’s always taken time to figure out how to talk about it publicly and establish safe and ethical practices.
However, we’re pressed for time.
Let’s work together to re-envision ethics for the information age and cut across siloed thinking to strengthen lateral and scientific intelligence and discourse. Our only viable course is a practical and participatory ethic: one that ensures transparency, ascribes responsibility and prevents AI from being used in ways that dictate rules and potentially cause serious harm.
Anja Kaspersen and Wendell Wallach are senior fellows at Carnegie Council for Ethics in International Affairs. Together, with an international advisory board, they direct the Carnegie Artificial Intelligence and Equality Initiative (AIEI), which seeks to understand the innumerable ways in which AI impacts equality, and in response, propose potential mechanisms to ensure the benefits of AI for all.