Emerging Technologies

Building fairer data systems: Lessons from addressing racial bias in healthcare

The lessons we're learning from addressing racial bias in healthcare algorithms can and should inform how we approach data equity across other sectors.

Image: Unsplash/Hush Naidoo Jade Photography

JoAnn Stonier
Mastercard Fellow, Data and Artificial Intelligence (AI), Mastercard
Lauren Woodman
CEO, DataKind
Karla Yee Amezaga
Lead, Data Policy, World Economic Forum
  • Historical algorithm bias in healthcare reveals a critical need for data equity.
  • A study found that correcting racial bias in healthcare could increase the percentage of Black patients receiving additional care from 17.7% to 46.5%.
  • Now, a new framework from the World Economic Forum’s Global Future Council on Data Equity guides the creation of fairer data systems across industries.

In an era where data-driven decision-making increasingly shapes our world, ensuring fairness and equity in these systems has never been more critical. When AI models and other data systems are trained on data that reflects historical biases, those biases can be perpetuated in the systems' outputs.

A report by the World Economic Forum’s Global Future Council on Data Equity proposes a new framework for defining and implementing data equity as a response to the risk of perpetuating old biases in new technology.

The hidden bias in healthcare algorithms

A startling example of such bias came from the analysis of an algorithm used in the US healthcare system. The research revealed that the algorithm had been systematically disadvantaging Black patients in need of complex care.

The root cause lay in the data used to train the algorithm: past healthcare spending was used as a proxy for health need, and because income inequality and other barriers to healthcare access meant Black patients had historically received less expensive treatments, the algorithm systematically underestimated how sick they were.

The study concluded that correcting this bias could increase the percentage of Black patients receiving additional care from 17.7% to 46.5%. This case underscores the critical need for rigorous algorithm auditing and cross-sector collaboration to eliminate such biases in decision-making processes.
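To make this concrete, here is a minimal, hypothetical Python sketch of the kind of calibration audit that can surface such bias: comparing a direct measure of health need across groups of patients who received the same algorithmic risk score. The column names and synthetic data are illustrative assumptions, not the study's actual variables.

```python
# Hypothetical calibration audit: at equal risk scores, is one group sicker?
# All names and data below are illustrative, not from the cited study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000

# Synthetic patient records: an algorithmic risk score, a group label and a
# direct measure of health need (e.g. number of active chronic conditions).
patients = pd.DataFrame({
    "risk_score": rng.uniform(0, 100, n),
    "group": rng.choice(["A", "B"], n),
    "chronic_conditions": rng.poisson(3, n),
})

# Bucket patients into risk-score deciles, then compare mean health need
# across groups within each decile. If one group is consistently sicker at
# the same score, the score is a biased proxy for need.
patients["score_decile"] = pd.qcut(patients["risk_score"], 10, labels=False)
audit = (
    patients.groupby(["score_decile", "group"])["chronic_conditions"]
    .mean()
    .unstack("group")
)
audit["need_gap"] = audit["B"] - audit["A"]
print(audit)
```

A persistent, one-sided gap in a table like this is exactly the kind of signal the researchers reported: patients from the disadvantaged group were sicker than others assigned the same risk score, because spending, not sickness, was being scored.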


Defining data equity: A shared responsibility

There is currently no globally agreed standard or definition of data equity. To make progress on the problem of bias, the Forum’s Global Future Council on Data Equity has drawn up a comprehensive definition:

“Data equity can be defined as the shared responsibility for fair data practices that respect and promote human rights, opportunity and dignity. Data equity is a fundamental responsibility that requires strategic, participative, inclusive and proactive collective and coordinated action to create a world where data-based systems promote fair, just and beneficial outcomes for all individuals, groups and communities. It recognizes that data practices – including collection, curation, processing, retention, analysis, stewardship and responsible application of resulting insights – significantly impact human rights and the resulting access to social, economic, natural and cultural resources and opportunities.”

This definition highlights a crucial point: data equity isn't just about the numbers. It's about how those numbers impact real people's lives. It encompasses the entire journey of data, from how it's collected and processed to how it's used and who benefits from its insights.


A framework for implementation

Implementing data equity requires a structured approach. The data equity framework, developed by the Global Future Council, offers a tool for reflection, research and action. It's built on three main pillars: data, people and purpose.

The data equity framework is built around three core pillars. Image: Global Future Council on Data Equity

The three core pillars of the framework break down as follows:

The nature of the data: This pillar focuses on how sensitive the information is and who can access it. For instance, health records are highly sensitive and require strict protection.

The purpose of data use: This pillar considers factors such as trustworthiness, value, originality and application. Is the data being used ethically? Does it provide genuine value to society?

The people involved: This pillar addresses the relationships between data collectors, processors and subjects, and the responsibilities that come with data handling. It also considers the expertise of those handling the data and, crucially, who's held accountable for its use.

Putting theory into practice

If we return to the example of problematic healthcare data above, we can see how these principles might be applied. The Global Future Council’s report makes the following initial recommendations for removing bias:

  • Input stage: Collect more comprehensive health data, including direct measures of health status and barriers to healthcare access. Audit input variables for potential proxy discrimination (a minimal sketch of such an audit follows this list).
  • Process stage: Maintain transparency in data collection and algorithmic scoring processes.
  • Output stage: Regularly audit the impact of algorithmic decisions on patient outcomes across different racial groups. Empower clinicians to flag potentially biased or incorrect predictions.
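As flagged in the input-stage recommendation above, here is a minimal, hypothetical Python sketch of a first-pass proxy-discrimination audit. The function, feature names and threshold are illustrative assumptions; a real audit would use stronger measures, such as mutual information or a model that tries to predict the protected attribute from each candidate feature.

```python
# Hypothetical input-stage audit: flag features that may act as proxies
# for a protected attribute. Assumes a binary protected attribute.
import pandas as pd

def proxy_audit(df: pd.DataFrame, features: list[str],
                protected: str, threshold: float = 0.3) -> pd.DataFrame:
    """Rank candidate input features by association with a protected group.

    Uses absolute Pearson correlation against a 0/1 encoding of the
    protected attribute as a cheap first-pass signal of proxy risk.
    """
    encoded = (df[protected] == df[protected].unique()[0]).astype(int)
    rows = []
    for col in features:
        assoc = abs(df[col].corr(encoded))
        rows.append({"feature": col, "association": assoc,
                     "proxy_risk": assoc >= threshold})
    return pd.DataFrame(rows).sort_values("association", ascending=False)

# Illustrative usage: past spending can encode unequal access to care,
# making it a race proxy even when race itself is never an input.
# report = proxy_audit(claims_df, ["past_cost", "num_visits"], "race")
```

Flagged features are not automatically disqualified; the point is to force an explicit, documented decision about whether a correlated variable measures health need or merely measures access to care.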
Algorithm auditing to improve access to healthcare. Image: World Economic Forum

The path forward

Building fairer data systems is not a one-time fix – it's an ongoing journey that demands monitoring, collaboration and an unwavering commitment to equity at every step. The lessons we're learning from addressing racial bias in healthcare algorithms can and should inform how we approach data equity across other sectors.

Contributing authors: Simon Torkington, Senior Writer, Forum Agenda; and Stephanie Teeuwen, Specialist, Data Policy and AI, Centre for the Fourth Industrial Revolution, World Economic Forum.


