AI has a bias problem. This is how we can solve it

We are increasingly reliant on AI - but is our fundamental approach to developing this technology misguided? Image: REUTERS/Jason Lee

James Golden
CEO, WorldQuant Predictive
This article is part of: World Economic Forum Annual Meeting

We have become increasingly reliant on artificial intelligence (AI) to help solve some of the world’s most pressing problems. From healthcare to economic modelling to crisis prediction and response, AI is becoming commonplace and, in some cases, integral to how we operate.

While the insights offered by AI are invaluable, we must also recognize that it is not a flawless system that will provide us with perfect answers, as many practitioners would have you believe. Our AI systems are the product of constructed algorithms that have, however inadvertently, inherited many of the biases that help to perpetuate the very global challenges we hope to solve. The result is that AI and machine learning are not purely agnostic processes of objective data analysis.

For these technologies to make progress, confront bias and help tackle significant global problems, we need to rethink our approach to developing AI-enabled tools and move away from systems that impose models of our understanding on data. Instead, we need an iterative approach that can incorporate more perspectives and greater diversity of thought. Models developed from globally distributed intelligence networks may offer a way forward, and with it fresh, less biased approaches to tackling serious world issues.

Deconstructing bias in AI

The problems caused by our systems’ inherent bias have become more apparent as AI has become increasingly integrated into business. We are beginning to understand both the repercussions of using selective datasets and how AI algorithms can incorporate and exacerbate the unconscious biases of their developers.

Take, for example, Amazon’s aborted AI hiring and recruitment system. The system was supposed to help identify potential job candidates by analyzing hiring and employee-success data, and highlighting the applicants who best matched the characteristics of the company’s most productive hires. Perhaps the AI could even use the data to surface traits or qualities of Amazon’s best employees that its HR team hadn’t considered.

What actually happened was that the machine learning algorithms processed 10 years’ worth of applications and perpetuated the very biases Amazon was seeking to counteract: the system rated male applicants more highly than female ones, because trends in the data reflected a historical preference for male candidates. The Amazon case was one of reliance on limited datasets shaped by historical bias, which produced machine learning models that incorporated the biases inherent in their training data.
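
To make the failure mode concrete, here is a minimal, hypothetical sketch (not Amazon’s actual system) of a classifier trained on synthetic hiring data whose historical labels favour one group. The feature names, group encoding and effect sizes are all illustrative assumptions:

```python
# A minimal, hypothetical sketch of a model learning historical hiring bias.
# The data is synthetic; feature names and effect sizes are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)   # two demographic groups, encoded 0 and 1
skill = rng.normal(0, 1, n)     # skill is identically distributed across groups

# Historical hiring decisions: at equal skill, group 1 was favoured.
hired = skill + 0.8 * group + rng.normal(0, 0.5, n) > 0.8

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two candidates with identical skill, differing only in group membership:
print(model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
# The model scores the group-1 candidate higher, reproducing the bias.
```

Although the two candidates are identical on skill, the model has learned from the historical labels that group membership predicts hiring, and it scores them differently.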

The same problems are emerging in other applications of AI as well, and are affecting decisions made on everything from loan applications to criminal sentencing. AI will not be seen as a useful tool to help solve these problems if we do not address the larger systemic issues in our data science development processes.

The flaw in our approach lies in the way we are developing AI systems for commercial application. We are creating algorithms to detect patterns in data, and we often take a top-down approach. As a recent WIRED cover story noted, our AI systems have become increasingly dependent on deep learning, a machine learning technique that has evolved over the past decade. The central challenge of deep learning applied to pattern recognition is that it is beginning to reach a plateau of usefulness for certain classes of problems. Such models are entirely data-dependent, with no evidentiary logic presented as part of their output - which means the AI cannot reason its way to solutions beyond its data, nor explain how it reached a conclusion. Furthermore, if that data is flawed by systematic historical biases, those biases will be replicated at scale. To borrow a phrase: bias in, bias out.

As developers become increasingly aware of these biases, they are attempting to address the situation through human countermeasures. These efforts range from leveraging more diverse datasets that better account for what are seen as outliers to increasing diversity across the industry. As Google UX researcher Vyacheslav Polonski has pointed out in a blog post, current efforts focus on addressing the principles of representation, stewardship, protection and authenticity.
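
To illustrate what such countermeasures look like in practice, here is a hedged sketch of one well-known technique (my example, not one the article names): the ‘reweighing’ method of Kamiran and Calders, which weights each training example so that group membership and outcome become independent in the weighted data. It continues the synthetic group, skill and hired arrays from the sketch above:

```python
# Reweighing (Kamiran & Calders): weight each example by
# P(group) * P(label) / P(group, label), so that group and label are
# statistically independent in the weighted training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweigh(group: np.ndarray, label: np.ndarray) -> np.ndarray:
    w = np.empty(len(label), dtype=float)
    for g in np.unique(group):
        for y in np.unique(label):
            cell = (group == g) & (label == y)
            w[cell] = (group == g).mean() * (label == y).mean() / cell.mean()
    return w

weights = reweigh(group, hired)
X = np.column_stack([skill, group])
fair_model = LogisticRegression().fit(X, hired, sample_weight=weights)

# The score gap between the two otherwise identical candidates narrows:
print(fair_model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1])
```

Reweighting narrows the gap between the two otherwise identical candidates, but it is a patch applied inside the existing pipeline - which is precisely the limitation raised below.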

Still, these efforts are merely trying to fix problems embedded in the system when instead we should be looking to fix the system itself. As we continue down the path of AI reliance, we should pause to consider if our current models for AI development are moving us in the right direction, or if there are in fact other models that might be more productive and less prone to bias.

Rethinking AI

I would argue that the problems in our AI systems are not simply limitations of data or developers’ blind spots, but rather stem from our entire approach to how these systems are built. We have approached AI development from the top-down, largely dictated by the viewpoints of developed nations and first-world cultures. No surprise then that the biases we see in the output of these systems reflect the unconscious biases of these perspectives.

Diversifying data is certainly one step towards alleviating those biases, as it would allow for more globalized inputs that may carry very different priorities and insights. But no amount of diversified data will fix every issue if it is fed into a model with inherent biases.

What’s needed is a system that can account for globally diversified perspectives: a distributed intelligence network that can adjust to new data, incorporate new models of thinking and benefit from cultural diversity. We need to move from labour arbitrage as a key value measure of technology development to ‘idea arbitrage’. At WorldQuant Predictive, a global research and prediction company, our founder Igor Tulchinsky believes that “talent is equally distributed, but opportunity is not”. Likewise, brilliant insights and ideas are global, but they are rarely visible or even represented in our current approach to model building.

Rather than taking top-down approaches that impose a model on data whose context it may not capture, we should approach AI as an iterative, evolutionary system. If we flip the current model so that it is built up from the data rather than imposed upon it, we can develop an evidence-based, idea-rich approach to building scalable AI systems. The results could provide insights and understanding beyond our current modes of thinking.

The other advantage of such a bottom-up approach is that the system could be far more flexible and reactive, adapting as the data changes and as new perspectives are incorporated. Consider the system as a scaffold of incremental insights: should any piece prove inadequate, the entire system does not fail. We could also account for much more diversified input from around the globe, developing iterative signals that build into cumulative models to which the AI can respond.
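
As a toy illustration only (an assumed design, not WorldQuant Predictive’s actual architecture), such a scaffold might combine many small, independently built ‘signals’, with the aggregate degrading gracefully when any single piece proves inadequate:

```python
# An illustrative scaffold of signals: each signal is a small model
# contributing one estimate; a failing signal is skipped, not fatal.
from statistics import median
from typing import Callable, Optional, Sequence

Signal = Callable[[dict], float]

def aggregate(signals: Sequence[Signal], observation: dict) -> Optional[float]:
    votes = []
    for signal in signals:
        try:
            votes.append(signal(observation))
        except Exception:
            continue  # one inadequate piece does not sink the system
    return median(votes) if votes else None

# Hypothetical signals built from different regional data and perspectives:
signals = [
    lambda obs: obs["local_price"] * 1.02,
    lambda obs: obs["regional_avg"],
    lambda obs: obs["survey_estimate"],  # missing input: skipped gracefully
]
print(aggregate(signals, {"local_price": 100.0, "regional_avg": 103.5}))
```

New signals - drawn from new datasets or new regional perspectives - can be appended without redesigning the whole system, which is what makes the scaffold iterative and cumulative rather than top-down.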

The global problems we face today are unprecedented. They require recognition of new types of data, new methods of understanding information and new modes of thinking. AI is one of the best potential tools humanity has to confront these present and future challenges, but only if we don’t reinforce the mistakes of our past.
