Emerging Technologies

How to make sure the future of AI is ethical


Facial recognition provides a window into the ethics of artificial intelligence, writes Mike Loukides. Image: REUTERS/Bobby Yip

Mike Loukides
Vice President of Content Strategy, O'Reilly Media, Inc

A few weeks ago, I wrote a post on the ethics of artificial intelligence. Since then, we've been presented with an excellent example to reflect on: the use of face recognition to identify people likely to commit crimes. (There have been a number of articles about this research; I'll only link to this one.)

In my post, I said that we need to discuss what kind of society we want to build. I'm reasonably confident that, even under the worst societal conditions, we don't want a society where you can be imprisoned because your eyes are set too closely together. The article in New Scientist shows that researchers are making the right objections: the training data for criminals and non-criminals was taken from two different sources; ethnicity issues may be at play; and we're in danger of making AI into "21st century phrenology," or "mathwashing."
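The two-source objection is worth making concrete. If every "criminal" image comes from one source and every "non-criminal" image from another, a classifier can score well by learning artifacts of the source rather than anything about the face. Here is a minimal synthetic sketch of that failure mode (NumPy and scikit-learn, with entirely made-up data; the "source artifact" column is a stand-in for things like brightness or compression differences between photo collections):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Pretend the "facial features" carry no signal about the label at all.
facial_features = rng.normal(size=(n, 10))

# Labels: half "criminal", half "non-criminal".
labels = np.repeat([1, 0], n // 2)

# The confound: each label was collected from a different source
# (e.g., ID photos vs. professional headshots), which shifts one
# measurable property such as average brightness or compression.
source_artifact = np.where(labels == 1,
                           rng.normal(0.5, 1.0, n),
                           rng.normal(-0.5, 1.0, n))

X = np.column_stack([facial_features, source_artifact])
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Accuracy:", clf.score(X_test, y_test))              # well above chance
print("Weight on the source artifact:", clf.coef_[0][-1])  # dominates the model
# The model "predicts criminality" only because it learned which source
# the photo came from -- the faces themselves are pure noise.
```

The point is not that the researchers did exactly this, but that a model trained on mismatched sources can look impressive while learning nothing about people at all.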

The AI landscape. Image: CB Insights

I also said that an AI developer can choose which projects to work on, but that it's important that research not go behind closed doors, becoming opaque to the public and leaving everyone outside those doors vulnerable to whatever happens inside. That leads me to suggest going a few steps further. While researchers and developers can certainly choose not to participate in projects they object to, there are useful ways to go beyond non-involvement:

  • Some researchers have worked on ways to use hair style, coloring, and other cosmetics to defeat face recognition. That's certainly a constantly escalating battle: what works now probably won't work a year from now. But more importantly, it requires understanding what face recognition is doing and how it works, and making that knowledge public.
  • Abe Gong's work on COMPAS and Cathy O'Neil's work on data-driven teacher evaluation expose the machinery by which math-driven bias works. Gong's distinction between the statistical and human definitions of "bias" is particularly important: it's easy to be statistically unbiased while humanly unfair (a small numerical sketch follows this list). O'Neil points out that it's easy to create systems in which you can only win by gaming the system, and that people who try to play fair inevitably lose. We need many more researchers doing work like this: we need to understand how machine learning and AI are used and what the consequences are, and make that public knowledge.
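To make the statistical-versus-human distinction concrete, here's a rough illustration with invented numbers (not the actual COMPAS data). A risk score can be statistically honest for two groups and still flag far more innocent people in one group than the other, simply because the groups have different base rates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

def group_stats(alpha, beta):
    """Simulate one group with a score that is calibrated by construction:
    each person's outcome is drawn as Bernoulli(score)."""
    score = rng.beta(alpha, beta, n)
    reoffends = rng.random(n) < score
    flagged = score >= 0.5                        # the "high risk" label
    false_positive_rate = flagged[~reoffends].mean()
    return reoffends.mean(), false_positive_rate

base_a, fpr_a = group_stats(2, 2)   # hypothetical group A, base rate ~0.50
base_b, fpr_b = group_stats(1, 4)   # hypothetical group B, base rate ~0.20

print(f"Group A: base rate {base_a:.2f}, false positive rate {fpr_a:.2f}")  # ~0.50, ~0.31
print(f"Group B: base rate {base_b:.2f}, false positive rate {fpr_b:.2f}")  # ~0.20, ~0.03
# The score means exactly what it says for both groups (P(reoffend | score) = score),
# yet people in group A who will never reoffend are flagged roughly ten times as often.
```

That gap is the "humanly unfair" part: no single statistic is being fudged, but the burden of being wrongly labeled high-risk falls overwhelmingly on one group.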

So, researchers who opt out can also choose to actively subvert the system, or they can work to expose the flaws built into the system. Both functions are necessary.

As New Scientist points out, "the majority of U.S. police departments using face recognition do little to ensure that the software is accurate." Police departments have neither the expertise nor the inclination to critically evaluate software that claims to make their jobs easier. "This is magic that will make your job easier" is a tempting sales pitch for people who are already doing a hard job. It's way too easy for an uninformed official to fantasize about AI systems that will detect terrorists. It takes someone who isn't ignorant about AI to point out the problems with such a proposal, not the least of which is that the number of terrorists is so small that it would be impossible to build a good data set for training. And even with good training data, it's very hard to imagine a system with fewer than 5% false positives (roughly 16 million Americans, roughly 370 million people worldwide)—and such an error-prone system would be worse than useless.
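The arithmetic behind that claim is the base rate problem. Even granting the system a generous accuracy, the tiny number of actual terrorists means nearly every alert is a false alarm. A back-of-the-envelope calculation, with all inputs being illustrative assumptions rather than real figures:

```python
# Back-of-the-envelope base-rate calculation (all numbers are assumptions).
population = 330_000_000        # roughly the U.S. population
actual_terrorists = 1_000       # a deliberately generous guess
false_positive_rate = 0.05      # the 5% figure from the text
true_positive_rate = 0.99       # assume the detector almost never misses

flagged_innocent = (population - actual_terrorists) * false_positive_rate
flagged_guilty = actual_terrorists * true_positive_rate

print(f"Innocent people flagged: {flagged_innocent:,.0f}")   # ~16.5 million
print(f"Actual terrorists flagged: {flagged_guilty:,.0f}")   # ~990
precision = flagged_guilty / (flagged_guilty + flagged_innocent)
print(f"Chance a flagged person is a real threat: {precision:.4%}")  # ~0.006%
```

A system where more than 99.99% of alerts point at innocent people isn't a screening tool; it's a machine for generating suspects.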

Staying away from problem topics is never an answer; more than ever, we need AI researchers who are committed to building the future we want, rather than the future we're likely to get. That includes researchers who are actively trying to defeat AI systems as well as researchers who are exposing their inadequacies. Neither group can work from a position of ignorance. Doing so guarantees that we will be the victims, rather than the beneficiaries, of AI.
