How to make sure the future of AI is ethical
Facial recognition provides a window into the ethics of artificial intelligence, writes Mike Loukides. Image: REUTERS/Bobby Yip
A few weeks ago, I wrote a post on the ethics of artificial intelligence. Since then, we've been presented with an excellent example to reflect on: the use of face recognition to identify people likely to commit crimes. (There have been a number of articles about this research; I'll only link to this one.)
In my post, I said that we need to discuss what kind of society we want to build. I'm reasonably confident that, even under the worst societal conditions, we don't want a society where you can be imprisoned because your eyes are set too closely together. The article in New Scientist shows that researchers are making the right objections: the training data for criminals and non-criminals was taken from two different sources; ethnicity issues may be at play; and we're in danger of making AI into "21st century phrenology," or "mathwashing."
I also said that an AI developer can choose what projects to work on, but that it's important that research not go behind closed doors, becoming opaque to the public and leaving everyone outside those doors vulnerable to whatever happens inside. That leads me to suggest going a few steps further. While researchers and developers can certainly choose not to participate in projects they object to, there are useful ways to go beyond non-involvement:
- Some researchers have worked on ways to use hair style, coloring, and other cosmetics to defeat face recognition. That's certainly a constantly escalating battle: what works now probably won't work a year from now. But more importantly, it requires understanding what face recognition is doing and how it works, and making that public knowledge.
- Abe Gong's work on COMPAS and Cathy O'Neil's work on data-driven teacher evaluation expose the machinery by which math-driven bias works. Gong's distinction between the statistical and human definitions of "bias" is particularly important: it's easy to be statistically unbiased while humanly unfair, as the sketch after this list illustrates. O'Neil points out that it's easy to create systems in which you can only win by gaming the system, and that people who try to play fair inevitably lose. We need many more researchers doing work like this: we need to understand how machine learning and AI are used and what the consequences are, and make that public knowledge.
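To make Gong's distinction concrete, here is a minimal Python sketch. The confusion-matrix counts are invented for illustration (they are not COMPAS figures): a hypothetical risk score that is "statistically unbiased" in the calibration sense, with identical precision for both groups, while flagging innocent members of one group four times as often as the other.

```python
# Hypothetical confusion-matrix counts for two groups scored by the
# same risk model. The numbers are invented for illustration; they
# are not COMPAS data.
groups = {
    "Group A": {"tp": 400, "fp": 100, "fn": 100, "tn": 400},  # base rate 50%
    "Group B": {"tp": 160, "fp": 40,  "fn": 40,  "tn": 760},  # base rate 20%
}

for name, c in groups.items():
    # Precision: of the people the model flags, how many reoffend?
    # This is the "statistically unbiased" metric: equal for both groups.
    precision = c["tp"] / (c["tp"] + c["fp"])
    # False positive rate: how often is an innocent person flagged?
    # This is where the human unfairness shows up: unequal across groups.
    fpr = c["fp"] / (c["fp"] + c["tn"])
    print(f"{name}: precision = {precision:.0%}, innocents flagged = {fpr:.0%}")

# Group A: precision = 80%, innocents flagged = 20%
# Group B: precision = 80%, innocents flagged = 5%
```

A score like this passes the usual statistical audit, since flagged people in both groups reoffend at the same rate, yet an innocent person in Group A is four times as likely to be wrongly flagged as an innocent person in Group B.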
So, researchers who opt out can also choose to actively subvert the system, or they can work to expose its built-in flaws. Both functions are necessary.
As New Scientist points out, "the majority of U.S. police departments using face recognition do little to ensure that the software is accurate." Police departments have neither the expertise nor the inclination to critically evaluate software that claims to make their jobs easier. "This is magic that will make your job easier" is a tempting sales pitch for people who are already doing a hard job. It's way too easy for an uninformed official to fantasize about AI systems that will detect terrorists. It takes someone who isn't ignorant about AI to point out the problems with such a proposal, not the least of which is that the number of terrorists is so small that it would be impossible to build a good data set for training. And even with good training data, it's very hard to imagine a system with fewer than 5% false positives; at that rate, roughly 16 million Americans (and roughly 370 million people worldwide) would be wrongly flagged, and such an error-prone system would be worse than useless.
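To make that arithmetic concrete, here is a back-of-the-envelope sketch in Python. The 5% false positive rate and the population figure come from the paragraph above; the number of actual terrorists and the detector's 99% hit rate are invented assumptions for illustration.

```python
# Back-of-the-envelope base-rate arithmetic for a hypothetical
# terrorist-detection system. All inputs are rough assumptions.
US_POPULATION = 320_000_000   # rough U.S. population
ACTUAL_TERRORISTS = 1_000     # invented; the true number is tiny and unknown
FALSE_POSITIVE_RATE = 0.05    # the optimistic 5% from the text
TRUE_POSITIVE_RATE = 0.99     # grant the detector a near-perfect hit rate

innocents = US_POPULATION - ACTUAL_TERRORISTS
false_alarms = innocents * FALSE_POSITIVE_RATE    # innocent people flagged
caught = ACTUAL_TERRORISTS * TRUE_POSITIVE_RATE   # terrorists flagged

precision = caught / (caught + false_alarms)
print(f"Innocent people flagged: {false_alarms:,.0f}")                 # ~16 million
print(f"Share of flagged people who are terrorists: {precision:.4%}")  # ~0.006%
```

Even granting the detector a near-perfect hit rate, fewer than one in ten thousand of the people it flags would actually be a terrorist; the other 16 million are innocent people under suspicion.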
Staying away from problem topics is never an answer; more than ever, we need AI researchers who are committed to building the future we want, rather than the future we're likely to get. That includes researchers who are actively trying to defeat AI systems as well as researchers who are exposing their inadequacies. Neither group can work from a position of ignorance. Doing so guarantees that we will be the victims, rather than the beneficiaries, of AI.