Emerging Technologies

Why embracing human rights will ensure AI works for all

A robot, built by a German artificial intelligence centre, operates a switchboard at the CeBIT computer fair, Hanover, Germany. Image: REUTERS/Fabrizio Bensch

Sherif Elsayed-Ali
Director, AI for Climate, Element AI

Dilbert: Wow! According to my computer simulation, it should be possible to create new life forms from common household chemicals.
Dogbert: This raises some thorny issues.
Dilbert: You mean legal, ethical and religious issues?
Dogbert: I was thinking about parking spaces.

Artificial intelligence is a wonderful field of technology. This combination of maths, data and cognitive science is being used to improve health diagnostics, reduce power consumption and discover novel insights in numerous sectors. It’s also portrayed as an existential threat to humanity and a harbinger of a dystopian future of social engineering and robot overlords. If the benefits of AI are over-hyped, so are the doomsday scenarios. But these extremes reflect the real choice facing us: either we deliberately steer this powerful set of technologies to benefit humanity, or we cross our fingers and hope for the best.

Some of the challenges posed by AI are unique: the ability of software and machines to make complex decisions without human input (past the design and development phase) is probably the most important, and the one that scares people the most. But other challenges have existed with software and data systems for a long time. The current fascination with this technology, and the fact that businesses are putting a lot of money and effort into taking advantage of it, is an opportunity to tackle the human rights challenges not only around AI, but also around data and digital technology more broadly.

The impact of AI on human rights has got a few people thinking; their thoughts have culminated in a set of recommendations on how to prevent discriminatory outcomes in machine learning (a subset of the field, and today’s dominant AI technology).

The recommendations, published in a white paper in March this year, focus on the business world’s responsibility in developing and deploying machine learning, and provide a set of principles based on the UN Guiding Principles on Business and Human Rights. There are four of them:

1. Active inclusion

When AI systems are developed, they should take account of who will be affected by them. For example, medical diagnostics is a very promising area where AI could speed up and improve how diseases are detected. But such systems should take into account differences between specific populations, age groups and environments. Equally, diversity in the teams developing these technologies is important for producing technology that works for more people.

2. Fairness

Machine learning relies on very large amounts of data. A machine-learning system is developed using what is called training data: large data sets (for example of images, financial records or personal information) that are labelled and fed into an algorithm so that it is “trained” to recognize patterns or find trends that might otherwise be invisible. But as we know, there are numerous historical and present biases in data – from over-policing of minority groups to discrimination in access to loans. If these biases are not corrected, and algorithms are not specifically designed to counter them, there is a very high risk that an AI system will amplify existing discrimination.
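To make this concrete, here is a minimal sketch in Python of one crude bias check: comparing favourable-outcome rates across groups in a set of historical decisions, before any model is trained on them. The records, group names and the loan-approval framing are invented for illustration; a real audit would rely on established fairness toolkits and domain expertise.

```python
# Hypothetical sketch: measuring one crude notion of bias in training labels.
# The records and group names below are invented for illustration only.
from collections import defaultdict

# Toy historical loan decisions: (applicant_group, was_approved)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Approval rate per group: a model trained on these labels will tend to
# reproduce whatever disparity they already contain.
counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
for group, approved in records:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approvals / total for group, (approvals, total) in counts.items()}
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# A large gap between the best- and worst-treated groups in the historical
# labels is a red flag to investigate before any model is trained on them.
gap = max(rates.values()) - min(rates.values())
print(f"approval-rate gap: {gap:.2f}")  # approval-rate gap: 0.50
```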

3. Right to understanding

AI is not only being used to help decide on loans, insurance claims and job applications; it is also increasingly used in public functions, including in the justice system. For example, in the US, courts and corrections departments are using AI to aid decisions about bail, sentencing and parole. Yet these algorithms are often a “black box”, meaning their inner workings are not known to the people using them, because they are proprietary pieces of software developed by someone else. If an algorithm is going to influence whether you get a mortgage, get a job interview, or – if you are in the unfortunate situation of having a brush with the law – whether you get bail, you would want to be sure its assessments are fair. This is impossible if the people using it (a bank, an employer, a court) have no idea how it works. We should always know when AI is aiding or making a decision about us, and we should be able to get an explanation of why it was made.
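As an illustration of what a “right to understanding” could look like in practice, here is a minimal, hypothetical sketch: a transparent linear scoring model whose per-feature contributions can be read off and reported alongside the decision. The weights, intercept and applicant features are invented; real proprietary systems are usually far more complex, which is precisely what makes explanations hard to extract.

```python
# Hypothetical sketch of the kind of per-decision explanation a transparent
# system could surface. The weights and applicant features are invented.

# A linear scoring model is one of the simplest "explainable" forms:
# each feature's contribution to the final score can be read off directly.
weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
intercept = -0.2

applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.4}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = intercept + sum(contributions.values())
decision = "approve" if score > 0 else "deny"

print(f"decision: {decision} (score {score:+.2f})")  # decision: deny (score -0.50)
# Rank the factors that drove the decision, so the affected person can be
# told *why*, not just *what*, was decided.
for name, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {name}: {value:+.2f}")
```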

4. Access to redress

Finally, and building on the right to understanding, we should not lose our right to effectively challenge a decision we believe to be unfair simply because decision-making in commercial or public functions is automated. We should have an adequate remedy when a wrong decision has been made, and there should be clear procedures for appealing such decisions. AI developers should regularly monitor and test their applications to identify any discriminatory trends in their use, and correct them.
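As a sketch of what such monitoring might look like, the hypothetical Python audit below scans a log of automated decisions and flags groups whose favourable-outcome rate trails the best-off group by more than a chosen threshold. The group names, the log and the 10% threshold are all illustrative assumptions, not a prescribed method.

```python
# Hypothetical sketch of a recurring audit over a deployed system's decision
# logs. Group names, the log and the 10% threshold are invented assumptions.

def audit_outcomes(decisions, threshold=0.10):
    """decisions: iterable of (group, favourable) pairs from recent logs.

    Returns the groups whose favourable-outcome rate trails the best-off
    group by more than `threshold`, i.e. cases warranting human review
    and, where decisions were wrong, redress.
    """
    totals, favourable = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        favourable[group] = favourable.get(group, 0) + int(ok)
    rates = {g: favourable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if best - rate > threshold}

# Example run over a week of logged decisions:
log = ([("group_a", True)] * 80 + [("group_a", False)] * 20
       + [("group_b", True)] * 55 + [("group_b", False)] * 45)
print(audit_outcomes(log))  # {'group_b': 0.55} -- trails group_a's 0.80
```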

As ethical principles for the development and use of AI are debated and developed, it’s critical that they build on the human rights framework, which is the only internationally agreed framework protecting freedom, justice, dignity and equality. The Universal Declaration of Human Rights, whose 70th anniversary we celebrate this year, anchors seven decades of practice and evolving standards – it should be the core around which AI ethics are defined and developed.

In the dialogue at the top of this piece, Dogbert pays no attention to the legal and ethical implications of creating artificial life forms, worrying instead about how to deal with the potential increase in demand for parking spaces. This is the choice businesses face: focus on the bottom line and short-term goals, or take the long view and make sure that the technologies they develop benefit society. This choice is what will define our future.
