World Economic Forum Unveils New Principles to Make Machine Learning More Human

Published 13 Mar 2018

Oliver Cann, Public Engagement, Tel.: +41 79 799 3405; oca@weforum.org

· New white paper by the World Economic Forum’s Global Future Council on Human Rights provides a framework to help developers prevent discriminatory outcomes in machine learning

· Paper draws on research and interviews carried out over a number of months and is designed as a tool for all businesses looking to employ artificial intelligence and automated decision-making

· Download the full report here

Geneva, 13 March 2018 – Strong standards are urgently needed to prevent discrimination and marginalization of humans in artificial intelligence. This is the finding of a new white paper, How to Prevent Discriminatory Outcomes in Machine Learning, published today by the World Economic Forum’s Global Future Council on Human Rights.

The paper has been produced after a long consultation period and is based on research and interviews with industry experts, academics, human rights professionals and others working at the intersection of machine learning and human rights. The key recommendation for developers and all businesses looking to use machine learning is to prioritize non-discrimination by adopting a framework based on four guiding principles: active inclusion; fairness; right to understanding; and access to redress.

Recent examples of how machine learning can produce discriminatory outcomes include the following (a brief illustrative check follows the list):

· Loan services – applicants from rural backgrounds, who have less access to digital infrastructure, could be unfairly excluded by algorithms trained on data points captured from more urban populations

· Criminal justice – the underlying data used to train an algorithm may be biased, reflecting a history of discrimination

· Recruitment – screening algorithms might filter out applicants from lower-income backgrounds or those who attended less prestigious schools, based on proxies such as educational attainment
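The loan-services failure above can be made concrete with a basic fairness audit. The sketch below is illustrative only and not drawn from the white paper: it compares approval rates between two hypothetical groups (rural and urban applicants) and flags the model under the widely cited "four-fifths rule". The data, group labels, and 0.8 threshold are all assumptions for the example.

```python
# Minimal disparate-impact check for a hypothetical loan-approval model.
# Everything here (data, group labels, the 0.8 threshold) is illustrative;
# the white paper recommends guiding principles, not this specific test.

from collections import defaultdict

def approval_rates(decisions):
    """Compute per-group approval rates from (group, approved) pairs."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def disparate_impact(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Hypothetical audit data: (group, was_loan_approved)
decisions = [("rural", True), ("rural", False), ("rural", False),
             ("urban", True), ("urban", True), ("urban", False)]

ratio = disparate_impact(decisions, protected="rural", reference="urban")
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common "four-fifths rule" threshold
    print("Potential discriminatory outcome: review training data and features.")
```

On this toy data the rural approval rate is one third and the urban rate two thirds, giving a ratio of 0.50 and triggering the warning. A check of this kind addresses only one narrow notion of fairness; the paper's principles of active inclusion, right to understanding and access to redress go well beyond what any single metric can capture.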

“We encourage companies working with machine learning to prioritize non-discrimination along with accuracy and efficiency to comply with human rights standards and uphold the social contract,” said Erica Kochi, Co-Chair of the Global Future Council on Human Rights and Co-Founder of UNICEF Innovation.

“One of the most important challenges we face today is ensuring we design positive values into systems that use machine learning. This means deeply understanding how and where we bias systems and creating innovative ways to protect people from being discriminated against,” said Nicholas Davis, Head of Society and Innovation, Member of the Executive Committee, World Economic Forum.

The white paper is part of a broader workstream within the Global Future Council looking at the social impact of machine learning, such as the way it amplifies longstanding problems related to unequal access.

Notes to Editors

Read the Forum Agenda at http://wef.ch/agenda

Become a fan of the Forum on Facebook at http://wef.ch/facebook

Watch our videos at http://wef.ch/video

Follow the Forum on Twitter via @wef and @davos, and join the conversation using #wef

Follow our Instagram at http://wef.ch/instagram

Follow us on LinkedIn at http://wef.ch/linkedin

Subscribe to Forum news releases at http://wef.ch/news

Learn about the Forum’s impact on http://wef.ch/impact

Follow the conversation on WeChat using davos_wef

