
What the EU is doing to foster human-centric AI


AI is not invulnerable to mistakes and biases. Image: REUTERS/Thomas Peter

Saverio Puddu
Managing Associate, Linklaters
Ana Isabel Rollán Galindo
Global Senior Manager, BBVA
Kay Firth-Butterfield
Senior Research Fellow, University of Texas at Austin
  • The EU’s first draft Regulation on AI is part of a wider effort to develop human-centric AI by minimising mistakes and biases to ensure it is safe and trustworthy.
  • The regulation includes requirements to minimise the risk of algorithmic discrimination, in particular in relation to the quality of data sets used for the development of AI systems.
  • It also introduces human oversight of certain AI systems to prevent or minimise risks to health, safety or fundamental rights that may emerge when an AI system is used.

Far from being science fiction, artificial intelligence is already part of our lives. From self-driving vehicles to virtual personal assistants, AI is a reality.

Beyond simplifying our lives, AI promises to bring an array of economic and social benefits to society. From healthcare to mobility, and across the public sector and finance, AI is a fast-evolving technology that can help solve some of the world's biggest challenges. It can lead to socially and environmentally beneficial outcomes and, at the same time, support businesses of every kind by helping them better understand and engage with their customers, improve predictions, speed up production, and optimise operations and resource allocation.


However, alongside these countless socio-economic benefits, it is not surprising that AI can also introduce new risks into our society, and among those challenges the ethical implications are probably the most subtle and insidious.

To enable scientific breakthroughs and to ensure that AI technologies are at the service of all citizens, improving their lives while respecting their rights, the European Commission proposed the first draft Regulation on AI.

The draft legislation is only one part of a broader European strategy for AI, presented by the European Commission in 2018 to address the opportunities and challenges of AI while promoting European values. However, pursuing the twin objectives of promoting the development and deployment of AI and of addressing the risks associated with certain uses of this new technology implies complex evaluations and difficult ethical and regulatory choices, especially when the risk of discrimination arises.

Not immune to mistakes and biases

AI performs functions that previously could only be done by humans. Consequently, we are progressively subject to decisions taken by – or with the assistance of – AI systems. However, believing that AI is by default immune to bias, and thus not able to discriminate against individuals or groups of individuals, is a dangerous mistake: AI can perpetuate certain biases or even amplify them.

As noted by the European Commission in its 2020 White Paper on AI, bias and discrimination are inherent risks of any societal or economic activity, and AI is not invulnerable to mistakes and biases. Indeed, depending on the data input that is used to train and test AI systems, their outputs can be biased.

For instance, certain AI algorithms deployed to predict criminal recidivism have displayed gender and racial bias, producing different recidivism prediction probabilities for women versus men or for nationals versus foreigners.

Similarly, certain AI-based facial recognition tools have demonstrated low error rates when determining the gender of lighter-skinned men but high error rates for darker-skinned women, thus displaying both gender and racial bias.

Undoubtedly, biased results can lead to breaches of our fundamental rights, from freedom of expression or association (consider the effects of indiscriminate surveillance carried out by facial recognition systems) to non-discrimination, especially when biased AI decisions are based on sex, racial or ethnic origin, religion, disability, age or sexual orientation (for instance, in access to employment or loan approvals).

Biased AI results and discriminatory effects are mainly due to two factors:

  • the use of low-quality training data sets (e.g. an AI system trained mostly on data from men will produce suboptimal results for women, as the sketch below illustrates);
  • the lack of transparency of AI (better known as the “opaqueness of AI”), which makes it difficult to identify possible flaws in the AI system’s design.
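To make the first factor concrete, here is a minimal sketch in Python of how a classifier trained on data that under-represents one group can show a higher error rate for that group. The data is synthetic and every name and number is illustrative; nothing here is drawn from the Regulation or from any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two features whose relationship to the label differs slightly by group.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is under-represented.
Xa, ya = make_group(n=2000, shift=0.0)
Xb, yb = make_group(n=100, shift=1.0)

model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluated on fresh samples, the under-represented group typically
# shows the larger error rate.
for name, shift in [("group A", 0.0), ("group B", 1.0)]:
    X_test, y_test = make_group(n=1000, shift=shift)
    print(f"{name}: error rate = {1 - model.score(X_test, y_test):.2%}")
```

The model is not malicious; it simply fits the majority group best, which is exactly the kind of skew that data-quality requirements are meant to surface.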

The EU approach

To mitigate the risk of erroneous or biased AI-assisted decisions in critical areas, such as healthcare and employment, the European Commission envisages a multifaceted approach.

The proposed draft Regulation includes requirements that aim to minimise the risk of algorithmic discrimination, in particular in relation to the quality of data sets used for the development of AI systems, accompanied by obligations for testing, risk management, documentation and human oversight throughout the entire AI systems’ lifecycle.

Indeed, when data is gathered, it may reflect socially constructed biases, or contain inaccuracies, errors and mistakes. Data sets used by AI systems – both for training and operating – may also suffer from the inclusion of inadvertent historic bias which could lead to indirect discrimination. Similarly, the way in which AI systems are developed (e.g. the way in which the programming code of an algorithm is written) may also suffer from bias, often inherent in human developers.

Therefore, training, validation and testing data sets should be subject to appropriate data governance and management practices that should take into consideration:

  • the relevant design choices;
  • the data gathering;
  • the relevant data preparation processing operations;
  • the formulation of relevant assumptions, especially with respect to the information that the data are supposed to measure and represent;
  • the prior assessment of the availability, quantity and suitability of the required data sets;
  • the examination in view of possible biases;
  • the identification of any possible data gaps or shortcomings, and potential remediations.

Furthermore, to prevent outcomes entailing prohibited discrimination, each training, validation and testing data set should be relevant, representative, free of errors and complete. They should also have the appropriate statistical properties, including as regards the individuals or groups of individuals on which the AI system is intended to be used, especially to ensure that all relevant dimensions of gender, ethnicity and other possible grounds of prohibited discrimination are appropriately reflected in those data sets (a simple sketch of such checks follows below).
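As an illustration of what some of these checks might look like in practice, the sketch below examines a data set for skewed group representation and for data gaps. The column names, the imbalance threshold and the toy data are hypothetical assumptions, not requirements taken from the Regulation.

```python
import pandas as pd

def audit_dataset(df: pd.DataFrame, group_col: str, max_imbalance: float = 3.0):
    """Flag under-represented groups and columns with missing data."""
    findings = []

    # Representation check: compare each group's size against the largest.
    counts = df[group_col].value_counts()
    for group, n in counts.items():
        if counts.max() / n > max_imbalance:
            findings.append(f"under-represented group in '{group_col}': {group} ({n} rows)")

    # Data-gap check: columns containing missing values.
    for col, missing in df.isna().sum().items():
        if missing > 0:
            findings.append(f"data gap: column '{col}' has {missing} missing values")

    return findings

# Toy data set: women are heavily under-represented and income has gaps.
df = pd.DataFrame({
    "gender": ["F"] * 50 + ["M"] * 400,
    "income": [30_000] * 440 + [None] * 10,
})
for finding in audit_dataset(df, group_col="gender"):
    print(finding)
```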

Interestingly, the proposal also provides that, to the extent it is strictly necessary for the purposes of ensuring AI bias monitoring, detection and correction, AI providers can process special categories of personal data – such as health data and data revealing ethnic origin, political opinions, religious beliefs or sexual orientation – subject to appropriate safeguards. These include technical limitations on re-use, and the use of state-of-the-art security and privacy-preserving measures, such as pseudonymisation, or encryption where anonymisation may significantly affect the purpose pursued.
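To show what pseudonymisation, one of the safeguards mentioned above, can look like in code, the following sketch replaces a direct identifier with a keyed hash so that records can still be linked for bias analysis without revealing who they belong to. The key handling is deliberately simplified and purely illustrative.

```python
import hmac
import hashlib

# Hypothetical key: in practice it would be stored separately under
# strict access control, so pseudonyms cannot be reversed or recreated.
SECRET_KEY = b"stored-separately-under-strict-access-control"

def pseudonymise(identifier: str) -> str:
    """Replace an identifier with a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "ethnicity": "X", "loan_approved": False}
record["name"] = pseudonymise(record["name"])
print(record)  # the same input always maps to the same pseudonym
```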

In addition to these ex ante testing obligations, the proposal introduces accountability duties aimed at demonstrating that AI systems comply with the requirements set out by the proposed legislation. Technical documentation of the AI system should be drawn up before that system is placed on the market and kept up to date. Similarly, AI systems should be designed and developed with capabilities enabling the automatic recording of events (i.e. logs) while the AI system is operating.
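A minimal sketch of what such automatic recording of events could look like: every decision the system takes is appended to a structured, timestamped log so that its behaviour can be audited ex post. The field names and values are illustrative assumptions, not a format prescribed by the proposal.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_decision(model_version: str, input_id: str, output, confidence: float):
    """Append one structured event per decision for later audit."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_id": input_id,  # a pseudonymised reference, not raw personal data
        "output": output,
        "confidence": confidence,
    }
    logging.info(json.dumps(event))

log_decision("credit-scorer-1.3", "applicant-7f3a", output="deny", confidence=0.62)
```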

Those obligations are evidently intended to allow ex post controls and to facilitate respect for fundamental rights by ensuring transparency and traceability of the AI system’s functioning throughout its lifecycle. Indeed, to increase transparency and minimise the risk of bias or error, the European Commission suggests that AI systems should be developed in a manner that allows humans to understand the basis of their actions. For the same purposes, it requires high-risk AI systems to be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
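One simple way a system can make its output interpretable is to report, alongside each decision, which inputs pushed the score up or down. The sketch below does this crudely with the coefficients of a linear model; the feature names are hypothetical, and this is only one of many possible transparency techniques, not a method mandated by the proposal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.3, 0.7], [0.9, 0.2]])
y = np.array([1, 0, 1, 0])
feature_names = ["debt_ratio", "payment_history"]  # hypothetical features

model = LogisticRegression().fit(X, y)

applicant = np.array([[0.5, 0.8]])
probability = model.predict_proba(applicant)[0, 1]

# A rough per-feature contribution to the log-odds for this applicant.
contributions = model.coef_[0] * applicant[0]
print(f"approval probability: {probability:.2f}")
for name, c in zip(feature_names, contributions):
    print(f"  {name}: {c:+.2f} to the log-odds")
```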

In view of the risks that they may pose, the draft Regulation also introduces human oversight of certain AI systems aimed at preventing or minimising the risks to health, safety or fundamental rights that may emerge when an AI system is used, in particular when such risks persist notwithstanding the application of other above-mentioned requirements.
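One possible shape for such human oversight, sketched below under purely illustrative assumptions, is a system that acts autonomously only when it is confident and routes borderline or high-impact cases to a human reviewer who can confirm, override or halt the output.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float

def apply_with_oversight(decision: Decision, confidence_threshold: float = 0.9) -> str:
    """Return the AI decision, or escalate it for human review."""
    if decision.confidence >= confidence_threshold:
        return decision.output
    return request_human_review(decision)

def request_human_review(decision: Decision) -> str:
    # Placeholder: in practice this would queue the case for a qualified
    # reviewer with the authority to confirm, override or stop the system.
    print(f"Escalating '{decision.output}' (confidence {decision.confidence:.0%})")
    return "pending human review"

print(apply_with_oversight(Decision(output="approve loan", confidence=0.95)))
print(apply_with_oversight(Decision(output="deny loan", confidence=0.55)))
```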

A difficult balance to strike

With its new legislative proposal, the EU has once again made clear that AI comes with both the potential to transform our world for the better and the potential to breach our fundamental rights, such as through gender-based or other kinds of discrimination. Biases, opacity, complexity, unpredictability and the partially autonomous behaviour of AI systems are only a few of the factors posing risks to citizens’ rights. With this proposal, the EU sets out a clear and safe framework within which companies can develop and implement AI, fostering innovation and building trust among customers and users.

Striking a balance between the protection of citizens’ freedoms and EU economic competitiveness is a hard choice and requires complex evaluations. As recently stated by Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age: “On Artificial Intelligence, trust is a must, not a nice to have”. Trust requires transparency, transparency requires accountability.

With this draft Regulation, the EU is promoting innovation in the area of AI while supporting the development and uptake of ethical and trustworthy AI across its economy. The strategy adopted by the EU is clear and it places people at the centre of the development of AI, thus designing a human-centric AI.
