Emerging Technologies

5 core principles to keep AI ethical

The UK has proposed controlling AI with a code of ethics Image: REUTERS/Michaela Rehle

Rob Smith
Writer, Forum Agenda

Science-fiction thrillers like the 1980s classic The Terminator capture our imaginations, but they also stoke fears about autonomous, intelligent killer robots eradicating the human race.

And while this scenario might seem far-fetched, last year more than 100 leaders in robotics and artificial intelligence, including Elon Musk and Google's DeepMind co-founder Mustafa Suleyman, issued a warning about the risks posed by super-intelligent machines.

In an open letter to the UN Convention on Certain Conventional Weapons, the signatories said that once developed, killer robots - weapons designed to operate autonomously on the battlefield - “will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

SpaceX and Tesla founder Elon Musk signed an open letter on AI ethics Image: REUTERS/Aaron P. Bernstein

The letter states: “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.”

AI must be a force for good - and diversity

This week, the House of Lords Select Committee on Artificial Intelligence published a report based on evidence from more than 200 industry experts. Central to the report are five core principles designed to guide and inform the ethical use of AI.

The first principle argues that AI should be developed for the common good and benefit of humanity.

The report’s authors argue the United Kingdom must actively shape the development and utilisation of AI, and call for “a shared ethical AI framework” that provides clarity on how this technology can best be used to benefit individuals and society.

They also say the prejudices of the past must not be unwittingly built into automated systems, and urge that such systems “be carefully designed from the beginning, with input from as diverse a group of people as possible.”

Intelligibility and fairness

The second principle demands that AI operates within parameters of intelligibility and fairness, and calls for companies and organisations to improve the intelligibility of their AI systems.

“Without this, regulators may need to step in and prohibit the use of opaque technology in significant and sensitive areas of life and society,” the report warns.

Can robots and humans live in harmony? Image: REUTERS/Francois Lenoir

Data protection

Third, the report says artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.

It says the ways in which data is gathered and accessed need to be reconsidered, to ensure that companies have fair and reasonable access to data while citizens and consumers can also protect their privacy.

“Large companies which have control over vast quantities of data must be prevented from becoming overly powerful within this landscape. We call on the government ... to review proactively the use and potential monopolisation of data by big technology companies operating in the UK.”

Flourishing alongside AI

The fourth principle stipulates all people should have the right to be educated as well as be enabled to flourish mentally, emotionally and economically alongside artificial intelligence.

For children, this means learning about using and working alongside AI from an early age. For adults, the report calls on the government to invest in skills and training to mitigate the disruption caused by AI in the jobs market.

Automation could eliminate millions of jobs globally Image: Statista

Confronting the power to destroy

Fifth, and aligning with concerns around killer robots, the report says the autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

“There is a significant risk that well-intended AI research will be misused in ways which harm people,” the report says. “AI researchers and developers must consider the ethical implications of their work.”

By establishing these principles, the UK can lead by example in the international community, the authors say.

“We recommend that the government convene a global summit of governments, academia and industry to establish international norms for the design, development, regulation and deployment of artificial intelligence.”


