Children’s ethical standards can help us build human-centric AI

New research has revealed what children think about AI — and it might just help us build better systems moving forward.

Karolina La Fors
Generation AI Fellow, World Economic Forum and DesignLab University of Twente
Marc Dullaert
Chairman, KidsRights

  • A survey of hundreds of Dutch children has revealed what they think about the role artificial intelligence should play in our society moving forward.
  • Children's unique ability to sense values and draw ethical boundaries can guide our understanding of what it means to have a human-centric society and human-centric AI.
  • The majority of children surveyed wouldn’t mind having a robot salesperson — but they draw the line at having an AI doctor or police officer.

Children can play a surprisingly informative role in the development of artificial intelligence (AI) and society — but it must be done right.

Children depend on relational and intuitive skills to experience life and to grow through their human relationships. Although younger children are perhaps less aware of AI systems, they are highly aware of their human relationships.

By drawing on these relational skills, children can help us understand how to set ethical, social and human-centric boundaries for our interactions with AI. This potential, unique to childhood, can inform us about what a human-centric society really means. However, the meaningful participation of children in the development of AI, and in public dialogue about it, lags behind.

Involving children in this societal shift should also mean engaging them in intergenerational dialogue about the impact of AI systems on their lives. The rapid and widespread adoption of generative AI models has shown that AI-driven innovation will inevitably affect human life. Experimentation with systems that imitate human skills is already underway, with AI often outperforming people on an unprecedented scale.

The DesignLab-KidsRights national survey, conducted in the Netherlands, examined which AI systems children were aware of and what ethical and social standards they would propose for an AI-mediated society. Here’s what it found.

What kids think about AI

The representative study of 374 children aged 4-16, with the largest group of respondents aged 6-13, yielded important insights.

Children shared their enthusiasm and concerns regarding what AI systems currently mean and could mean in their daily lives and future.

When engaged in thought experiments about social robots taking on social roles in familiar domains, more than half of the children (55.3%) could imagine, and wanted, the assistance of a robot seller in shops, while 41% disliked the idea.

The majority thought robots could outperform humans as sellers. Children who preferred robot sellers cited their perceived coolness and their potential to do the job better than people. Of those less inclined to opt for robot sellers, the majority reported that they would miss human characteristics.

"No, because almost every seller could lose their job and you cannot chat nicely with a robot and real people are much nicer." — Girl, 7

When asked whether a police officer could be a robot, 54.8% were against the idea and 41.5% in favour. Those opposed thought that robotic law enforcement could threaten their safety. Most children also thought a robot doctor was a bad idea: 61.3% were against, compared with 35.3% who would like to interact with one.

"No, then it all goes wrong and they can't run that fast. Suppose someone has got shot and the robot says: "'Can I help you with something' and 'Please, keep calm'." — Girl, 12

"A robot GP could transmit electricity when it touches me and a human doctor does not do that." — Boy, 8

Can a robot be your friend?

When asked whether a robot could become a friend, children indicated that they would miss human characteristics, skills and values in their interactions. These, they made clear, must not be lost in the development of AI systems: AI must serve humanity, with humans in control and, importantly, remaining distinguishable from AI.

The children see AI as a solution to socially relevant problems: a robot that helps children with dyslexia, for example, or one that fishes plastic out of the ocean. But no robot can take on the context-overarching role of a friend. Friendship clearly goes beyond any “functional” relationship, mainly because robots lack the human characteristics, skills, values and other relational aspects of human contact.

“If a robot was made to be my friend, it would only learn from me. Then how can I know how to make others happy or what sadness is? How can I learn and adapt if I only learn what I'm doing from a robot friend?" — Girl, 8

"If robots could get humour and human feelings then it could be possible." — Boy, 12

Children's views reveal ethical and social values

The findings of the survey can be boiled down to eight key standards children would demand of AI, reflecting their hopes for human-centric AI and a human-centric society in the future:

1. Human literacy — "Robots lack human qualities"

2. Emotional intelligence — "Robots have no emotions"

3. Love and kindness — "Robots don't give love"

4. Authenticity — "Robots don't have opinions of their own"

5. Human care and protection — "Robots cannot offer consolation and comfort"

6. Autonomy — "Robots should not take over the world"

7. AI in service — "Robots must assist me"

8. Exuberance — "Robots should be able to play with me"

Integrating children's views into AI frameworks

Research shows that, in a world where technological developments present themselves as indispensable, broad and intergenerational dialogue is essential. Developments like AI and the Internet of Things require us to engage with what human-centric AI means, and with what it means to be a human-centric society.

Children can contribute ethical and social standards that adults often don't consider. The KidsRights report closes with recommendations showing how children’s standards are indispensable resources for human-centric AI and society. They can inform top-down normative frameworks for AI, such as the Assessment List for Trustworthy AI developed by the EU's High-Level Expert Group on AI, and UNICEF's Policy Guidance on AI for Children.

AI's impact on the world and those who live in it is not yet clear. By involving children in the conversation and listening to their views, we can ensure characteristics fundamental to humanity like empathy, love and playfulness are not left by the wayside as the next great technological shift accelerates.
