How will generative AI affect children? The need for answers has never been more urgent
Children use generative AI daily, from help with homework to decisions about their wardrobe. Image: Unsplash/Alexander Grey
- Children are using generative AI but often hide it from teachers and parents.
- There are plenty of benefits to generative AI but there are also many unanswered questions and risks for children.
- Various stakeholders must act to understand the potential impact generative AI will have on children as they grow up with its increasing ubiquity.
Generative artificial intelligence (AI), such as that powering ChatGPT, has catapulted AI’s algorithms and data crunching from behind the scenes to the front and centre of our digital experiences. Now, children use it daily, from getting help with homework to making decisions about their wardrobe.
This rapid adoption has fanned debate about AI’s broader implications. And while generative AI carries benefits, perennial AI challenges remain, such as algorithmic bias and system opacity. It may even amplify issues like unpredictable outputs or introduce new ones.
Since children and young people are the largest demographic cohort spending time online and given the pace of generative AI development and uptake, it is crucial to understand generative AI’s impacts on children.
What is generative AI and how is it used?
Generative AI, a machine-learning subset of AI, learns from vast amounts of data to discover patterns and generate new, similar data. It’s often used to produce content mimicking human output – be it text, images or even computer code – but it can also complete complex planning tasks, support the development of new medicines and enhance how robots perform in unprecedented ways.
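For readers curious about the underlying idea (learning statistical patterns from data, then sampling from them to produce new, similar data), a toy word-level Markov chain offers a minimal sketch. This is for intuition only: modern generative AI relies on large neural networks rather than frequency tables, and the function names below are purely illustrative.

```python
import random

def build_model(text, order=2):
    """Record which word was observed to follow each run of `order` words."""
    words = text.split()
    model = {}
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model.setdefault(key, []).append(words[i + order])
    return model

def generate(model, order=2, length=10, seed=None):
    """Sample new text that imitates the patterns captured in `model`."""
    rng = random.Random(seed)
    key = rng.choice(list(model.keys()))
    out = list(key)
    for _ in range(length):
        followers = model.get(tuple(out[-order:]))
        if not followers:  # dead end: no continuation was ever observed
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

Trained on a real corpus instead of a single sentence, even this trivial model produces plausible-looking fragments, which hints at why far larger pattern-learners can generate convincing text at scale.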
We don’t know exactly how many children use generative AI, but initial surveys suggest uptake is higher among children than adults. One small poll in the United States revealed that while only 30% of parents had used ChatGPT, 58% of their 12-18-year-old children had done so, and had hidden it from parents and teachers. In another US survey, young adults who knew about ChatGPT reported using it more than their older counterparts.
AI is already part of children’s lives in the form of recommendation algorithms or automated decision-making systems and the industry embrace of generative AI indicates that it could quickly become a key feature of children’s digital environment. It is embedded in various ways, including via digital and personal assistants and search engine helpers.
Platforms popular with children, like Snapchat, have already integrated AI chatbots. At the same time, Meta plans to add AI agents into its product range used by over 3 billion people daily, including Instagram and WhatsApp.
How generative AI benefits children
Generative AI brings potential opportunities, such as homework assistance, easy-to-understand explanations of difficult concepts, and personalized learning experiences that can adapt to a child’s learning style and speed. Children can use AI to create art, compose music and write stories and software (with no or low coding skills), fostering creativity.
Children with disabilities can interface and co-create with digital systems in new ways through text, speech or images. As children use the systems directly, generative AI could help detect health and developmental issues early. Indirectly, generative AI systems can provide insights into medical data to support advances in healthcare.
More broadly, the analysis and generative capabilities can be applied in various sectors to improve efficiencies and develop innovative solutions that positively impact children.
Risks of generative AI for children
But generative AI could also be used by bad actors or inadvertently cause harm or society-wide disruptions at the cost of children’s prospects and well-being.
Generative AI has been shown to instantly create text-based disinformation indistinguishable from, and more persuasive than, human-generated content. AI-generated images are impossible to tell apart from – and, in some cases, perceived as more trustworthy than – real faces (see Figure 1). These abilities could increase the scale and lower the cost of influence operations. Children are particularly vulnerable to the risks of mis/disinformation as their cognitive capacities are still developing.
Longer-term usage raises questions for children. For instance, given the human-like tone of chatbots, how might interacting with these systems affect children’s development? Early studies indicate that such interactions may influence children’s perceptions and attributions of intelligence, their cognitive development and their social behaviour.
Also, given the inherent biases in many AI systems, how might these shape a child’s worldview? Experts warn that chatbots claiming to be safe for children may need more rigorous testing. And as children share personal data in conversations and interactions with generative AI systems, what does this mean for their privacy and data protection? Australia's eSafety Commissioner believes this context demands greater consideration of the collection, use and storage of children’s data, particularly for commercial purposes.
Wide-ranging impacts
Some potential outcomes offer both opportunities and risks. We are uncertain, for example, how generative AI will disrupt children’s future working lives. It could replace key jobs while introducing new ones, which is relevant to what children are taught today and how.
While the opportunities and risks reach much further, these examples illustrate the wide-ranging implications of AI. As children will engage with AI systems throughout their lives, the interactions during their formative years could have lasting consequences, underscoring the need for a forward-thinking approach from policymakers, regulatory bodies, AI developers and other stakeholders.
The need to act
As a starting point, existing AI resources provide much direction for responsible AI today. For example, UNICEF’s Policy Guidance on AI for Children has nine requirements to uphold children’s rights in AI policies and practices and the World Economic Forum’s AI for Children toolkit provides advice to tech companies and parents. But advances in generative AI mean existing policies must be interpreted in novel contexts, and new guidance and regulations may need to be developed.
Policymakers, tech companies and others working to protect children and future generations need to act urgently. They should support research on the impacts of generative AI and engage in foresight – including with children – for better anticipatory governance responses. There needs to be greater transparency, responsible development from generative AI providers and advocacy for children’s rights. Global-level efforts to regulate AI, as called for by UN Secretary-General António Guterres, will need the full support of all governments.
Read more about generative AI and children in this UNICEF brief.
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.