Why ethical AI requires a future-ready and inclusive education system
Ethical AI not only includes understanding the social implications of AI and harnessing the fair use of data, but also entails education and training opportunities that are accessible, fair and diverse.
Image: Unsplash/Giovanni Gagliardi
Danni Yu
Project Fellow, Artificial Intelligence and Machine Learning, World Economic Forum; Consultant, Boston Consulting Group
- Research shows that marginalised groups are particularly susceptible to job loss or displacement due to automation.
- We must address the fundamental issue of AI education and upskilling for these underrepresented groups.
- Public and private sector stakeholders can collaborate to ensure an equitable future for all.
The recent development of generative artificial intelligence (AI) has led to a surge in the adoption of AI across a wide range of industries, from healthcare to marketing. Alongside this adoption, there is ample evidence of the discriminatory harm that AI tools can cause to already marginalised groups, through unconscious biases in algorithms and a lack of representation in data sets. AI also has varied potential impacts on jobs and labour, with marginalised groups more susceptible to job loss or displacement due to automation.
The World Economic Forum’s Future of Jobs Report 2023 indicates that 44% of workers’ core skills are expected to change in the next five years as more tasks are completed by machines. On generative AI specifically, recent research by OpenAI concluded that approximately 80% of the US workforce would have some share of their work affected by GPTs (Generative Pre-trained Transformers, language models capable of producing human-like text) and that about 19% of workers would have a significant share of their work affected.
Research also indicates that the impact of AI on jobs and labour is more likely to have negative consequences for women, racialised, Indigenous and low-income groups. Erik Brynjolfsson, director of the Stanford Digital Economy Lab, noted that while automation by AI can increase productivity and wealth, the benefits disproportionately go to those with resources that are not easily replaced by technology, such as unique assets, talents or skills. Low AI literacy can set off a spiral of growing marginalisation in already disadvantaged communities.
Ethical AI also means future-ready and inclusive AI education and training
Several factors sit at the confluence of well-being and marginality; among them, education and knowledge are key prerequisites for improved access to employment and business opportunities.
As AI automates and augments a rapidly growing number of tasks, employment and business opportunities inevitably evolve, making the ability to use AI technologies a vital qualification. However, the current education and training system is inadequate to prepare everyone for this transformational journey.
Groups with fewer resources are likely to be directly or indirectly excluded and marginalised. For example, children in marginalised communities have less access to advanced technologies and AI education opportunities. While their more affluent peers have iPads and laptops, start coding games and learn the basics of AI, they typically do not have the same degree of exposure to the latest technologies and curricula.
Therefore, the ethical use of AI not only includes understanding the social implications of AI and harnessing the fair use of data, but also entails education and training opportunities that are accessible, fair, and diverse. In tandem with rapid advances in AI, education systems must keep pace with this transformation to ensure future-ready and inclusive curricula.
Publicly accessible education and training must be on the national AI agenda
While national AI strategies typically include a section on reskilling and preparing the workforce, they must also emphasise public education systems. Improving public AI literacy should be highlighted, with funding dedicated to strengthening the public education system and upskilling public school teachers.
Education authorities should work with schools, teachers and communities to create practical guidelines and concrete plans to make AI basics and digital literacy part of the core curricula. It is also beneficial to introduce the basics of AI, digital literacy, critical thinking and innovation early, during secondary or even primary school.
A few examples of how AI education can be integrated into educational journeys:
- Egypt is rolling out a pilot programme in high schools around the country to teach students about AI, on the premise that exposing students to the basics of AI through formal education and training will serve as a foundation for a future-ready workforce.
- The US National Science Foundation has a programme to engage rural students in AI and develop pathways to innovative computing careers.
- Germany's apprenticeship model combines technical and theoretical training with nationally recognised skills standards, offering hands-on training at multiple points in a child’s educational journey.
- Flexible learning pathways such as Technical and Vocational Education and Training (TVET) can be incorporated as an important alternative to formal education on AI.
The importance of inclusive educational experiences
To achieve equitable and inclusive AI, everyone who interacts with the technology must be able to understand it, collaborate with it effectively, and be aware of the opportunities and risks its use poses. No matter which educational avenue learners take, an inclusive and tailored learning experience is critical to improving knowledge uptake and preventing dropouts. This applies both to the content of AI education and to how it is delivered. For example:
- Experiences should be designed to ensure accessible training for those who face barriers to formal education, such as people with disabilities, people with low literacy and older adults.
- Governments and institutions should recognise the resource barriers to AI education among disadvantaged groups, such as access to computers and the internet, childcare support and financial support.
- It is important to understand differences in learners’ social and cultural backgrounds, include diverse perspectives in the content, and use examples and case studies to which learners can relate. One way to do this is to draw on examples, case studies and ideas that reflect a range of experiences and perspectives on gender, race, culture, ethnicity, sexual orientation, religion and age, among others.
While we hope for a wealthier and more productive future with AI, there is still a long way to go before everyone is equipped with sufficient knowledge and skills to reap its benefits. As AI develops, it is time to act on inclusive and future-ready education systems that pave the way for a more equitable future.