Generative AI won't kill education — unless we allow it to

Generative AI is not necessarily a death knell for education — but educators must be taught how to engage with it effectively.


Conrad Hughes
Director General, International School of Geneva

  • More than 90% of teachers say they have never received any training in, or even advice on, how to use generative AI in school.
  • Like any other tool, generative AI has the potential to disrupt or augment how we think and act.
  • To ensure it is a net benefit to society and the education sector, we must stop being afraid of this technology and instead think hard about how best to use it.

Generative artificial intelligence (AI), embodied in programmes like ChatGPT and Google’s Bard, is a major force that sectors from manufacturing to healthcare must respond to — but the sector perhaps most at risk of disruption is education.

Unlike other technological game changers, which have accompanied human cognition over time — the telegraph, the telephone, and basic computing power — AI does not just accompany us; it could modify present and future generations’ cognition. This is because human socialization and organic, human pedagogies are being replaced with artificial tutors and automated feedback loops that could change the development of our neurocircuitry.

Generative AI is changing how we learn

Technophobia has always been fairly prevalent in education circles. This is unsurprising, given that the overarching sociological purpose of education is the transmission of knowledge and culture, which makes the sector, de facto, traditionalist. Fundamentally, however, young people know that information can be found online.

Students today risk being criticized for using their phones to look up answers on Wikipedia, not because the answer is wrong, but because they used the internet to find it. Trying to stop them from accessing the internet is problematic, to say the least.

Many have warned that generative AI could harm students’ learning, providing them too easy a way to complete work, get answers or write. But we live in a world where architects, designers and engineers use the best technology available to them every day. We must examine whether it really is sensible to prevent students from engaging in the same behaviour in their own world.

Views on generative AI are diverging

The reaction to the rise of generative AI is polarized. At one end of the spectrum, there is fear of the so-called existential risk from artificial intelligence: the idea that superintelligent systems, especially because they are generative, might supersede human intelligence and effectively take over human beings’ place as the dominant species. Indeed, the May 2023 Statement on AI Risk, signed by more than 100 AI specialists, declared that mitigating the risk of extinction from AI should be a global priority.

Recent studies commissioned by UNESCO, such as the Global Education Monitoring Report and An Ed-Tech Tragedy?, exhort governments, ministries and education providers to err on the side of caution, favouring informed, research-based implementation and human checks and balances. The primary concerns are the protection of children’s data, extreme surveillance power, the realization that AI is exacerbating the digital divide, and worries about Big Tech companies wading into the education market unbridled and unchecked. The fundamental fear is that AI threatens human civilization.

Generative AI: an opportunity, if done right

Not everybody shares this catastrophism. At the other end of the spectrum are those who see AI as an opportunity to increase access to learning, improve the quality of feedback, assist students with learning needs and automate low-level, routine cognitive tasks that are not particularly efficient or necessary for learning anyway.

It was the COVID-19 pandemic that catapulted many learning systems online, and while many students lost out because of it, there were some startling gains. Rather than schools shutting down altogether and leaving students with no instruction whatsoever, technology kept the lifeline of learning open. It was precisely those countries with broadband connections that suffered the fewest learning gaps.

Furthermore, the World Bank has shown how countries with wider internet access and more secure internet servers tend to be more resilient to crises such as the COVID-19 pandemic. The Global Education Monitoring Report points out that technology can create gains, especially for disabled students. Turning specifically to generative AI, some countries, such as Saudi Arabia and China, are investing heavily in AI for education. Take the Chinese platform Squirrel AI Learning, a powerful large language model-based system used with hundreds of thousands of students. Chinese companies such as Alibaba, Baidu and JD are working on their own generative AI systems that they claim are more powerful and effective than ChatGPT.

As for the broader, dramatic philosophical questions about generative AI and humanity, the cat is out of the bag. Digital natives live with technology and use it seamlessly in the way they operate and communicate.

They do not live in a state of cognitive decline, nor is generative AI coming to destroy human civilization. In fact, human civilization faces far greater challenges than generative AI: war, discrimination and the destruction of our planet, to name a few. We must teach our youth to think through those problems with technology as an accelerator and enabler, instead of panicking about something that is already part of reality. The truth is, just like satellites, GPS, PET scans and more, generative AI is a tool in our belt — how we use it is up to us.

However, there must be a clear position on how to use generative AI in the classroom. More than 90% of teachers say they have never received any training in, or even advice on, how to use generative AI in school, let alone whether they should use it at all. A healthy and simple next step is to read UNESCO’s Guidance for Generative AI in education and research, and take its recommendations seriously. Human civilization might not depend on it — but making the most of this new technology does.

The views expressed in this article are those of the author alone and not the World Economic Forum.
