Emerging Technologies

What can we expect of next-generation generative AI models?


Generative AI is evolving. Image: REUTERS/Dado Ruvic/Illustration

Andrea Willige
Senior Writer, Forum Agenda
  • A new generation of generative AI models is expected soon, including upgrades to OpenAI's ChatGPT and Meta's Llama.
  • Developers are focused on optimizing and expanding their capabilities, including reducing bias and errors, enabling reasoning and planning, and addressing ethical challenges.
  • The World Economic Forum’s Presidio AI Framework aims to improve generative AI governance, establishing early guardrails as both AI engines and their applications evolve.

To say that generative AI has taken the world by storm would be understating the veritable avalanche set loose when OpenAI released ChatGPT in late 2022. Now it has announced the arrival of its latest upgrade, GPT-5, and competitor Meta is following suit with an upgrade to its open-source Llama AI engine, the Financial Times reports.

Chart: Generative artificial intelligence (AI) revenue worldwide from 2020, with a forecast to 2032. The generative AI market is expected to reach more than $1.3 trillion by 2032. Image: Statista

Generative AI continues to disrupt and transform

The convergence of AI methodologies, high-performance data processing and cloud computing made what had long been anticipated, by both science and science fiction, a reality. Needless to say, it has been fundamentally transforming how we work and live – with no end in sight.

Statista expects the generative AI market to grow to $1.3 trillion by 2032, from only $14 billion in 2020. In 2023, it stood at $900 billion.


As the World Economic Forum points out in its Top 10 Emerging Technologies of 2023 report, the generative AI models that have dominated the headlines over the past year or so are mainly focused on text, programming, images and sound. However, their application could widen over time as the technology progresses.

ChatGPT, Llama, Google’s latest AI offering, Gemini, and Microsoft’s Copilot are all large language models (LLMs). These are algorithms that can analyze, summarize, predict and generate new content using deep learning techniques and large data sets. LLMs are typically trained using around a billion or more parameters, though there is no consensus on how much data is needed to train one.
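At their core, these models are trained to predict the next token in a sequence. A real LLM does this with billions of learned parameters in a deep neural network; the toy sketch below (a simple bigram model over a made-up corpus, purely for illustration) shows the same idea in miniature: learn which token tends to follow which, then generate text one token at a time.

```python
import random
from collections import defaultdict

# Toy illustration, NOT a real LLM: next-token prediction on a tiny corpus.
# Real models learn billions of parameters; the core task is the same in spirit.
corpus = "generative ai can analyze summarize predict and generate new content".split()

# Record bigram transitions: which word follows which in the training text.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=5, seed=0):
    """Sample a continuation one token at a time from the bigram counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = transitions.get(out[-1])
        if not choices:  # no known continuation: stop generating
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("generative"))
```

Because each word in this tiny corpus has exactly one successor, the sketch generates deterministically; an LLM instead samples from a learned probability distribution over a vocabulary of tens of thousands of tokens.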


LLMs get an upgrade

Despite the advantages associated with generative AI – simplifying, speeding up and automating previously manual work – its weak points have also become apparent. Not only are there ethical and security concerns with AI’s application, but as TechTarget points out, there are issues such as bias – which may be hard to identify and remove – and hallucinations – when an AI engine confidently provides a wrong answer.

An investigation by the Washington Post last year showed that an AI image generator defaulted to outdated Western stereotypes when asked to produce images of attractive people or houses around the world.

Continuous training and evolving LLM algorithms are key to addressing these issues and to advancing generative AI applications.

Alongside reducing bias and hallucinations, the developers’ ambition for both GPT-5 and Llama 3 is to take the engines beyond simple chatbots. To widen the scope of applications to more complex tasks, it will be crucial to enable LLMs to reason, plan and retain information. They will also have to learn to gauge the effects of their actions, the Financial Times points out.

Another development area for both Llama and ChatGPT is multimodality, allowing AI to process not just text but speech, images, code and videos. Moreover, greater levels of personalization are expected to be part of the next-generation offering.


Addressing ethical issues in generative AI

GPT-5 is understood to be undergoing rigorous testing and training, with a strong focus on safety protocols to address ethical concerns. However, in trying to eliminate bias and errors, generative AI companies can inadvertently achieve the opposite, as Google found to its detriment.

When originally launched, the image generation module of its Gemini platform depicted historically white groups such as the US Founding Fathers or 1930s German soldiers as people of colour, or produced female players when prompted for images from a well-known, exclusively male hockey league. In trying to weed out bias, Google had unintentionally overcorrected its AI engine.

However, such fine-tuning will be a constant feature as generative AI engines continue to evolve. And as the use of AI expands and organizations create their own LLMs or adapt the major platforms for their purposes, the need for solid governance frameworks remains pressing. For example, the World Economic Forum has proposed the Presidio AI Framework, which promotes safety, ethics and innovation with early guardrails to guide development.


