Opinion
Emerging Technologies

The RenAIssance: Why AI marks a resurgence of empiricism


Eric Xing
President, Mohamed bin Zayed University of Artificial Intelligence
This article is part of: AI Governance Summit
  • Artificial intelligence (AI) is already embedded into everyday life, but the chatbot revolution has been accompanied by warnings.
  • The dystopian portrayal of AI as an 'existential threat' owes more to sensationalism than rational and scientific substance.
  • In reality, AI is ushering in a 21st-century 'RenAIssance' that will take the global community into an Age of Empowerment.

Without the fanfare of chatbots and image generators, artificial intelligence (AI) has already quietly embedded itself in everyday life. It recognizes your face to unlock a phone, translates foreign texts while you travel, navigates you through traffic and even suggests which movies to watch.

But the chatbot revolution has been accompanied by ominous warnings comparing AI’s growing utility to “existential threats” like nuclear Armageddon or natural disasters. Internet influencers have invoked the spectre of “God-like AI” and made abstract, sometimes absurd, claims.

Such claims have been amplified by some big names in academia and business who have lent their authority to the doomer outcry, fuelling public fear and anxiety, rather than embracing the rational analysis and evidence that an educated society deserves.

The voices of real researchers and innovators at the cutting edge of today’s science risk being drowned out.

The winners from the regulatory rush will likely be Big Tech. The losers? The startup and open-source community, which strives to bring transparent, open and responsible technology to society.

AI threat ‘exaggerated’ and ‘dystopian’

A closer look at actual existential threats lays bare the exaggerations surrounding AI's alleged threat. The melting glaciers of climate change, the indelible scars of nuclear warfare at Hiroshima and Nagasaki, and the ravages of pandemics like COVID-19, are stark reminders of real and present danger.

This dystopian portrayal of AI owes more to sensationalism than scientific substance. Unlike the immediate threat of nuclear weaponry or climate change, AI’s purported threat is akin to science fiction. HAL 9000, Skynet and Ultron are familiar film villains: fictional artificial intelligences that turn on their creators.

The reality of AI – the practical problems we try to solve as research scientists – is very different. The term ‘AI’ itself covers a vast array of scientific domains, technological innovations, artefacts and human engagements. It is laden with misinterpretation and misuse in discussions that veer off towards existential threats.

Misleading predictions of future threats are based on scientifically unsound extrapolations from a few years’ growth curve of AI models. No technological growth curve ticks up indefinitely. Growth is bounded by physical laws, energy constraints and paradigm limitations, as seen with transistor density in semiconductor chips and FLOPS in supercomputers.
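The bounded-growth point can be illustrated with a toy calculation (all parameters below are illustrative, not measurements of any real AI system): a logistic curve with a hard ceiling is nearly indistinguishable from an exponential over its first few “years”, yet extrapolating the exponential far beyond the observed window diverges wildly from the bounded trajectory.

```python
import math

# Logistic growth saturates at a ceiling K, even though the early part
# of the curve is nearly indistinguishable from an exponential.
def logistic(t, K=100.0, r=1.0, t0=6.0):
    return K / (1.0 + math.exp(-r * (t - t0)))

def exponential(t, a=0.25, r=1.0):
    return a * math.exp(r * t)

# In the first few "years" the two curves track each other closely...
for t in range(4):
    print(t, round(logistic(t), 2), round(exponential(t), 2))

# ...but far past the observed range, the exponential extrapolation
# explodes while the logistic curve levels off near its ceiling K.
print(12, round(logistic(12), 2), round(exponential(12), 2))
```

The sketch shows why a short stretch of rapid progress cannot, by itself, tell us which of the two regimes we are in: any finite early window is consistent with both unbounded and saturating growth.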

There is no evidence that current software, hardware and mathematics will propel us to artificial general intelligence (AGI) and beyond without major paradigm disruptions. The risks of transformer-based AI – the technology that drives ChatGPT – pale in comparison to the potential of CRISPR gene editing.

There are fundamental holes in the AI doomers’ reasoning and conclusions — evidenced by the large jumps in establishing and justifying their theory.

Imagine someone invented a bicycle and, through exercise and training, quickly pedalled it to higher and higher speeds. With an electric motor and lighter materials, the bike goes faster still. Would we believe that the bike could eventually be ridden until it flies?

It is not difficult to see the absurdity of such reasoning.

But this is exactly the current AI doomers’ narrative: AI becomes encyclopaedic through generative pre-trained transformers, like ChatGPT. Next, AI leaps to AGI. Then it becomes an artificial superintelligence (ASI), with emotional intelligence, consciousness and self-reproduction.

And then, another big jump – it turns against humans and, without deterrence, can extinguish humanity (using “sci-fi” methods like causing vegetation to emit poisonous gas or depleting the energy of the sun, according to some recent scenarios presented at an Oxford Union debate).

Each of these “jumps” would require ground-breaking advances in science and technology that are likely impossible, and many of the underlying assumptions are logically unjustified. But these stories risk capturing the public imagination.

AI doomers – whether intentionally or not – are ignoring the obligation of scientific proof and panicking publics and governments, as recently seen at the Bletchley Park summit. The regulation being pushed is not intended to prevent ludicrous ‘existential risks’.

It is designed to undermine the open-source AI community, which threatens Big Tech’s profits, or to drive up the cost of AI development through over-regulation so that only a select few benefit.


Ironically, the ‘existential threat’ narrative ignores human agency. It was not technology but failures of basic human management systems that lay behind disasters like Chernobyl and the Challenger explosion.

And unlike the physical sciences, AI’s realm is predominantly digital. Any AI interaction involves many more steps of human agency – and opportunities for checkpoints and controls – than any technology that experiments directly with the physical world, such as physics, chemistry and biology.

AI doomerism rhetoric obscures the fundamental, transformative benefits to society and civilization that come with scientific and technological advances. It does little to inspire and incentivize the public to understand and leverage science.

History is full of examples where technology has served as a catalyst for human advancement, rather than a harbinger of doom. Tools like the compass, books and computers have taken us on real and intellectual voyages alike.

The existential threat narrative hinges on AI ‘transcending’ human intelligence, a notion bereft of any clear metrics. Many inventions – like microscopes and calculators – already surpass human capabilities, yet they have been greeted with excitement, not fears of extinction.

AI – in reality – is ushering in a 21st-century ‘RenAIssance’. Unlike the original Renaissance, which led to the Age of Enlightenment and was defined by a rational, foundational approach to science, this era is taking us to an Age of Empowerment.


The historical Renaissance was enabled by the printing press, allowing the rapid diffusion of knowledge through Europe and beyond. Early science gave this knowledge structure through “knowing how to think”, and figures like Isaac Newton and Gottfried Wilhelm Leibniz championed and defined this rationalism, setting the stage for a methodical science rooted in first principles.

For centuries, the science they created moved forward by forming hypotheses, unravelling core ideas, and validating theories through logic and methodical experimentation. Modern AI is now reshaping this classical problem-solving approach.

Data and algorithms herald a new age of discovery

Today the amalgamation of vast datasets, advanced infrastructure, complex algorithms and computational power heralds a new age of discovery that goes far beyond traditional human logic, characterized by radical empiricism and AI-guided insights.

Today’s AI RenAIssance goes beyond the ‘how’ to delve into the ‘why’. It arms individuals with both knowledge and the tools for real-world problem-solving – marking a shift towards a practical approach. AI unveils a spectrum of possibilities in fields like biology, genomics, climate science and autonomous technology.

The hallmark of this era is the resurgence of empiricism, fuelled by AI’s data processing prowess, enabling automated knowledge distillation, organization, reasoning and hypothesis testing, and offering insights from identified patterns.

It opens the way for alternative methodologies of scientific exploration – for example, extremely high-throughput digital content generation, extremely complex simulative prediction and extremely large-scale strategic optimization – at a magnitude and speed far exceeding what traditional first-principles methods and causal reasoning can handle.


This means unprecedented, real opportunities for humans to tackle challenges such as climate change, cancer and personalized medicine.

The modern Renaissance fosters continuous learning and adaptation, moving society from an insistence on understanding everything prior to acting, towards a culture of exploration, understanding and ethical application. This mindset resonates with past empirical methodologies, advocating a humble approach to gaining knowledge and solving problems.

Like Prometheus stealing fire for humanity, AI has emerged as a potent yet not fully grasped tool to propel our civilization forward. We need the humility, the courage – and the freedom – to take this tool and use it.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.

© 2024 World Economic Forum