Everyone wants this pricey chip for their AI. But what is it?
The Nvidia H100 has cornered the market with its impressive processing capabilities. Image: Unsplash/vishnumaiea
- Training generative artificial intelligence (AI) consumes huge amounts of data.
- One chip has cornered the market with its impressive processing capabilities.
- But developing AI responsibly is still a concern, as a World Economic Forum summit underlined.
It’s been hailed as the microchip that’s powering the artificial intelligence (AI) boom. At $40,000, it’s not cheap, but this chip is already being replaced by a more powerful version of itself. Introducing the Nvidia H100.
Creating generative AI applications is a costly business, both in cash terms and in the amount of computing power needed to train AI. In fact, it’s so demanding that only a handful of chips are able to cope.
From Moon landings to generative AI
Of course, microchips are all around us. The laptop on which I’m writing this and the device you’re reading it on couldn’t function without them. And, as our expectations of what our devices can do expand, they’ve been getting more powerful.
Way back in 1965, Gordon Moore, co-founder of tech giant Intel, predicted that the number of transistors in a microchip – and hence its processing power – would double every year; he later revised that to every two years. The prediction became known as Moore’s Law, and experts say it still broadly holds true today.
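As a rough sanity check on that claim, the compounding is easy to sketch in Python. The baseline figures below – the 1971 Intel 4004’s roughly 2,300 transistors and the H100’s roughly 80 billion – are public, but the projection itself is only a back-of-the-envelope illustration of two-year doubling, not how anyone actually forecasts chip development:

```python
# Toy illustration of Moore's Law compounding: project a transistor
# count forward assuming a steady doubling every two years.

def projected_transistors(base: int, base_year: int, year: int,
                          doubling_years: float = 2.0) -> float:
    """Project a transistor count assuming steady exponential doubling."""
    doublings = (year - base_year) / doubling_years
    return base * 2 ** doublings

# The Intel 4004 (1971) had about 2,300 transistors; project to 2022,
# the year the H100 (~80 billion transistors) launched.
estimate = projected_transistors(2_300, 1971, 2022)
print(f"{estimate:.2e}")  # on the order of 10^11 -- the same ballpark
                          # as the H100's ~8e10 transistors
```

The projection lands within a factor of two of the H100’s actual transistor count – which is exactly the kind of loose, order-of-magnitude agreement Moore’s Law is usually credited with.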
You don’t need to be a computer expert to appreciate the change in processing power – the numbers speak for themselves. The Apollo Guidance Computer that navigated the Moon missions had roughly 4 kilobytes of working memory. The H100 has 80 gigabytes – that’s 80 billion bytes.
AI demands ever faster processors
Remarkable though that is, H100 is also part of another revolution in computer power. H100 is a GPU – a graphics processing unit – which, as the name suggests, is a type of microprocessor originally developed to deliver on-screen graphics for games and videos.
In fact, Nvidia created the first GPU back in 1999. Since then, their superior processing power has made GPUs the default choice for the massive amounts of data handling needed for AI.
Bloomberg estimates that Nvidia controls about 80% of the market for the accelerators in AI data centres operated by Amazon, Google Cloud and Microsoft. The other key players are Advanced Micro Devices Inc (AMD) and Intel, according to Bloomberg’s analysis.
As mentioned at the start, the H100 is about to be superseded by the even more powerful GH200 Grace Hopper superchip, which has 282 gigabytes of memory and is named for US programming pioneer Grace Murray Hopper.
“To meet surging demand for generative AI, data centres require accelerated computing platforms with specialized needs,” said Jensen Huang, founder and CEO of Nvidia at the launch of the GH200.
Nvidia says the Grace Hopper has been “created to handle the world’s most complex generative AI workloads, spanning large language models… and vector databases”. And training AI really does need an awful lot of data.
“Smart AI algorithms in all their glory, but without a steady supply of large amounts of reliable data, they are useless,” said Rune Mejer Rasmussen of IBM in a blog. “These algorithms require access to enormous amounts of data. It could be up to petabyte levels.” A petabyte is 1,000,000,000,000,000 bytes.
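To put those figures together – using the decimal units quoted above and the H100’s 80 gigabytes of memory from earlier in the article – a quick sketch shows how many such cards a petabyte of data would span if it all had to sit in GPU memory at once. This is an illustrative calculation only, not a statement about how AI data centres are actually provisioned:

```python
# Scale check with the figures quoted in the article (decimal units):
# a petabyte is 10^15 bytes, and one H100 carries 80 GB on board.

PETABYTE = 10 ** 15          # bytes, as quoted in the article
H100_MEMORY = 80 * 10 ** 9   # the H100's 80 GB, in bytes

# Number of H100s whose combined memory equals one petabyte
cards_needed = PETABYTE / H100_MEMORY
print(cards_needed)  # 12500.0
```

In other words, a petabyte-scale training dataset is roughly 12,500 times larger than even the H100’s famously large memory – which is why training data is streamed through GPUs rather than loaded all at once.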
Bloomberg reported that, according to Nvidia, the H100 is four times faster than its predecessor, the A100, at training what are known as large language models, and 30 times faster at responding to prompts from users – a critical performance advantage for AI developers.
Developing responsible AI
But the rapid pace of development has left many worried that regulation is not keeping up. The World Economic Forum has launched the AI Governance Alliance, bringing together industry leaders, governments, academic institutions and civil society organizations, to champion responsible global design and release of transparent and inclusive AI systems.
How is the World Economic Forum creating guardrails for Artificial Intelligence?
In April 2023, the Forum convened “Responsible AI Leadership: A Global Summit on Generative AI” in San Francisco, which was attended by over 100 technical experts and policymakers. Together they agreed on 30 recommendations for ensuring AI is developed responsibly, openly and in the service of social progress.
Writing in Agenda, Cathy Li, Head of AI, Data and Metaverse at the Forum and Jeremy Jurgens, a Forum Managing Director, said: “Generative AI … is proving to be a transformative force on our economies and societies.
“Its rapid development in recent months underscores AI's vital role as the foundational infrastructure upon which the digital landscape will evolve – and reinforces the need for ensuring its responsible, ethical use.”
License and Republishing
World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.
The views expressed in this article are those of the author alone and not the World Economic Forum.