A new era of computing is coming. How can we make sure it is sustainable?
Sustaining the growth predicted by Moore's Law has required vast spending, investment and collaboration. Image: REUTERS/Mike Blake
Kirk Bresniker
Chief Architect, Hewlett Packard Labs; HPE Fellow; Vice-President, Hewlett Packard Enterprise

Eighty years ago, Alan Turing laid down the mathematical basis of computation. Just a decade later, John von Neumann made computing practical. Advances in information technology since then have been fundamental to global economic growth, fuelled by a collection of critical core technologies for which we have been able to engineer exponentially increasing performance at exponentially decreasing costs.
This remarkable journey is described by Moore’s Law, Intel co-founder Gordon Moore’s 1965 prediction, refined in 1975, that the number of transistors we can pack into a microchip would double every 18-24 months. But Moore himself observed that “the nature of exponentials is that you push them out and eventually disaster happens”.
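To see what that compounding implies, here is a minimal sketch (not from the article) that projects transistor counts under a fixed two-year doubling period, taking the Intel 4004’s roughly 2,300 transistors in 1971 as an assumed starting point:

```python
# Illustrative only: compound growth under a Moore's Law cadence.
# The two-year doubling period and the Intel 4004's ~2,300
# transistors (1971) as a starting point are assumptions for scale.

def transistors(year, base_year=1971, base_count=2_300, doubling_years=2.0):
    """Project transistor count assuming one doubling per period."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1981, 1991, 2001, 2011, 2021):
    print(f"{year}: ~{transistors(year):,.0f} transistors")
```

Twenty-five doublings later, the projection lands in the tens of billions of transistors per chip, which is roughly where flagship processors sat by the early 2020s.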
We now find ourselves, after five decades of incredible growth, at the twilight of Moore’s Law. We need to reinvent our core technologies so we can sustainably and equitably deliver innovation to a computer-hungry world.
Every two years, we create more data than we've created in all of history. Our ambitions are growing faster than our computers can improve.
The cost reductions and performance improvements predicted by Moore’s Law have had a profound democratizing effect on the consumers of technology. However, it has taken a concerted global collaboration coordinating the efforts of academia, government and industry to sustain this engine of global growth.
It has also taken exponentially increasing capital investment and operational cost to build the factories that make the chips, a trend known as Rock’s Law. In line with this prediction, next-generation factories are estimated to cost in excess of $20 billion.
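The same compound-growth arithmetic, applied to costs rather than transistor counts, shows how quickly Rock’s Law reaches that figure. A hedged sketch, using the commonly cited form of the law (fab costs doubling roughly every four years) and an assumed ~$1 billion cost for a year-2000 fab:

```python
# Illustrative only: Rock's Law, in its commonly cited form, has
# fab construction costs doubling roughly every four years. The
# ~$1B cost for a year-2000 fab is an assumed baseline.

def fab_cost_billions(year, base_year=2000, base_cost=1.0, doubling_years=4.0):
    """Project fab cost in billions of dollars."""
    return base_cost * 2 ** ((year - base_year) / doubling_years)

print(f"2018: ~${fab_cost_billions(2018):.0f}B")  # ~$23B, consistent with
                                                  # the >$20B estimate above
```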
Less well known is an important observation that we have not been able to maintain: Dennard scaling. IBM Fellow Robert Dennard observed in 1974 that as transistors shrink (or “scale” down), they get faster and use less power, while also becoming cheaper.
Dennard scaling ended approximately 10 years ago. You can put more transistors on a chip, but they won’t run faster or use less power. To continue improving speed and reducing power, higher levels of integration are required: functions that used to be spread across multiple chips are now integrated onto a single, increasingly massive, chip.
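To make the break concrete, here is a hedged, first-order sketch of classical Dennard scaling versus the post-Dennard regime. The scaling relations are the textbook 1974 approximations, and holding clock frequency flat in the post-Dennard case is a simplification:

```python
# First-order CMOS scaling for one node shrink by a factor k.
# Under ideal Dennard scaling, voltage shrinks with the transistor,
# so power density stays constant even as k^2 more transistors fit
# in the same area. Once voltage stops scaling, that no longer holds.

def dennard(k):
    """Ideal 1974-style scaling: dimensions, voltage, capacitance ~ 1/k."""
    freq = k                      # gate delay falls ~1/k, so clocks rise ~k
    power = (1/k) * (1/k**2) * k  # P ~ C * V^2 * f = 1/k^2 per transistor
    density = power * k**2        # k^2 more transistors: density constant
    return freq, power, density

def post_dennard(k):
    """Voltage fixed: capacitance still shrinks, but V^2 does not."""
    freq = 1.0                    # clocks held flat to stay inside thermal limits
    power = (1/k) * 1.0 * 1.0     # P ~ C * V^2 * f = 1/k per transistor
    density = power * k**2        # power density now rises ~k per shrink
    return freq, power, density

for name, model in (("Dennard", dennard), ("post-Dennard", post_dennard)):
    f, p, d = model(2.0)  # one full node shrink: k = 2
    print(f"{name}: frequency x{f:.0f}, power/transistor x{p:.2f}, "
          f"power density x{d:.0f}")
```

The point of the comparison is the last column: once voltage stops scaling, every node shrink raises power density, which is why integration, rather than clock speed, now carries the improvement.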
Moore plus Rock minus Dennard yields a recipe for consolidation around a handful of architectures and manufacturers. Innovation outside this shrinking pool of market leaders is limited to peripheral devices, which operate thousands of times more slowly than the highly integrated devices.
Consolidation in both suppliers and architectures, in turn, creates uniformity in software design, and just as a lack of biodiversity leaves an ecosystem vulnerable to disease, uniform IT systems are vulnerable to cyberattack. To make matters worse, technologies conceived in a naïve and disconnected era, when their flaws could be contained, are now embedded in critical social infrastructure, where failure can spread like a disease.
It would have been economically foolish not to exploit the advantages of Moore’s Law while they were available, but they will not be available much longer. The same global community that has sustained Moore’s Law scaling for decades is rapidly reorganizing itself, having reached the consensus that the relatively luxurious era of focusing almost exclusively on the transistor is closing.
While the uncertainty of a world entering the twilight of Moore’s Law might seem daunting, it offers something which we have not experienced in more than four decades: the opportunity to defy conventions.
As we move beyond the fantastically productive Moore’s Law era, we find ourselves on the precipice of a world of multiple, simultaneous inversions of conventional wisdom:
- Training neural networks will replace programming applications;
- Data will be distributed, not hoarded in server farms;
- Memory will be abundant and cheap, not scarce and expensive;
- Special-purpose computation elements will replace generic, general-purpose microprocessors;
- Information technology will create value, not just reduce cost;
- Photons will replace electrons as information carriers;
- The edge and core will be a continuum, not a dichotomy.
“To manage data you need a lot of power; to extract insights into action you need a lot of computational power. Current architectures are just simply not sustainable.” – Antonio Neri, President and CEO, Hewlett Packard Enterprise. World Economic Forum Annual Meeting in Davos, January 2018
In an echo of the tumultuous transition from classical to modern physics, when our science became more powerful yet less intuitive, the sum of these inversions has the potential to be vastly more rewarding. We should embrace the end of the Moore’s Law era and the prospect of new, diverse and vibrant opportunities.
How do we find our way through this inverted world? By shifting our focus from the supply side of the equation to the demand side. While the exponential performance growth of conventional technology is slowing, exponential demand continues unabated: an exponential increase in new data, and an exponential decrease in the time we can afford between sensing and acting.
The amount of information we record as a species doubles every two years – in other words, every other year we create as much new information as has ever been recorded. The vast majority of this information is recorded in traditional data centres today, but as data continues to double that will change dramatically. The percentage of enterprise data that will never be in a data centre may rise from 10% today to more than 70% within five years. Seven out of 10 bytes could be captured, analysed and acted upon without leaving the “intelligent edge”.
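The “as much as has ever been recorded” claim is a direct property of doubling, which a few lines of arithmetic can verify (the units here are arbitrary):

```python
# For any quantity that doubles every period, the newest period's
# output equals everything produced before it combined, plus one
# starting unit: 2^(n-1) versus 2^0 + 2^1 + ... + 2^(n-2) = 2^(n-1) - 1.

periods = 10                       # ten two-year periods, arbitrary units
per_period = [2 ** i for i in range(periods)]

latest, all_prior = per_period[-1], sum(per_period[:-1])
print(latest, all_prior)           # 512 vs 511: the latest period
                                   # matches all of prior history
```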
We live in an impatient world. Consider the rise of “intelligent” social infrastructure: smart power grids, 5G wireless communication, autonomous vehicle fleets. All of these systems have information lifecycles which are dropping from seconds to microseconds.
Take the product of these two trends and we see the model of the future emerge: the vast majority of information will be analysed by artificial intelligence and data analytics algorithms to inform actions taken on our behalf. Nearly all of this will happen in a distributed information infrastructure.
The majority of growth will occur at the intelligent edge with a rich set of novel computational “acceleration” technologies: memory-driven, neuromorphic, photonic, quantum and analog computing will all become indispensable. Established at the edge, these techniques will flow back towards the core data centre, establishing a continuum from edge to core.
Using these trends as our North Star, we can describe the public and private enterprises that will become possible. The hyper-competitive digital enterprise:
- Understands that real-time data is the new source of competitiveness and economic value creation, worth as much as or more than the underlying commodity or process it delivers;
- Enables every physical or digital product, every manufacturing process in the factory, every business process in the enterprise to produce data and leverage that data to become ever more efficient and resilient;
- Pushes analytic and machine learning capability as close as possible to the edge for real-time insight and action;
- Forges a continuum from the intelligent edge to the enterprise core so that every action can complete in the minimum time using the minimum energy;
- Relentlessly and sustainably turns raw data into economic advantage via process improvement, investment strategy, customer satisfaction, warranty reduction and direct monetization.
Distributed systems of novel computation elements are certainly more complex, but in return, they are more equitable, secure, available and sustainable. Those factors make them arguably more just.
The economic opportunities which emerge are the antidote to today’s technical monoculture. As summarized by Hewlett Packard Labs Senior Fellow Stan Williams in his closing address to last year’s IEEE International Conference on Rebooting Computing: “The end of Moore’s Law is the best thing to happen to computing since the start of Moore’s Law. The golden handcuffs are off!”