Your computer is using more power than you think. But what can you do about it?
We don’t always need 100 per cent accuracy to be satisfied with the outcome. Image: REUTERS/Gleb Garan
Your smartphone is far more powerful than the NASA computers that put Neil Armstrong and Buzz Aldrin on the moon in 1969, but it is also an energy hog. In computing, energy use is often treated as secondary to speed and storage, but given the pace and direction of technological advancement, it is becoming a serious environmental concern.
When the cryptocurrency mining company Hut 8 opened Canada’s largest bitcoin mining project outside Medicine Hat, Alta., environmentalists sounded the alarm. The operation consumes 10 times more electricity than any other facility in the city, and that electricity is largely produced by a natural gas-fired power plant.
Globally, greenhouse gas (GHG) emissions from the information and communication technology (ICT) sector are forecast to reach the equivalent of 1.4 gigatonnes (billion metric tonnes) of carbon dioxide annually by 2020. That’s 2.7 per cent of global GHGs and roughly double Canada’s total annual greenhouse gas output.
By designing energy-efficient computer processors, we could reduce energy consumption and cut GHG emissions in places where electricity comes from fossil fuels. I am a computer engineer specializing in computer architecture and arithmetic, and my colleagues and I are confident these benefits can be achieved with almost no impact on computer performance or user convenience.
Powerful connections
The Internet of Things (IoT) — made up of the connected computing devices embedded into everyday objects — is already delivering positive economic and social impacts, transforming our societies, the environment and our food supply chains for the better.
These devices are monitoring and reducing air pollution, improving water conservation and feeding a hungry world. They’re also making our homes and businesses more efficient, controlling thermostats, lighting, water heaters, refrigerators and washing machines.
With the number of connected devices, not counting computers and phones, set to top 11 billion in 2018, the IoT will generate vast amounts of data requiring enormous amounts of computation.
Making computation more energy efficient would save money and reduce energy use. It would also allow the batteries that provide power in computing systems to be smaller or run longer. In addition, calculations could run faster, so computing systems would generate less heat.
Approximate computing
Today’s computing systems are designed to deliver exact solutions, at a high energy cost. But many error-resilient applications, such as image, sound and video processing, data mining, sensor data analysis and deep learning, do not require exact answers.
This unnecessary accuracy and excessive energy expenditure is wasteful. There are limitations to human perception — we don’t always need 100 per cent accuracy to be satisfied with the outcome. For example, minor changes in the quality of images and videos often go unnoticed.
Computing systems can take advantage of these limitations to reduce energy use without having a negative impact on the user experience. “Approximate computing” is a computation technique that sometimes returns inaccurate results, making it useful for applications where an approximate result is sufficient.
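To make the idea concrete, here is a minimal Python sketch of one well-known approximate-computing trick: a truncated multiplier that throws away the low-order bits of its operands before multiplying. This is an illustrative example, not a description of our lab’s designs; in hardware, dropping those bits makes the multiplier circuit smaller and less power-hungry, while this software model only shows how little accuracy is lost. The function name and the number of dropped bits are arbitrary choices for the sketch.

```python
# Minimal sketch of a truncated (approximate) multiplier.
# In hardware, ignoring low-order bits shrinks the multiplier circuit and
# saves energy; here we only model the resulting accuracy trade-off.

def truncated_multiply(a: int, b: int, dropped_bits: int = 8) -> int:
    """Multiply two non-negative integers after discarding their low-order bits."""
    product = (a >> dropped_bits) * (b >> dropped_bits)
    # Shift back so the result is on the same scale as the exact product.
    return product << (2 * dropped_bits)

exact = 51234 * 47865
approx = truncated_multiply(51234, 47865)
print(f"exact:  {exact}")
print(f"approx: {approx}")
print(f"relative error: {abs(exact - approx) / exact:.3%}")  # well under 1%
```

For an image or audio filter built from operations like this, an error of a fraction of a per cent is typically invisible to the user, which is exactly the kind of slack approximate computing exploits.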
At the University of Saskatchewan’s computer engineering lab, we are proposing to design and implement these approximate computing solutions, so that they can optimally trade off accuracy and efficiency across software and hardware. When we applied these solutions to a core computing component of the processor, we found that power consumption dropped by more than 50 per cent with almost no drop in performance.
Flexible precision
Most personal computers today use a standard 64-bit numerical format, meaning they represent each number with 64 binary digits (each a zero or a one) when performing computations.
3D graphics, virtual reality and augmented reality require the 64-bit format to work. But basic audio and image processing can be done with a 32-bit format and still provide satisfying results. Moreover, deep learning applications can even use 16-bit or 8-bit formats due to their error resilience.
The shorter the numerical format, the less energy is used to perform the calculation. We can design flexible yet precise computing solutions that run each application using the most appropriate numerical format, promoting energy efficiency.
For example, a deep learning application using this flexible computing solution could reduce energy consumption by 15 per cent, according to our preliminary experiment. In addition, the proposed solutions can be reconfigured to perform multiple low-precision operations simultaneously, further improving performance.
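As a rough software analogue of this flexible-precision idea, the following Python sketch (using NumPy, and again not drawn from our hardware designs) computes the same dot product in 64-, 32- and 16-bit floating point. Shorter formats store and move fewer bits per value, which is where the hardware energy savings come from; the sketch only shows how small the accuracy penalty can be.

```python
# Sketch: the same dot product in 64-, 32- and 16-bit floating point.
# Shorter formats process fewer bits per value; in hardware that means
# less energy per operation. Here we only measure the accuracy penalty.
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(10_000)          # generated as 64-bit floats
y = rng.random(10_000)

reference = np.dot(x, y)        # 64-bit result used as the baseline
for dtype in (np.float64, np.float32, np.float16):
    result = np.dot(x.astype(dtype), y.astype(dtype))
    error = abs(float(result) - reference) / reference
    print(f"{np.dtype(dtype).name:>8}: relative error = {error:.1e}")
```

In an error-resilient workload such as deep learning inference, relative errors of this size are usually lost in the noise, which is why 16-bit and even 8-bit formats are increasingly common there.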
The IoT holds a great deal of promise, but we must also think about the cost of processing all of this data. With smarter, greener processors, we could help address environmental concerns and slow or reduce computing’s contribution to climate change.