Parallel computers are fundamentally more energy efficient than serial computers

Energy consumption, and in turn heat dissipation, is a core obstacle in the quest to engineer ever faster computers. We show that massive parallelization of computing hardware and software offers a way to increase the energy efficiency of computers by orders of magnitude.
Published in Physics

Over the decades, the size of computer chips has steadily decreased while their speed has increased. This was accomplished in part by shrinking integrated circuits, which in turn reduced the heat they dissipate while performing computations. This trend was formalised in Moore's 'law' [1]. However, there is consensus that the trend will not hold [2], and new ways of improving computers have to be found.

To envision new paradigms, it is helpful to understand the fundamental limits on the energy efficiency of computers. One such limit is the Landauer bound [3], which states that even for an optimally designed computer there is always an energy cost, or heat dissipation, associated with a logically irreversible computation: W = kT ln 2 per logically irreversible bit operation, where k is the Boltzmann constant and T the temperature of the processor.
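To get a feel for the scale of this bound, it can be evaluated numerically at room temperature (a simple illustration, not taken from the paper):

```python
from math import log

k_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
T = 300.0           # room temperature in kelvin

# Minimum heat dissipated by one logically irreversible bit operation
W = k_B * T * log(2)
print(f"Landauer bound at {T:.0f} K: {W:.2e} J per bit")  # ~2.87e-21 J
```

At roughly 3 zeptojoules per bit, the bound is many orders of magnitude below what today's transistors dissipate per switching event.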

What Landauer's bound does not treat is the fact that any real computation not only processes many logical operations, but does so in finite time. A paradigm that optimises a computer's energy cost, but only at the price of computing a given problem infinitely slowly, would satisfy no one.

In our work 'Fundamental energy cost of finite-time parallelizable computing', we explored the fundamental finite-time limits on the energetic cost of computation. We show that, when the time per computation is limited, an additional energetic cost must be paid for each operation, and this cost increases steeply the faster the computation is to be performed. This is a problem when trying to increase the performance of a single-core serial computer: such a computer performs one calculation after another, so higher performance requires a higher processor frequency. The increased frequency leaves less time per calculation, which drastically increases the energetic cost per calculation and thus decreases efficiency. In contrast, the performance of a parallel computer can be increased by adding more computing cores, without increasing its computing frequency. Therefore, the performance of a parallel computer can be increased indefinitely without decreasing efficiency.
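A toy model makes the contrast concrete. Purely for illustration, assume the finite-time cost per operation grows inversely with the time τ allotted to it, W(τ) ≈ kT ln 2 + A/τ, with A a hypothetical device constant (the precise form in the paper differs, but any cost that diverges as τ → 0 gives the same qualitative picture):

```python
from math import log

k_B, T = 1.380649e-23, 300.0
A = 1e-27  # hypothetical device constant in J*s, chosen for illustration
landauer = k_B * T * log(2)

def energy_per_op(tau):
    """Illustrative finite-time cost per operation: Landauer term plus a 1/tau penalty."""
    return landauer + A / tau

ops_total = 1e9  # one billion operations, all finished within one second

# Serial: a single core must run at 1 GHz, leaving tau = 1 ns per operation.
serial = ops_total * energy_per_op(1e-9)

# Parallel: a million cores at 1 kHz each, leaving tau = 1 ms per operation.
parallel = ops_total * energy_per_op(1e-3)

print(f"serial:   {serial:.2e} J")
print(f"parallel: {parallel:.2e} J")
```

With these (made-up) numbers the parallel machine completes the same workload in the same wall time for a few hundred times less energy, because each of its operations runs far from the divergent fast-operation regime.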

Our results agree with the trend in the computing industry: increased performance is achieved by adding more processing cores rather than by substantially increasing processor frequency. We propose that this trend is not caused by engineering challenges alone, but is at least in part governed by the fundamental limits of computation we describe. Future computers will therefore have to rely more and more on parallelization rather than on increased processor frequencies, and developing massively parallel computing algorithms will become correspondingly more important. In our work we explore how much parallelization overhead a parallel algorithm run on a parallel computer can tolerate while still being more energy efficient than a serial algorithm run on a serial computer.
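That break-even can be sketched with a self-contained toy calculation (all constants are hypothetical, and the 1/τ cost model is an illustrative stand-in, not the paper's exact expression): if the parallel algorithm needs c times more operations than the serial one, it still wins energetically as long as c stays below the ratio of the per-operation costs.

```python
from math import log

k_B, T = 1.380649e-23, 300.0
A = 1e-27  # hypothetical device constant in J*s
landauer = k_B * T * log(2)

def cost(tau):
    # Illustrative finite-time cost per operation (Landauer term plus 1/tau penalty)
    return landauer + A / tau

def overhead_breakeven(tau_serial, tau_parallel):
    """Largest operation-count overhead factor c for which a parallel run
    (c times more operations, each performed more slowly on its own core)
    costs no more energy than the serial run in the same wall time."""
    return cost(tau_serial) / cost(tau_parallel)

# A 1 GHz serial core versus many 1 kHz parallel cores:
c_max = overhead_breakeven(1e-9, 1e-3)
print(f"parallel wins up to a ~{c_max:.0f}x operation overhead")
```

The design point is that the threshold depends only on the per-operation costs at the two speeds, so the slower the parallel cores can afford to run, the more algorithmic overhead they can absorb.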

It is important to point out that the parallel paradigm discussed here is not merely parallel in the sense of 10-1000 cores, but parallel in the sense that the number of cores scales linearly with the size of the problem. The divergence of the energy cost per computation is traded for a divergence in the number of cores needed. Yet, for sufficiently small overheads, the parallel computer still scales more favourably in energy.

[1] Moore, G. E. Cramming more components onto integrated circuits. Electronics 38, 114-117 (1965).
[2] Theis, T. N. & Wong, H. S. P. The End of Moore's Law: A New Beginning for Information Technology. Computing in Science Engineering 19, 41-50 (2017).
[3] Landauer, R. Irreversibility and Heat Generation in the Computing Process. IBM Journal of Research and Development 5, 183-191 (1961).
