DocumentCode :
3638095
Title :
Models for energy-efficient approximate computing
Author :
Ravi Nair
Author_Institution :
IBM Thomas J. Watson Research Center, Yorktown Heights, NY 10598
fYear :
2010
Firstpage :
359
Lastpage :
360
Abstract :
We are at the threshold of an explosion in new data, produced not only by large, powerful scientific and commercial computers, but also by billions of low-power devices, from notebooks and mobile interface devices to cell phones and sensors of various kinds. The traditional technique of processing such information by first storing it in databases and then manipulating and serving it through large computers is becoming too expensive. These large, complex systems have a high acquisition cost and also suffer from high running costs, especially in power consumption. Both costs can be contained by recognizing that traditional computing implies a precision that is not needed when processing most of these new types of data. Relaxing that precision, for example the strict guarantees of memory coherence and consistency, can help in the wider exploitation of known energy-efficient modes of computing such as throughput computing. More important, relaxing the requirement of deterministic execution gives us an opportunity to process this vast new data with the same low-power, low-cost technology that was used to generate it in the first place. Such energy-efficient circuits suffer from greater unreliability and variability in performance when used in high-throughput mode, but these problems can be addressed by changing the way we design such systems, by changing the nature of their algorithms, and by modifying our expectations of the quality of the results they produce. We have called this the approximate computing paradigm [1].

Solutions that provide non-exact results have long been used in computing. The precision of numbers represented in computers is limited, so the values of quantities represented in a computer are only approximations of their actual values. Heuristic algorithms produce solutions to hard problems in much less time than optimal algorithms, in exchange for results whose quality may only approach that of the optimal solutions. The difference is that these approaches, while approximate, produce the same results each time they are applied to a piece of data, whereas in approximate computing we do not expect the same results each time the algorithm is exercised on the same data.

There are two sources of imperfection in approximate computing. The first arises from imperfect execution of an algorithm, whether due to problems in the design of the algorithm or of the hardware, faults that occur after the hardware is deployed, variability in the operation of circuits pushed to their design limits, or malicious attacks on systems. The second arises from imperfection in the data stream itself because of missing or modified data, produced intentionally, as through data compression, or unintentionally, as through faulty communication channels. All of these imperfections could be rectified through expensive techniques such as redundancy, conservative design, or conservative device operating ranges. The goal of approximate computing, however, is to combat these sources of imperfection inexpensively and in an energy-efficient manner while producing results that may be different, yet acceptable. Computing models that achieve this goal have to address both the detection and the correction of such imperfections.
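To make the contrast above concrete, the following minimal Python sketch (illustrative only, not from the paper) shows the kind of deterministic approximation the abstract distinguishes from approximate computing: the stored binary value of 0.1 + 0.2 only approximates the exact decimal result, yet the same approximation is reproduced on every run, whereas approximate computing permits run-to-run variation on the same data.

```python
# Deterministic approximation via limited floating-point precision:
# the computed value only approximates the exact decimal 0.3,
# but every run on the same inputs yields exactly the same bits.
x = 0.1 + 0.2
print(repr(x))    # 0.30000000000000004 on every run
print(x == 0.3)   # False: the stored value is an approximation of 0.3
```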
The detection of such imperfections can be done by the user observing and reacting to a wrong result, as in media applications; by the algorithm expecting a range of correct results, as in the estimation technique of [2]; or by run-time monitoring of the system's execution, as in [3]. The correction of system behavior can be done by attempting a different algorithm, as in [2], by patching the code, as in [3], or by repeating the execution. We will argue in this talk that future systems will need to combine all these techniques and integrate
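As a hedged illustration of the detect-and-correct model described above, the Python sketch below detects an out-of-range result from an unreliable, energy-efficient kernel and corrects it by re-execution, falling back to a reliable algorithm if retries are exhausted. It is not code from [2] or [3]; the kernel, noise model, thresholds, and function names are all hypothetical.

```python
import random

def approximate_kernel(x):
    # Hypothetical stand-in for a computation run on unreliable,
    # energy-efficient hardware: the intended result x*x is perturbed.
    return x * x + random.gauss(0.0, 0.5)

def reliable_kernel(x):
    # Conservative fallback path (e.g. a different, exact algorithm).
    return x * x

def detect_and_correct(x, lo, hi, max_retries=2):
    # Detection: accept only results inside an expected range (cf. [2]).
    # Correction: re-execute, then switch algorithms if retries fail.
    for _ in range(max_retries):
        y = approximate_kernel(x)
        if lo <= y <= hi:
            return y
    return reliable_kernel(x)

print(detect_and_correct(3.0, lo=8.0, hi=10.0))
```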
Keywords :
"Approximation algorithms","Energy efficiency","Algorithm design and analysis","Approximation methods","Heuristic algorithms","Computers","Signal processing algorithms"
Publisher :
ieee
Conference_Titel :
2010 ACM/IEEE International Symposium on Low-Power Electronics and Design (ISLPED)
Print_ISBN :
978-1-4244-8588-8
Type :
conf
DOI :
10.1145/1840845.1840921
Filename :
5599045