Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques
Abstract: The rapid growth of demanding applications in domains such as multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, the typical computing paradigms of embedded systems and data centers are strained to meet the worldwide demand for high performance. Concurrently, over the last 15 years, the semiconductor industry has established power efficiency as a first-class design concern. As a result, the computing systems community has been forced to seek alternative design approaches that deliver both high-performance and power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, resulting in novel approximation techniques for all layers of the traditional computing stack. More specifically, during the last decade, a plethora of approximation techniques have been proposed in the literature for software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of a comprehensive survey on Approximate Computing. It reviews the motivation, terminology, and principles of Approximate Computing, classifies the state-of-the-art software & hardware approximation techniques, presents their technical details, and reports a comparative quantitative analysis.