Approximate computing intentionally introduces controlled inaccuracy into computations to achieve significant improvements in hardware efficiency, such as reduced power, area, or latency. It leverages the inherent error tolerance of many applications, particularly in AI, to optimize resource usage.
Put simply, approximate computing makes computer hardware faster and more power-efficient by allowing it to be slightly less precise in its calculations. This works well for tasks such as AI inference, where a small amount of inaccuracy does not spoil the overall result.
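To make the accuracy-for-efficiency trade-off concrete, below is a minimal, illustrative C sketch of one well-known approximate-adder design from the literature, the lower-part OR adder (LOA). The upper bits are added exactly, while the low k bits are combined with a cheap bitwise OR instead of a full adder; in hardware this shortens the carry chain, cutting delay and power at the cost of small low-order errors. The function name and the parameter k are illustrative choices, not part of the definition above.

```c
#include <stdio.h>
#include <stdint.h>

/* Lower-part OR adder (LOA) sketch: add the bits above position k
   exactly, and approximate the low k bits with a bitwise OR.
   Larger k means cheaper hardware but larger worst-case error. */
static uint32_t approx_add(uint32_t a, uint32_t b, unsigned k) {
    uint32_t mask_hi = ~0u << k;                  /* bits kept exact      */
    uint32_t hi = (a & mask_hi) + (b & mask_hi);  /* exact upper-bit add  */
    uint32_t lo = (a | b) & ~mask_hi;             /* OR approximates low sum */
    return hi | lo;                                /* parts don't overlap  */
}

int main(void) {
    uint32_t a = 1000, b = 2023;
    /* Compare the exact sum against the approximation as k grows. */
    for (unsigned k = 0; k <= 8; k += 4)
        printf("k=%u  exact=%u  approx=%u\n", k, a + b, approx_add(a, b, k));
    return 0;
}
```

Raising k widens the approximated region and saves more circuitry, but increases the possible error, which is exactly the tunable dial between efficiency and accuracy that the definition describes.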
Also known as: approximate arithmetic, inexact computing, error-tolerant computing, relaxed computing.