Dynamic Precision Analog Computing for Neural Networks

Abstract:

Analog electronic and optical computing offer substantial energy-efficiency advantages over digital computing for accelerating deep learning when operations are executed at low precision. While digital architectures support programmable precision to increase efficiency, today's analog computing architectures support only a single, static precision. In this work, we characterize the relationship between the noise-limited effective number of bits (ENOB) of analog processors and the digital bit precision of quantized neural networks. We propose extending analog computing architectures to support dynamic levels of precision by repeating operations and averaging the result, which suppresses noise: averaging k repetitions reduces the noise standard deviation by a factor of √k, adding roughly 0.5·log₂(k) bits of ENOB. To exploit dynamic precision, we propose a method for learning the precision of each layer of a pre-trained model without retraining the network weights. We evaluate this method on analog architectures subject to shot noise, thermal noise, and weight noise, and find that employing dynamic precision reduces energy consumption by up to 89% for computer vision models such as ResNet-50 and by 24% for natural language processing models such as BERT. In one example, we apply dynamic precision to a shot-noise-limited homodyne optical neural network and simulate inference at an optical energy consumption of 2.7 aJ/MAC for ResNet-50 and 1.6 aJ/MAC for BERT with <2% accuracy degradation, implying that the optical energy consumption is unlikely to be the dominant cost.
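To make the averaging mechanism concrete, the following is a minimal, self-contained sketch, not code from the paper: the function names are hypothetical, and an additive Gaussian read-out noise model is assumed for illustration. It simulates a noisy analog dot product, averages k repeats of the same operation, and reports the measured noise alongside the corresponding ENOB, computed from the standard relation ENOB = (SNR_dB - 1.76) / 6.02.

```python
# Sketch (assumed additive Gaussian noise model, hypothetical names):
# averaging k repeated analog MACs shrinks the noise standard deviation
# by sqrt(k), which raises ENOB by about 0.5*log2(k) bits.
import numpy as np

rng = np.random.default_rng(0)

def noisy_mac(x, w, noise_std):
    """One analog dot product with additive Gaussian read-out noise."""
    return x @ w + rng.normal(0.0, noise_std)

def averaged_mac(x, w, noise_std, repeats):
    """Repeat the same operation and average the results."""
    return np.mean([noisy_mac(x, w, noise_std) for _ in range(repeats)])

def enob(signal_rms, noise_std):
    """Effective number of bits from the SNR: (SNR_dB - 1.76) / 6.02."""
    snr_db = 20.0 * np.log10(signal_rms / noise_std)
    return (snr_db - 1.76) / 6.02

x = rng.standard_normal(256)
w = rng.standard_normal(256)
noise_std = 0.5
signal_rms = np.sqrt(len(x))  # RMS of a dot product of unit-variance vectors

for k in (1, 4, 16, 64):
    outs = [averaged_mac(x, w, noise_std, k) for _ in range(2000)]
    measured_std = np.std(outs)
    print(f"repeats={k:3d}  noise_std={measured_std:.3f}  "
          f"ENOB~{enob(signal_rms, measured_std):.2f} bits")
```

With these (illustrative) parameters, each 4× increase in repeats should add roughly one bit of ENOB, at the cost of 4× more operations and hence energy. This precision-for-energy trade-off is what the proposed per-layer precision learning exploits: layers that tolerate noise run with few repeats, while sensitive layers are averaged more.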