Abstract:
This paper presents the Eidetic architecture, an SRAM-based ASIC neural network accelerator that eliminates the need to continuously load weights from off-chip memory while also minimizing off-chip accesses for intermediate results. Using in-situ arithmetic in the SRAM arrays, the architecture supports a variety of precision types, enabling efficient inference. We also present different data mapping policies for matrix-vector-based networks (RNNs and MLPs) on the Eidetic architecture and describe the tradeoffs involved. With this architecture, multiple layers of a network can be mapped concurrently, storing both the layer weights and the intermediate results on chip and removing the energy and latency penalty of off-chip memory accesses. We evaluate Eidetic on the encoder of Google's Neural Machine Translation system (GNMT) and demonstrate a 17.20× increase in throughput and a 7.77× reduction in average latency over a single TPUv2 chip.