Abstract:
Although resistive RAM (ReRAM) can support highly efficient matrix–vector multiplication, which is very useful for machine learning and other applications, nonideal hardware behavior such as stuck-at faults (SAFs) and IR drop is an important concern in building ReRAM crossbar array-based deep learning accelerators. Previous work has addressed the nonideality problem through either hardware redundancy, which requires a permanent increase in hardware cost, or software retraining, which may be even more costly or outright unacceptable because it requires a training dataset and incurs high computation overhead. In this article, we propose a very lightweight method that can be applied on top of existing hardware or software solutions. Our method, called forward-parameter tuning (FPT), exploits a statistical property of the activation data in neural network layers and can mitigate the impact of mild nonidealities in ReRAM crossbar arrays (RCAs) for deep learning applications without using any additional hardware, a dataset, or gradient-based training. Our experimental results on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets with binary and multibit networks demonstrate that our technique is very effective, both alone and in combination with previous methods, up to a 20% fault rate, which is higher than what even some previous remapping methods can tolerate. We also evaluate our method in the presence of other nonidealities, such as device variability and IR drop. Furthermore, we provide an analysis based on the concept of the effective fault rate (EFR), which not only demonstrates that EFR can be a useful tool for predicting the accuracy of faulty RCA-based neural networks but also explains why mitigating the SAF problem is more difficult in multibit neural networks.