
Harden Deep Neural Networks Against Fault Injections Through Weight Scaling

by Ninnart Fuengfusin, Hakaru Tamukoh

First submitted to arXiv on: 28 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available from the paper's arXiv listing.

Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed method uses a simple yet effective approach to harden DNN weights, which are prone to faults caused by aging, temperature variance, and write errors. By multiplying weights by constants before storing them in a fault-prone medium, and dividing them by the same constants after reading them back, the authors show that this technique significantly improves the Top-1 accuracy of 8-bit fixed-point ResNet50 models under bit-error rates as high as 0.0001. The method is particularly valuable for critical applications where DNNs are deployed on hardware devices vulnerable to faults. The approach is based on the observation that errors from bit-flips behave like additive noise, so dividing the read-back weights by the constants shrinks the absolute error the bit-flips introduce. The authors conduct experiments across four ImageNet 2012 pre-trained models and three data types: 32-bit floating point, 16-bit floating point, and 8-bit fixed point.
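
To make the scaling mechanism concrete, here is a minimal NumPy sketch of the idea. It is not the authors' implementation: the fixed-point step, the scaling constant of 8, the toy weight distribution, and the bit-flip loop standing in for a fault-prone memory are all hypothetical choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_bits(q, ber, rng):
    """Flip each bit of an int8 array independently with probability ber."""
    bits = q.view(np.uint8).copy()
    for b in range(8):
        mask = rng.random(bits.shape) < ber
        bits[mask] ^= np.uint8(1 << b)
    return bits.view(np.int8)

def quantize(x, step):
    # Round to the nearest fixed-point level and clip to the int8 range.
    return np.clip(np.round(x / step), -128, 127).astype(np.int8)

def dequantize(q, step):
    return q.astype(np.float32) * step

w = (0.01 * rng.standard_normal(100_000)).astype(np.float32)  # toy "weights"
step = np.float32(2.0 ** -7)   # hypothetical 8-bit fixed-point step
scale = np.float32(8.0)        # hypothetical scaling constant
ber = 1e-4                     # bit-error rate from the summary

# Baseline: quantize, store on the faulty medium, read back.
w_plain = dequantize(flip_bits(quantize(w, step), ber, rng), step)

# Weight scaling: multiply before storing, divide after reading back,
# so additive bit-flip noise is shrunk by the scaling constant.
w_scaled = dequantize(flip_bits(quantize(w * scale, step), ber, rng), step) / scale

print("mean abs error, plain :", np.abs(w_plain - w).mean())
print("mean abs error, scaled:", np.abs(w_scaled - w).mean())
```

One caveat on this toy: holding the quantization step fixed while scaling the weights up also shrinks quantization error, so the printed gap somewhat overstates the bit-flip effect alone. The key point is the divide-back step, which attenuates the additive flip noise by the scaling constant.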

Low Difficulty Summary (original content by GrooveSquid.com)
A new way has been found to make sure that deep neural networks (DNNs) keep working correctly even when there are errors in the hardware devices they run on. These errors can happen because of things like aging or changes in temperature, and they can cause the DNNs to make mistakes. The authors of this paper propose a simple fix: multiply the numbers that make up the DNNs by a constant before storing them in memory, and divide them back afterwards. This shrinks the effect of any errors that do happen. The authors tested their method on four different DNN models and found that it worked well, even at high error rates.

Keywords

» Artificial intelligence  » Temperature