Summary of Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations, by Soyed Tuhin Ahmed et al.
Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations
by Soyed Tuhin Ahmed, Kamal Danouchi, Guillaume Prenat, Lorena Anghel, Mehdi B. Tahoori
First submitted to arXiv on: 23 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Hardware Architecture (cs.AR); Emerging Technologies (cs.ET)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Bayesian Neural Networks (BayNNs) naturally provide uncertainty estimates alongside their predictions, making them suitable for safety-critical applications. Realizing them with memristor-based in-memory computing (IMC) architectures makes them viable for resource-constrained edge applications. Predictive uncertainty alone is not enough, however: the networks must also be inherently robust to noise in computation. Memristor-based IMC is susceptible to manufacturing and runtime variations, drift, and failure, all of which can significantly reduce inference accuracy. To enhance the robustness and inference accuracy of BayNNs deployed in IMC architectures, the authors propose a novel normalization layer combined with stochastic affine transformations (an illustrative sketch of such a layer appears below the table). Empirical results show a graceful degradation in inference accuracy, with an improvement of up to 58.11%. |
Low | GrooveSquid.com (original content) | Bayesian Neural Networks are special kinds of artificial intelligence that can predict things while also telling you how sure they are. They’re good for important jobs where things might go wrong. To make them work on tiny devices like smartphones or smart home gadgets, we use a special kind of computer chip called a memristor. But these chips can get old and stop working well, which is bad news for the AI. In this paper, scientists came up with a way to make the AI better at dealing with these problems. They did it by adding some new steps to the way the AI processes information. It worked! The AI got better at doing its job even when things went wrong. |
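The summaries above describe the proposed method only at a high level. As a rough illustration of what "a normalization layer combined with stochastic affine transformations" could look like, here is a minimal PyTorch sketch. It assumes batch-norm-style statistics and a Bernoulli mask applied to the learned scale and shift; the class name `StochasticAffineNorm`, the `drop_p` rate, and the masking scheme are illustrative assumptions, not the authors' actual inverted-normalization design.

```python
import torch
import torch.nn as nn

class StochasticAffineNorm(nn.Module):
    """Hypothetical sketch: a normalization layer whose learned affine
    (scale/shift) parameters are randomly masked on every forward pass.
    Not the paper's exact inverted-normalization layer."""

    def __init__(self, num_features: int, drop_p: float = 0.1, eps: float = 1e-5):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_features))   # learned scale
        self.beta = nn.Parameter(torch.zeros(num_features))   # learned shift
        self.drop_p = drop_p                                   # assumed mask rate
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize each feature using batch statistics (batch-norm style).
        mean = x.mean(dim=0, keepdim=True)
        var = x.var(dim=0, unbiased=False, keepdim=True)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)

        # Stochastic affine transform: a Bernoulli mask randomly zeroes the
        # scale/shift per feature, so repeated forward passes behave like
        # samples from an approximate posterior (dropout-style Bayesian idea).
        keep = torch.bernoulli(torch.full_like(self.gamma, 1.0 - self.drop_p))
        return keep * (self.gamma * x_hat + self.beta)

# Usage: average several stochastic passes for the prediction and read the
# spread across passes as a predictive-uncertainty estimate.
layer = StochasticAffineNorm(num_features=64)
x = torch.randn(32, 64)
samples = torch.stack([layer(x) for _ in range(10)])
prediction, uncertainty = samples.mean(dim=0), samples.std(dim=0)
```

Because the randomness lives in the affine parameters rather than the activations, a layer of this kind maps naturally onto memristor-based IMC hardware, where scale-and-shift operations are cheap and inherently noisy; the sketch above only mimics that behavior in software.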
Keywords
* Artificial intelligence
* Inference