

Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning

by Chenan Wang, Pu Zhao, Siyue Wang, Xue Lin

First submitted to arXiv on: 30 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper proposes a novel approach to enhance the resilience of Deep Neural Network (DNN) models against Fault Injection Attacks (FIAs). FIAs manipulate model parameters to disrupt inference execution, causing performance degradation. The authors introduce Contrastive Learning (CL) into the DNN training and inference pipeline, enabling self-resilience under FIAs. The proposed CL-based FIA Detection and Recovery (CFDR) framework detects FIAs in real time using only a single batch of testing data and recovers effectively even with limited unlabeled testing data. Experimental results on the CIFAR-10 dataset demonstrate promising detection and recovery effectiveness against various types of FIAs.
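To make the detection idea concrete, here is a minimal numpy sketch of one plausible way contrastive embeddings could flag a fault injection: cache embeddings of a single unlabeled batch at deployment, then watch for a drop in cosine similarity when the same batch is re-embedded under the current (possibly corrupted) weights. This is an illustrative toy, not the paper's CFDR implementation; the linear encoder, the noise-based "fault", and the 0.1 threshold are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a contrastively trained encoder: a single linear layer.
# (CFDR operates on a full DNN; this layer is purely illustrative.)
W = rng.normal(size=(32, 16))

def embed(x, weights):
    """Project inputs and L2-normalize, as contrastive embeddings typically are."""
    z = x @ weights
    return z / np.linalg.norm(z, axis=1, keepdims=True)

# At deployment time, cache reference embeddings of one batch of unlabeled data.
batch = rng.normal(size=(64, 32))
z_ref = embed(batch, W)

def consistency(weights, noise=0.05):
    """Mean cosine similarity between the cached reference embeddings and
    embeddings of a lightly augmented view under the current weights."""
    view = batch + noise * rng.normal(size=batch.shape)
    z_now = embed(view, weights)
    return float(np.mean(np.sum(z_ref * z_now, axis=1)))

baseline = consistency(W)  # clean weights: similarity stays high

# Simulate a fault injection attack: overwrite a slice of the stored
# parameters with large random values, loosely mimicking corrupting bit-flips.
W_faulty = W.copy()
W_faulty[:8, :] = rng.normal(scale=5.0, size=(8, 16))

attacked = consistency(W_faulty)
detected = attacked < baseline - 0.1  # hypothetical detection threshold
```

Under this sketch, the clean-weight consistency sits near 1.0 while the corrupted weights pull it far lower, so a single batch suffices for detection, which mirrors the single-batch, unlabeled-data property the summary attributes to CFDR.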
Low Difficulty Summary (written by GrooveSquid.com; original content)
Imagine if someone tried to sabotage your computer by making it do silly things or get confused easily. That’s kind of like what hackers might try to do with Deep Neural Networks, which are super powerful programs that can recognize things like faces or objects. This paper is about finding ways to make these networks more resistant to attacks like this. The researchers use a special technique called Contrastive Learning to train the networks so they can detect and recover from such attacks quickly and efficiently. They tested it with real data and showed that it works pretty well!

Keywords

* Artificial intelligence  * Inference  * Neural network