Dual Model Replacement: Invisible Multi-target Backdoor Attack Based on Federated Learning

by Rong Wang, Guichen Zhou, Mingjun Gao, Yunpeng Xiao

First submitted to arXiv on: 22 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper proposes a novel backdoor attack method for federated learning that combines TrojanGan steganography, data poisoning, and model training. The authors design a TrojanGan-based backdoor trigger that can be attached to images as invisible noise, improving both the trigger's concealment and its robustness to data transformations. They also introduce a combination trigger attack method that enables multi-target backdoor triggering and enhances robustness. To overcome the limitations of local training mechanisms, the paper further proposes a dual model replacement backdoor attack algorithm that improves the attack success rate while maintaining federated learning performance. Experiments demonstrate that the approach achieves high concealment, diverse triggers, and good attack success rates.
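
The poisoning step can be pictured with a short sketch. This is a minimal illustration only, assuming images are float arrays in [0, 1]: the paper's actual trigger comes from a TrojanGan steganography generator, which is not reproduced here, so a fixed low-amplitude noise pattern stands in for it, and all names (`make_trigger`, `poison_batch`, `poison_multi_target`) are hypothetical.

```python
# Illustrative sketch only: a fixed low-amplitude noise pattern stands in
# for the paper's TrojanGan-generated trigger; all names are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=0)

def make_trigger(shape=(32, 32, 3), amplitude=4 / 255):
    # Stand-in for a TrojanGan-style trigger: fixed noise whose amplitude
    # is small enough to be visually imperceptible on [0, 1] images.
    return rng.uniform(-amplitude, amplitude, size=shape).astype(np.float32)

def poison_batch(images, labels, trigger, target_label, rate=0.1):
    # Add the trigger to a random fraction of images and flip their labels
    # to the attacker's target class (classic data poisoning).
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=max(1, int(rate * len(images))), replace=False)
    images[idx] = np.clip(images[idx] + trigger, 0.0, 1.0)
    labels[idx] = target_label
    return images, labels

def poison_multi_target(images, labels, trigger_to_target, rate=0.05):
    # Multi-target variant: each trigger is bound to its own target class,
    # loosely mirroring the paper's combination-trigger idea. The poisoned
    # subsets may overlap, which is acceptable for a sketch.
    for trigger, target in trigger_to_target:
        images, labels = poison_batch(images, labels, trigger, target, rate)
    return images, labels

# Example: two triggers bound to two different target classes.
x = rng.random((16, 32, 32, 3), dtype=np.float32)
y = rng.integers(0, 10, size=16)
x_p, y_p = poison_multi_target(x, y, [(make_trigger(), 3), (make_trigger(), 7)])
```

Binding each trigger to its own target class, as in the last call, is the essence of a multi-target backdoor: at inference time, adding a given trigger to an input steers the model toward that trigger's class.
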
Low Difficulty Summary (original content by GrooveSquid.com)
This research explores ways to secretly manipulate neural networks trained through collaborative learning. The authors create a hidden “backdoor” trigger that can be added to images without being noticed. They then use this trigger to launch attacks on multiple targets at once. To make these attacks more effective, the researchers also develop new techniques for training and combining models. The results show that their methods keep the attack hidden while still succeeding at a high rate.
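
The model replacement idea underlying the approach can also be sketched. The snippet below shows the standard single model replacement mechanic under plain FedAvg (scaling the malicious update so it dominates the average), not the authors' dual model replacement algorithm, whose details are in the paper; all names and the toy numbers are illustrative.

```python
# A minimal sketch, assuming plain FedAvg with n equally weighted clients.
# This is the standard single model replacement mechanic that the paper's
# dual model replacement algorithm builds on; names are illustrative.
import numpy as np

def fedavg_step(global_w, client_updates):
    # Server aggregation: apply the average client update to the global model.
    return global_w + np.mean(client_updates, axis=0)

def replacement_update(global_w, backdoored_w, n_clients):
    # Attacker's update, scaled by the number of clients so that, when the
    # benign updates roughly cancel out, the aggregate lands on backdoored_w.
    return n_clients * (backdoored_w - global_w)

# Toy round: 9 benign clients with small honest updates, 1 attacker.
dim = 4
global_w = np.zeros(dim)
backdoored_w = np.full(dim, 0.5)  # model the attacker trained on poisoned data
benign = [np.random.default_rng(i).normal(scale=0.01, size=dim) for i in range(9)]
malicious = replacement_update(global_w, backdoored_w, n_clients=10)
new_global = fedavg_step(global_w, benign + [malicious])
print(new_global)  # ~= backdoored_w, up to the small benign noise
```

The scaling factor equals the number of clients because FedAvg divides the summed updates by that count; when the benign updates are small or roughly cancel, the aggregated model lands near the attacker's backdoored weights.
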

Keywords

  • Artificial intelligence
  • Federated learning