Summary of Logits Poisoning Attack in Federated Distillation, by Yuhan Tang et al.
Logits Poisoning Attack in Federated Distillation
by Yuhan Tang, Zhiyuan Wu, Bo Gao, Tian Wen, Yuwei Wang, Sheng Sun
First submitted to arXiv on: 8 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Federated Distillation (FD) is a distributed machine learning paradigm that transfers knowledge between devices without sharing raw data. It optimizes local models via distillation, preserving the privacy of client data and removing the need to upload large model parameters. Although FD has gained popularity, poisoning attacks within this framework remain largely unstudied, leaving devices vulnerable to malicious actions. To address this gap, we introduce FDLA, a poisoning attack method tailored to FD that manipulates the logits exchanged between clients in order to mislead sample discrimination and degrade client model accuracy (see the illustrative sketch after the table). Our experiments demonstrate that FDLA effectively compromises model performance across various datasets and settings. |
| Low | GrooveSquid.com (original content) | FD is a way to share knowledge between devices without sharing data. This helps protect private information while still learning from others. FD uses distillation, which makes models better by comparing them with each other. Without understanding attacks on this system, it is hard to keep devices safe. We created FDLA, an attack method that tricks devices into making mistakes. Our tests show that FDLA is very good at breaking models and makes them perform poorly. |
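
To make the logit-poisoning idea concrete, here is a minimal, hypothetical sketch of how a malicious client might perturb the logits it uploads in one federated-distillation round. The swap strategy, function names, and shapes below are illustrative assumptions only, not the exact FDLA procedure from the paper.

```python
# Hedged sketch: a malicious client tampering with the logits it would normally
# upload in a federated-distillation round. The specific perturbation (swapping
# each sample's top and bottom class logits) is an assumption for illustration,
# not the paper's actual FDLA algorithm.
import numpy as np

def poison_logits(logits: np.ndarray) -> np.ndarray:
    """Swap each sample's highest and lowest class logits so the shared
    'knowledge' steers benign clients toward incorrect classes."""
    poisoned = logits.copy()
    top = np.argmax(logits, axis=1)      # currently most-confident class per sample
    bottom = np.argmin(logits, axis=1)   # currently least-confident class per sample
    rows = np.arange(logits.shape[0])
    poisoned[rows, top] = logits[rows, bottom]
    poisoned[rows, bottom] = logits[rows, top]
    return poisoned

# Example: 4 samples, 5 classes of per-sample logits a client would upload
clean = np.random.randn(4, 5)
uploaded = poison_logits(clean)  # the server aggregates these misleading logits
```

The key point the sketch illustrates is that FD shares only logits, so an attacker does not need to corrupt model weights: distorting the uploaded logits is enough to mislead the distillation step on honest clients.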
Keywords
* Artificial intelligence
* Distillation
* Machine learning