
Summary of Peak-Controlled Logits Poisoning Attack in Federated Distillation, by Yuhan Tang et al.


Peak-Controlled Logits Poisoning Attack in Federated Distillation

by Yuhan Tang, Aoxu Zhang, Zhiyuan Wu, Bo Gao, Tian Wen, Yuwei Wang, Sheng Sun

First submitted to arXiv on: 25 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content written by GrooveSquid.com)
The paper proposes an advanced attack method called Peak-Controlled Federated Distillation Logits Attack (PCFDLA), which targets Federated Distillation (FD), a distributed machine learning approach. PCFDLA manipulates the logits exchanged between participants to mislead and degrade client models while remaining stealthy. The authors also introduce a novel metric for evaluating attack efficacy and show that PCFDLA degrades model accuracy more severely than previous attacks. Experimental results across various datasets confirm the effectiveness of PCFDLA against federated distillation systems.

Low Difficulty Summary (original content written by GrooveSquid.com)
The paper is about a new way to hack into a special kind of computer learning that happens across many devices. This “Federated Distillation” lets devices learn from each other without sharing all their secrets. But some bad actors might try to ruin this process by sending fake information. To show how serious this threat is, the authors of the paper build an even sneakier attack of their own. They call it the Peak-Controlled Federated Distillation Logits Attack, or PCFDLA for short. It’s like a super-stealthy ninja that can ruin the learning process without anyone noticing.

Keywords

  • Artificial intelligence
  • Distillation
  • Logits
  • Machine learning