
Let’s Focus: Focused Backdoor Attack against Federated Transfer Learning

by Marco Arazzi, Stefanos Koffas, Antonino Nocera, Stjepan Picek

First submitted to arXiv on: 30 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Cryptography and Security (cs.CR)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Federated Transfer Learning (FTL) is a distributed learning paradigm that splits training into two stages: a server first pre-trains a feature extractor on publicly shared data, and the clients then locally train classification layers on their private datasets. Because the learned features are fixed after pre-training, mounting a backdoor attack in this setting is challenging. This paper nevertheless shows that a malicious client can exploit explainable AI (XAI) and dataset distillation to craft a focused backdoor attack, called FB-FTL: XAI identifies the optimal local position for the trigger, while dataset distillation compresses information about the backdoor class into the trigger itself. In an image classification scenario, FB-FTL achieves an average 80% attack success rate, even against existing Federated Learning defenses.
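
To make the setup concrete, below is a minimal sketch of an FTL client with a backdoor poisoning step, assuming a PyTorch-style pipeline. The trigger patch and its fixed position are placeholders for what FB-FTL would derive via dataset distillation and XAI, and helper names such as `apply_trigger` and `local_step` are illustrative, not from the paper.

```python
# Minimal FTL sketch: frozen server-side feature extractor, locally
# trained classifier head, and a placeholder backdoor poisoning step.
import torch
import torch.nn as nn
import torchvision.models as models

# Server side: pre-trained feature extractor, fixed after pre-training.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()              # expose the 512-d features
backbone.eval()                          # keep batch-norm statistics fixed
for p in backbone.parameters():
    p.requires_grad = False

# Client side: only the classification head is trained on private data.
head = nn.Linear(512, 10)                # e.g., a 10-class image task
optimizer = torch.optim.SGD(head.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def apply_trigger(images, trigger, row, col):
    """Paste a square trigger patch at position (row, col).

    In FB-FTL the position would be chosen via an XAI method and the
    patch content distilled from the backdoor class; here both are
    placeholders."""
    poisoned = images.clone()
    s = trigger.shape[-1]
    poisoned[:, :, row:row + s, col:col + s] = trigger
    return poisoned

trigger = torch.rand(3, 8, 8)            # placeholder for a distilled patch
target_class = 0                         # attacker-chosen backdoor class

def local_step(images, labels, poison_rate=0.2):
    """One local training step on a malicious client."""
    n = int(poison_rate * images.size(0))
    images, labels = images.clone(), labels.clone()
    images[:n] = apply_trigger(images[:n], trigger, row=0, col=0)
    labels[:n] = target_class            # relabel poisoned samples
    with torch.no_grad():
        feats = backbone(images)         # features stay frozen in FTL
    loss = criterion(head(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the head is updated, the backdoor behaviour must be expressible in the frozen feature space, which is why FB-FTL compresses backdoor-class information into the trigger itself.
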
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine a way for different devices to learn together without sharing their data. This is called Federated Transfer Learning (FTL). It is hard to hide a backdoor attack in FTL because the features learned in the first stage cannot be changed afterwards. But this paper finds a way to do it! The authors use techniques called explainable AI and dataset distillation to build an attack that works well: in their tests, inputs carrying the hidden trigger fooled the model into making wrong predictions about 80% of the time.
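
As an illustration of the XAI step mentioned above, the sketch below selects a trigger position from a simple gradient-saliency map. This is one plausible choice, not necessarily the exact method used in the paper; `saliency_map` and `best_patch_position` are hypothetical helpers.

```python
# Hypothetical helper: choose where to place the trigger by sliding a
# patch-sized window over a gradient-saliency map.
import torch
import torch.nn.functional as F

def saliency_map(model, image, label):
    """Per-pixel |d loss / d input|, averaged over colour channels."""
    image = image.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(image.unsqueeze(0)), torch.tensor([label]))
    loss.backward()
    return image.grad.abs().mean(dim=0)          # shape: H x W

def best_patch_position(sal, patch_size):
    """Return the top-left corner of the most influential patch."""
    scores = F.avg_pool2d(sal[None, None], patch_size, stride=1)
    flat_idx = scores.flatten().argmax().item()
    return divmod(flat_idx, scores.shape[-1])    # (row, col)
```

A malicious client could average such maps over a few backdoor-class images and paste the distilled trigger at the returned position.
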

Keywords

» Artificial intelligence  » Classification  » Distillation  » Federated learning  » Image classification  » Transfer learning