Poisoning Attacks on Federated Learning for Autonomous Driving
by Sonakshi Garg, Hugo Jönsson, Gustav Kalander, Axel Nilsson, Bhhaanu Pirange, Viktor Valadi, Johan Östman
First submitted to arXiv on: 2 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on the paper’s arXiv page. |
Medium | GrooveSquid.com (original content) | Federated Learning (FL) is a decentralized learning paradigm that lets multiple parties collaboratively train a model while keeping their data confidential. This paper introduces two novel poisoning attacks on FL tailored to regression tasks in autonomous driving: FLStealth and the Off-Track Attack (OTA). FLStealth is an untargeted attack that degrades global model performance while submitting benign-looking updates, whereas OTA is a targeted attack that changes the global model’s behavior when it is exposed to a specific trigger. Comprehensive experiments on vehicle trajectory prediction show that, among five untargeted attacks, FLStealth is the most successful at bypassing the defenses employed by the server, and that common defenses fail to mitigate OTA, highlighting the need for new defensive mechanisms against targeted attacks (a generic illustration of such poisoning appears after this table). |
Low | GrooveSquid.com (original content) | This paper is about ways to attack a system that helps self-driving cars learn from each other without sharing their data. The authors found two such attacks, called FLStealth and OTA. FLStealth makes the shared model perform poorly overall, while OTA changes how the model behaves when it sees a specific trigger. Experiments show that these attacks work well and are hard to defend against, which is why new ways of keeping the system safe are needed. |
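To make the attack surface concrete, here is a minimal sketch of untargeted model poisoning against federated averaging. It is a generic illustration only: the sign-flipped, down-scaled update stands in for a “benign-looking but harmful” contribution and is not the paper’s actual FLStealth algorithm; all function names and the toy objective are assumptions. A targeted attack like OTA would instead train on trigger-bearing samples relabeled with an attacker-chosen trajectory.

```python
import numpy as np

def local_update(weights, grad, lr=0.1):
    # Honest client: one plain gradient-descent step.
    return weights - lr * grad

def poisoned_update(weights, grad, lr=0.1, scale=0.5):
    # Malicious client: ascend the loss instead (sign-flipped gradient),
    # scaled down so the update stays close to honest ones in magnitude
    # and is harder for norm-based server defenses to flag.
    return weights + lr * scale * grad

def fedavg(client_models):
    # Server: aggregate client models by simple averaging (FedAvg).
    return np.mean(np.stack(client_models), axis=0)

rng = np.random.default_rng(0)
global_w = rng.normal(size=4)

# Toy objective: minimize 0.5 * ||w||^2, so the gradient is w itself.
honest = [local_update(global_w, global_w) for _ in range(9)]
attacker = [poisoned_update(global_w, global_w)]

new_w = fedavg(honest + attacker)
print(np.linalg.norm(new_w), "vs honest-only:",
      np.linalg.norm(fedavg(honest)))
```

Running this shows the aggregated model drifting away from the honest consensus even with a single attacker among ten clients. The paper’s finding is that carefully crafted updates of this kind can slip past the server-side defenses it evaluates.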
Keywords
- Artificial intelligence
- Federated learning
- Regression