Summary of A GAN-based Data Poisoning Framework Against Anomaly Detection in Vertical Federated Learning, by Xiaolin Chen et al.
A GAN-based data poisoning framework against anomaly detection in vertical federated learning
by Xiaolin Chen, Daoguang Zan, Wei Li, Bei Guan, Yongji Wang
First submitted to arXiv on: 17 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: read the original abstract here. |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary In vertical federated learning (VFL), multiple parties collaborate on training a shared model while keeping their raw data private. However, a malicious participant can compromise this process by launching a poisoning attack that degrades the model’s performance. The key obstacle for such an attacker is that the server-side top model is out of reach, so there is no obvious target model to attack. To address this, the authors propose P-GAN, an end-to-end poisoning framework tailored to VFL: semi-supervised learning is used to train a surrogate target model, and a GAN then generates adversarial perturbations against that surrogate. In addition, the authors develop an anomaly detection defense based on a deep auto-encoder (DAE) for the server side. Experiments evaluate both the attack effectiveness of P-GAN and the detection capability of the DAE-based defense (a minimal sketch of the attack idea follows this table). |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Imagine multiple companies working together to train a shared model while keeping their own data private. What if one of them tried to ruin the model by messing with its training? The researchers show how such an attack could work with a method called P-GAN: the attacker first builds a fake copy of the target model, then uses it to add carefully crafted noise that makes the real model worse. They also build a defense, a special algorithm called a deep auto-encoder (DAE), that detects when someone is trying to attack the model. The results show that P-GAN is a strong attack and that the DAE is good at catching it (a sketch of the defense idea follows this table). |
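
The paper’s exact architecture and training procedure are not given in the summaries above, but the core attack idea can be illustrated with a minimal PyTorch sketch: a surrogate model, assumed to have been trained beforehand with semi-supervised pseudo-labels, stands in for the unknown server-side top model, and a small generator learns bounded perturbations that degrade the surrogate’s predictions. All layer sizes, the `epsilon` budget, and the loss weighting below are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions for one passive party's feature slice in VFL.
FEATURE_DIM, EMBED_DIM, NUM_CLASSES = 20, 16, 2

# Surrogate of the unknown server-side model, assumed to be trained
# beforehand with semi-supervised pseudo-labels (training loop omitted).
surrogate = nn.Sequential(
    nn.Linear(FEATURE_DIM, EMBED_DIM), nn.ReLU(),
    nn.Linear(EMBED_DIM, NUM_CLASSES),
)

# Generator that maps clean features to a bounded perturbation.
generator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 64), nn.ReLU(),
    nn.Linear(64, FEATURE_DIM), nn.Tanh(),  # outputs in [-1, 1]
)

epsilon = 0.1  # perturbation budget (assumed hyperparameter)
opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def poisoning_step(x_clean, pseudo_labels):
    """One generator update: push the surrogate's predictions away from the
    pseudo-labels while keeping the perturbation small."""
    perturbation = epsilon * generator(x_clean)
    x_poisoned = x_clean + perturbation
    logits = surrogate(x_poisoned)
    # Maximize the surrogate's loss (negated for gradient descent), plus a
    # small penalty that keeps the poisoned samples close to the clean ones.
    loss = -ce(logits, pseudo_labels) + 0.1 * perturbation.pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return x_poisoned.detach()

# Example usage with random stand-in data.
x = torch.randn(32, FEATURE_DIM)
y_pseudo = torch.randint(0, NUM_CLASSES, (32,))
x_poisoned = poisoning_step(x, y_pseudo)
```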
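
The DAE defense can likewise be sketched as a standard reconstruction-error anomaly detector: an auto-encoder, assumed to be trained on clean data at the server, flags inputs whose reconstruction error exceeds a threshold. The layer widths and the threshold are hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

FEATURE_DIM = 20  # hypothetical dimensionality of inputs seen by the server

# Deep auto-encoder trained only on clean (benign) data; poisoned inputs
# tend to reconstruct poorly and can be flagged.
autoencoder = nn.Sequential(
    nn.Linear(FEATURE_DIM, 8), nn.ReLU(),
    nn.Linear(8, 4), nn.ReLU(),
    nn.Linear(4, 8), nn.ReLU(),
    nn.Linear(8, FEATURE_DIM),
)

def fit(clean_batches, epochs=10, lr=1e-3):
    """Train the auto-encoder to reconstruct clean samples."""
    opt = torch.optim.Adam(autoencoder.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x in clean_batches:
            loss = mse(autoencoder(x), x)
            opt.zero_grad()
            loss.backward()
            opt.step()

def is_anomalous(x, threshold=0.05):
    """Flag samples whose per-sample reconstruction error exceeds threshold."""
    with torch.no_grad():
        errors = (autoencoder(x) - x).pow(2).mean(dim=1)
    return errors > threshold

# Example usage with random stand-in data.
clean = [torch.randn(64, FEATURE_DIM) for _ in range(5)]
fit(clean)
print(is_anomalous(torch.randn(8, FEATURE_DIM)))
```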
Keywords
* Artificial intelligence * Encoder * Federated learning * GAN * Semi-supervised