Constructing Adversarial Examples for Vertical Federated Learning: Optimal Client Corruption through Multi-Armed Bandit
by Duanyi Yao, Songze Li, Ye Xue, Jin Liu
First submitted to arXiv on: 8 Aug 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper investigates the security vulnerabilities of vertical federated learning (VFL) models against adversarial attacks. Specifically, it develops a novel attack that disrupts VFL inference by adaptively corrupting a subset of clients. The authors formulate the attack as an online optimization problem and decompose it into two sub-problems: adversarial example generation and corruption pattern selection. They propose the Thompson sampling with Empirical maximum reward (E-TS) algorithm to efficiently identify the optimal subset of clients to corrupt. This approach significantly reduces the exploration space, making it more efficient than alternative methods. The paper provides a regret-bound analysis and empirical results demonstrating that E-TS reveals the optimal corruption pattern.
Low | GrooveSquid.com (original content) | This research is about making sure that special kinds of computer programs called vertical federated learning models are secure against attackers. These models let different devices or organizations learn from shared information without storing all the data in one place. But sometimes, bad actors can try to trick these models by introducing fake information. The scientists developed a new way for an attacker to find which devices to corrupt so the model becomes less reliable. They used a method called Thompson sampling with Empirical maximum reward (E-TS) that identifies the most effective way to attack the model. This approach makes it easier to figure out which devices are most vulnerable to attacks, so we can better protect our data.
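The corruption-pattern selection step treats each candidate subset of clients as an arm of a multi-armed bandit. As a minimal sketch of that idea, the snippet below runs plain Thompson sampling with Beta posteriors over client subsets; it is not the paper's full E-TS algorithm (which further reduces the exploration space), and the simulated attack-success probabilities and all parameter values are purely hypothetical:

```python
import itertools
import random

def thompson_select(n_clients=5, subset_size=2, n_rounds=500, seed=0):
    """Plain Thompson sampling over client subsets (bandit arms).

    Illustrative sketch only: the attack-success probability of each
    corruption pattern is simulated, not derived from a real VFL model.
    """
    rng = random.Random(seed)
    arms = list(itertools.combinations(range(n_clients), subset_size))
    # Hypothetical ground-truth success probability per corruption pattern.
    p_true = {a: 0.2 + 0.6 * (max(a) / (n_clients - 1)) for a in arms}
    alpha = {a: 1.0 for a in arms}  # Beta-posterior success counts
    beta = {a: 1.0 for a in arms}   # Beta-posterior failure counts
    for _ in range(n_rounds):
        # Sample a success rate from each arm's posterior; play the argmax.
        sampled = {a: rng.betavariate(alpha[a], beta[a]) for a in arms}
        choice = max(sampled, key=sampled.get)
        reward = 1 if rng.random() < p_true[choice] else 0
        alpha[choice] += reward
        beta[choice] += 1 - reward
    # Report the arm with the highest posterior-mean success rate.
    return max(arms, key=lambda a: alpha[a] / (alpha[a] + beta[a]))

best = thompson_select()
```

Here `best` is the subset of client indices the bandit believes is most effective to corrupt; E-TS improves on this baseline by pruning the (combinatorially large) set of arms before sampling.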
Keywords
» Artificial intelligence » Federated learning » Inference » Optimization