Summary of Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data, by Avi Amalanshu et al.
Decoupled Vertical Federated Learning for Practical Training on Vertically Partitioned Data
by Avi Amalanshu, Yash Sirvi, David I. Inouye
First submitted to arXiv on: 6 Mar 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary Vertical Federated Learning (VFL) enables collaborative learning among clients that hold disjoint features of common entities. However, traditional VFL lacks fault tolerance: every participant and every connection is a single point of failure. To address this limitation, we propose Decoupled VFL (DVFL), which decouples training between communication rounds using local unsupervised objectives and supports redundant aggregators. As secondary benefits, DVFL can improve data efficiency and provides immunity against gradient-based attacks. We implement DVFL for split neural networks with a self-supervised autoencoder loss and show that on an MNIST task it achieves accuracy comparable to standard VFL (97.58% vs 96.95%), even when VFL runs under ideal, failure-free conditions. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Federated Learning is when computers work together on a common problem without sharing their private data. But what happens if one of these computers stops working? That can cause problems for the whole group. To solve this issue, we created Decoupled Vertical Federated Learning, which lets the other computers keep working even if one fails. This also helps make the system more secure and efficient. We tested our new method on a simple task and found that it worked well, even when everything was running smoothly and the extra fault tolerance was never needed. |
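The medium-difficulty summary above describes DVFL's core idea: guests train on their own feature blocks with a local unsupervised (autoencoder) objective, so no label gradients ever flow back from the host. The toy sketch below illustrates that decoupling on synthetic data. It is not the authors' implementation: the linear autoencoder, the data, and all names (`train_guest_autoencoder`, the host's logistic head) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two "guest" parties hold disjoint feature
# blocks of the same 200 entities; a "host" holds the labels.
n, d1, d2 = 200, 6, 4
X1 = rng.normal(size=(n, d1))                 # guest 1's features
X2 = rng.normal(size=(n, d2))                 # guest 2's features
y = (X1[:, 0] + X2[:, 0] > 0).astype(float)   # labels at the host

def train_guest_autoencoder(X, steps=300, lr=0.05):
    """Each guest trains a linear autoencoder on its own features --
    no labels and no gradients arrive from the host, which is the
    'decoupling' between communication rounds."""
    d = X.shape[1]
    W_enc = rng.normal(scale=0.1, size=(d, d))
    W_dec = rng.normal(scale=0.1, size=(d, d))
    for _ in range(steps):
        Z = X @ W_enc                         # local embedding
        err = Z @ W_dec - X                   # reconstruction error
        W_dec -= lr * (Z.T @ err) / len(X)
        W_enc -= lr * (X.T @ (err @ W_dec.T)) / len(X)
    return W_enc

# Guests train independently, then send only their embeddings.
Z = np.hstack([X1 @ train_guest_autoencoder(X1),
               X2 @ train_guest_autoencoder(X2)])

# The host (or any redundant aggregator) trains its own head on the
# received embeddings -- host-side gradients never reach the guests.
w = np.zeros(Z.shape[1])
for _ in range(400):
    p = 1.0 / (1.0 + np.exp(-(Z @ w)))
    w -= 0.1 * (Z.T @ (p - y)) / n

acc = ((Z @ w > 0) == (y == 1)).mean()
print(f"host accuracy on guest embeddings: {acc:.2f}")
```

Because each guest's update depends only on its own reconstruction loss, a failed guest or aggregator stalls nothing else, and no label gradients are exposed to gradient-based attacks; the paper's actual system uses split neural networks rather than this linear toy.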
Keywords
* Artificial intelligence * Autoencoder * Federated learning * Self-supervised * Unsupervised