FedHide: Federated Learning by Hiding in the Neighbors
by Hyunsin Park, Sungrack Yun
First submitted to arXiv on: 12 Sep 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary The paper's original abstract, available on arXiv |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary In this paper, researchers propose a novel prototype-based federated learning method that enables clients to learn embedding networks for classification or verification tasks while preserving privacy. The key challenge is training an embedding network that can distinguish between classes without sharing true class prototypes, which could expose sensitive information. To address this, the authors introduce proxy class prototypes, generated by linearly combining each true class prototype with its nearest neighbors. This conceals the true class prototype while still allowing clients to learn discriminative embedding networks. The method is compared with alternative techniques, such as adding random Gaussian noise and random selection under a cosine-similarity constraint. The authors also evaluate robustness against gradient inversion attacks and introduce a measure of prototype leakage. Empirical results on three benchmark datasets (CIFAR-100, VoxCeleb1, and VGGFace2) demonstrate the effectiveness of the method. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper introduces a new way to share information between devices while keeping it private. Imagine you have several friends who each have a small piece of information about someone’s identity. You want to combine that information to get a better picture of who they are, but you don’t want anyone to figure out the individual pieces of information. The authors propose a method called “proxy class prototypes” that does just this. They show how their method can be used in different situations and compare it to other approaches. They also test its performance on three real-world datasets. |
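The proxy-prototype idea from the medium summary can be sketched in a few lines: pick the nearest neighbors of a true class prototype by cosine similarity, then publish a linear mix of the true prototype and those neighbors instead of the prototype itself. This is an illustrative sketch only; the function name, the neighbor pool, and the mixing weight `alpha` are assumptions, not the paper's exact formulation.

```python
import numpy as np

def proxy_prototype(true_proto, neighbor_pool, k=3, alpha=0.5):
    """Hide a true class prototype by linearly combining it with its
    k nearest neighbors (by cosine similarity) from a pool of other
    prototypes. alpha and k are illustrative choices, not values
    taken from the paper.
    """
    # L2-normalize so plain dot products equal cosine similarities
    t = true_proto / np.linalg.norm(true_proto)
    pool = neighbor_pool / np.linalg.norm(neighbor_pool, axis=1, keepdims=True)

    # indices of the k pool prototypes most similar to the true one
    sims = pool @ t
    nn_idx = np.argsort(sims)[-k:]

    # linear combination: part true prototype, part neighbor average
    proxy = alpha * t + (1 - alpha) * pool[nn_idx].mean(axis=0)
    return proxy / np.linalg.norm(proxy)

rng = np.random.default_rng(0)
true_proto = rng.normal(size=16)
neighbor_pool = rng.normal(size=(10, 16))  # stand-in for other classes' prototypes
p = proxy_prototype(true_proto, neighbor_pool)
```

The shared `p` stays close enough to the true prototype (by construction it points partly in the same direction) to remain useful for training, while no exact true prototype ever leaves the client.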
Keywords
» Artificial intelligence » Classification » Cosine similarity » Embedding » Federated learning