Summary of SemiDFL: A Semi-Supervised Paradigm for Decentralized Federated Learning, by Xinyang Liu et al.
SemiDFL: A Semi-Supervised Paradigm for Decentralized Federated Learning
by Xinyang Liu, Pengchao Han, Xuan Li, Bo Liu
First submitted to arXiv on: 18 Dec 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Decentralized federated learning (DFL) enables clients to cooperatively train a model without relying on a central server. This paper studies semi-supervised learning (SSL) in DFL, where clients hold a mix of labeled and unlabeled samples. The authors propose SemiDFL, the first semi-supervised DFL method, which improves performance by establishing consensus in both the data and model spaces: it exploits neighborhood information to improve pseudo-labeling and designs a consensus-based diffusion model to generate synthesized data. An adaptive aggregation method further boosts performance. Experiments show that SemiDFL outperforms existing centralized federated learning (CFL) and DFL schemes in both IID and non-IID SSL scenarios. |
Low | GrooveSquid.com (original content) | Imagine devices or computers working together to learn new things without a central leader. This paper makes that idea, called decentralized federated learning, work better when there isn't enough labeled data. It's like trying to figure out what's in a pile of pictures when only a few of them come with captions. The authors come up with a new method that helps devices learn from each other more effectively. They test it on different types of data and show that it performs better than existing methods. |
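As a rough illustration of the workflow the medium-difficulty summary describes (local semi-supervised training with confidence-thresholded pseudo-labels, then parameter averaging with neighbors instead of a central server), here is a toy sketch in pure Python. The ring topology, the scalar logistic model, the 0.9 confidence threshold, and all names are illustrative assumptions; this is not the authors' SemiDFL implementation, which additionally uses a consensus-based diffusion model and adaptive aggregation.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def local_step(w, b, X, y, lr=0.1):
    # One gradient step of logistic regression on the client's data.
    gw = gb = 0.0
    for x, t in zip(X, y):
        p = sigmoid(w * x + b)
        gw += (p - t) * x
        gb += (p - t)
    n = len(X)
    return w - lr * gw / n, b - lr * gb / n

def pseudo_label(w, b, X_unlabeled, thresh=0.9):
    # Keep only confident predictions as pseudo-labels (illustrative
    # threshold; SemiDFL's actual pseudo-labeling uses neighborhood info).
    Xp, yp = [], []
    for x in X_unlabeled:
        p = sigmoid(w * x + b)
        if p > thresh or p < 1.0 - thresh:
            Xp.append(x)
            yp.append(1.0 if p > 0.5 else 0.0)
    return Xp, yp

def gossip(params, i):
    # Decentralized aggregation: average with the two ring neighbors,
    # no central server involved.
    n = len(params)
    nbrs = [params[(i - 1) % n], params[i], params[(i + 1) % n]]
    return tuple(sum(v) / 3.0 for v in zip(*nbrs))

random.seed(0)
n_clients = 4

def sample(label, k):
    # Synthetic 1-D data: class 1 centered at +2, class 0 at -2.
    return [random.gauss(2.0 if label else -2.0, 1.0) for _ in range(k)]

clients = []
for _ in range(n_clients):
    X_lab = sample(1, 5) + sample(0, 5)       # few labeled samples
    y_lab = [1.0] * 5 + [0.0] * 5
    X_unl = sample(1, 20) + sample(0, 20)     # many unlabeled samples
    clients.append((X_lab, y_lab, X_unl))

params = [(0.0, 0.0)] * n_clients
for _ in range(50):
    new = []
    for i, (Xl, yl, Xu) in enumerate(clients):
        w, b = params[i]
        Xp, yp = pseudo_label(w, b, Xu)
        w, b = local_step(w, b, Xl + Xp, yl + yp)
        new.append((w, b))
    params = [gossip(new, i) for i in range(n_clients)]

# Accuracy of client 0's model on fresh positive-class samples.
w0, b0 = params[0]
acc = sum(sigmoid(w0 * x + b0) > 0.5 for x in sample(1, 100)) / 100.0
```

Each round interleaves the two ingredients the summary highlights: every client enlarges its labeled set with confident pseudo-labels before its local update, and the gossip step replaces the central server of CFL with neighbor-only averaging.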
Keywords
» Artificial intelligence » Diffusion model » Federated learning » Semi-supervised