
How to Collaborate: Towards Maximizing the Generalization Performance in Cross-Silo Federated Learning

by Yuchang Sun, Marios Kountouris, Jun Zhang

First submitted to arXiv on: 24 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper studies federated learning (FL) in cross-silo settings, where each client owns the trained model after training. The authors derive a generalization bound for each client, showing that collaborating with other clients improves a client's model performance only when those collaborators have more training data and similar data distributions. Based on this bound, they formulate a client utility maximization problem and solve it by partitioning clients into collaborating groups with hierarchical clustering. The proposed scheme, HCCT, is shown to converge for non-convex loss functions and outperforms baseline schemes in simulations (a minimal illustrative sketch of the grouping step follows the summaries below).

Low Difficulty Summary (written by GrooveSquid.com, original content)
In this research, scientists explore how to make artificial intelligence (AI) work better when models are trained on diverse datasets from many different devices and data sources. The team came up with a new approach called hierarchical clustering-based collaborative training (HCCT), which groups devices with similar data together so that collaborating actually helps each AI model. When they tested it, they found that HCCT does indeed improve model performance compared with baseline approaches.
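
To make the grouping step concrete, here is a minimal Python sketch. It is an assumption-laden illustration rather than the authors' HCCT implementation: the per-client feature means, the Euclidean distance, the average-linkage method, and the cut threshold are all hypothetical stand-ins for the utility-based criterion derived in the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# Hypothetical illustration, NOT the authors' released code: partition
# clients into collaboration groups with agglomerative hierarchical
# clustering on pairwise distances between simple per-client statistics.

rng = np.random.default_rng(0)

# Toy setup: 6 clients, each summarized by its local feature mean.
# Clients 0-2 and 3-5 are drawn around two different distributions.
client_means = np.vstack(
    [rng.normal(loc=0.0, scale=0.1, size=5) for _ in range(3)]
    + [rng.normal(loc=1.0, scale=0.1, size=5) for _ in range(3)]
)

# Condensed pairwise distance vector, the input format linkage() expects.
# Euclidean distance here is a stand-in for the generalization-bound-based
# utility criterion that HCCT actually optimizes in the paper.
distances = pdist(client_means, metric="euclidean")

# Bottom-up (agglomerative) clustering with average linkage builds the tree.
tree = linkage(distances, method="average")

# Cut the dendrogram into groups; the threshold t is an assumed tuning knob.
groups = fcluster(tree, t=0.5, criterion="distance")
print("collaboration group per client:", groups)

# Clients sharing a group label would then train one model together
# (e.g., FedAvg-style aggregation), while other groups train separately.
```

In the paper itself, the grouping objective comes from the derived generalization bound rather than raw feature distances, so this snippet only mirrors the clustering mechanics, not the utility computation.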

Keywords

  • Artificial intelligence
  • Federated learning
  • Generalization
  • Hierarchical clustering