Summary of Joint Probability Selection and Power Allocation for Federated Learning, by Ouiame Marnissi et al.
Joint Probability Selection and Power Allocation for Federated Learning
by Ouiame Marnissi, Hajar EL Hammouti, El Houcine Bergou
First submitted to arXiv on: 15 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | The paper investigates federated learning over wireless networks, where devices with limited energy budgets collaboratively train a machine learning model. A key challenge is selecting which clients participate in each round so as to optimize model performance. Most existing approaches rely on deterministic selection, which leads to complex optimization problems that are typically solved with heuristics. This study instead proposes a probabilistic approach that jointly selects clients and allocates their transmit power so as to maximize the expected number of participating clients. A novel alternating algorithm solves the resulting problem, yielding closed-form solutions for the user selection probabilities and power allocations (an illustrative sketch follows the table). Numerical results show significant improvements in energy consumption, completion time, and accuracy compared to benchmarks. |
| Low | GrooveSquid.com (original content) | This paper looks at how devices with limited energy can work together over a wireless network to train a machine learning model. The main challenge is choosing which devices take part in each round so that the model keeps improving. Most current methods pick devices with fixed rules, which creates complicated problems that are hard to solve well. This research proposes a new way of doing it by combining two decisions: picking devices at random with carefully chosen probabilities, and deciding how much power each one should use. It also develops an algorithm that gives simple formulas for those probabilities and power levels. The results show that this approach beats previous ones in energy use, completion time, and accuracy. |
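
To make the probabilistic-selection idea concrete, here is a minimal, illustrative sketch in Python. It is not the paper's algorithm: the per-client quantities, the power rule, and the probability update below are hypothetical stand-ins for the closed-form expressions derived in the paper. Only the overall pattern reflects the summary above: alternate between updating selection probabilities and power allocations, then sample clients at random each round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-client quantities (not taken from the paper): channel
# gains and energy budgets for a small population of devices.
num_clients = 10
channel_gain = rng.uniform(0.1, 1.0, num_clients)
energy_budget = rng.uniform(0.5, 2.0, num_clients)
max_power = 1.0  # assumed per-device transmit power cap

def allocate_power(selection_prob):
    """Toy power rule: spend more power on clients that are more likely to
    be selected, capped by their energy budgets. Stands in for the paper's
    closed-form power allocation, which is not reproduced here."""
    return np.minimum(max_power * np.sqrt(selection_prob), energy_budget)

def update_probabilities(power):
    """Toy selection rule: favor clients whose channel and allocated power
    make participation cheap relative to their budget. Stands in for the
    paper's closed-form selection probabilities."""
    score = channel_gain * power / energy_budget
    return score / score.sum()

# Alternating scheme: fix one block of variables, update the other, and
# repeat until the selection probabilities stop changing.
prob = np.full(num_clients, 1.0 / num_clients)
for _ in range(100):
    power = allocate_power(prob)
    new_prob = update_probabilities(power)
    if np.max(np.abs(new_prob - prob)) < 1e-6:
        break
    prob = new_prob

# In each training round, participants are then drawn at random according
# to the optimized probabilities rather than chosen deterministically.
participants = rng.choice(num_clients, size=3, replace=False, p=prob)
print("selection probabilities:", np.round(prob, 3))
print("clients sampled this round:", np.sort(participants))
```

The paper's closed-form updates would replace the two toy functions above; the random draw at the end is what distinguishes probabilistic client selection from picking a fixed subset of devices.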
Keywords
- Artificial intelligence
- Federated learning
- Machine learning
- Optimization