Summary of Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection, by Charuka Herath et al.
Enhancing Federated Learning Convergence with Dynamic Data Queue and Data Entropy-driven Participant Selection
by Charuka Herath, Xiaolan Liu, Sangarapillai Lambotharan, Yogachandran Rahulamathavan
First submitted to arXiv on: 23 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | Federated Learning (FL) trains models on decentralized edge devices, but statistical heterogeneity (non-IID data) across devices slows convergence. To tackle this, the researchers developed Dynamic Data queue-driven Federated Learning (DDFL), which builds a global subset of data on the server, distributes it dynamically across devices, and selects the devices that join each aggregation round using a Data Entropy metric (see the sketch after this table). Evaluated on MNIST, CIFAR-10, and CIFAR-100, the method improves accuracy by roughly 5% to 20% over state-of-the-art aggregation algorithms. |
| Low | GrooveSquid.com (original content) | Federated Learning helps devices learn together without sharing their data. But what happens when the data on each device is different? This can cause problems, like reducing accuracy by up to 30%. The researchers found that this happens because the devices' weights (the parameters of their local models) drift too far apart. They created a new way to distribute data and to choose which devices to use for aggregation, which helps training converge to a better solution. They tested their method on three datasets and found that it improved accuracy by around 5% to 20%. |
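The entropy-driven participant selection described in the medium summary can be illustrated with a short sketch. This is a minimal illustration under assumptions, not the authors' implementation: the paper defines its own Data Entropy metric and dynamic data queue, whereas the sketch below simply scores each device by the Shannon entropy of its local label distribution and picks the highest-entropy devices for a round. The function names and the toy data are hypothetical.

```python
import numpy as np

def label_entropy(labels, num_classes):
    """Shannon entropy (bits) of one device's label distribution; higher = more balanced data."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    probs = counts / counts.sum()
    probs = probs[probs > 0]          # drop empty classes to avoid log(0)
    return float(-np.sum(probs * np.log2(probs)))

def select_participants(device_labels, num_classes, k):
    """Pick the k devices whose local data has the highest label entropy."""
    scores = {dev: label_entropy(lbls, num_classes) for dev, lbls in device_labels.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Toy example: three devices holding differently skewed label sets (hypothetical data)
devices = {
    "dev0": np.array([0, 0, 0, 1]),       # highly skewed
    "dev1": np.array([0, 1, 2, 3]),       # balanced
    "dev2": np.array([1, 1, 2, 2, 3]),    # moderately balanced
}
print(select_participants(devices, num_classes=4, k=2))   # -> ['dev1', 'dev2']
```

In a full FL round, the selected devices would train locally and only their updates would be aggregated into the global model; the paper additionally redistributes a server-held data subset to counter the weight divergence caused by non-IID data.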
Keywords
- Artificial intelligence
- Federated learning