Summary of ConDa: Fast Federated Unlearning with Contribution Dampening, by Vikram S Chundawat et al.
ConDa: Fast Federated Unlearning with Contribution Dampening
by Vikram S Chundawat, Pushkar Niroula, Prasanna Dhungana, Stefan Schoepf, Murari Mandal, Alexandra Brintrup
First submitted to arXiv on: 5 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | The paper introduces Contribution Dampening (ConDa), a novel framework for efficient federated unlearning in non-IID federated learning settings. ConDa tracks down the parameters that affect the global model for each client, then performs synaptic dampening on the privacy-infringing contributions of the client to be forgotten. The approach requires no client data or retraining and puts no computational overhead on either the clients or the server. The authors demonstrate ConDa’s effectiveness on the MNIST, CIFAR10, and CIFAR100 datasets, where it is at least 100x faster than state-of-the-art approaches, and validate its robustness against backdoor and membership inference attacks. A sketch of the core idea appears below the table. |
Low | GrooveSquid.com (original content) | Federated learning lets computers learn together without sharing all their data. One problem is removing a participant and the information they added to the shared model. This is called federated unlearning, and it’s hard because most methods require lots of computation or retraining. The paper introduces ConDa, a new way to do this efficiently. ConDa tracks which parts of the shared model each computer influenced and then dampens those parts when a computer leaves. It works fast and needs no extra data or retraining. The authors tested ConDa on several datasets and showed it outperforms other methods. |
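To make the mechanism concrete, here is a minimal sketch of the contribution-dampening idea in Python. It assumes a simple FedAvg-style server; all names (`aggregate_and_track`, `dampen_client`, `dampening_factor`) are illustrative, not the paper’s actual API, and the paper’s criterion for selecting which parameters to dampen is more refined than the uniform subtraction shown here.

```python
# Hedged sketch of contribution dampening for federated unlearning.
# Assumes models are flat numpy parameter vectors and plain FedAvg
# aggregation; everything here is illustrative, not the paper's code.
import numpy as np

def aggregate_and_track(global_params, client_updates):
    """FedAvg step that also records each client's parameter delta.

    client_updates: dict mapping client_id -> locally trained parameters.
    Returns the new global parameters and a per-client contribution dict.
    """
    contributions = {}
    new_params = np.copy(global_params)
    for client_id, params in client_updates.items():
        # Each client's contribution to this round's average.
        delta = (params - global_params) / len(client_updates)
        contributions[client_id] = delta
        new_params += delta
    return new_params, contributions

def dampen_client(global_params, history, forget_id, dampening_factor=1.0):
    """Dampen one client's accumulated contributions to the global model.

    history: list of per-round contribution dicts from aggregate_and_track.
    dampening_factor=1.0 subtracts the client's contributions entirely;
    smaller values dampen them only partially.
    """
    unlearned = np.copy(global_params)
    for round_contributions in history:
        if forget_id in round_contributions:
            unlearned -= dampening_factor * round_contributions[forget_id]
    return unlearned
```

In this sketch, unlearning is a single pass over the stored contribution history, which is why it needs no retraining and no data from the departing client; that matches the efficiency claims in the summaries above.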
Keywords
» Artificial intelligence » Federated learning » Inference