Summary of Promoting Data and Model Privacy in Federated Learning through Quantized LoRA, by JianHao Zhu et al.
Promoting Data and Model Privacy in Federated Learning through Quantized LoRA
by JianHao Zhu, Changze Lv, Xiaohua Wang, Muling Wu, Wenhao Liu, Tianlong Li, Zixuan Ling, Cenyuan Zhang, Xiaoqing Zheng, Xuanjing Huang
First submitted to arXiv on: 16 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computation and Language (cs.CL); Cryptography and Security (cs.CR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | The paper's original abstract; read it on the paper's arXiv page. |
| Medium | GrooveSquid.com (original content) | This paper proposes a novel approach to federated learning that protects both data and model privacy. Conventional federated learning secures the data that stays distributed across edge devices, but it exposes the centrally hosted model to clients. This work adds a mechanism that distributes only quantized model parameters during training: clients can still compute sufficiently accurate gradient estimates, yet they cannot recover a model that matches the performance of the centrally hosted one. The proposed framework, FedLPP, combines this quantization strategy with LoRA, a parameter-efficient fine-tuning method, which also reduces communication costs in federated learning. Experiments show that FedLPP generalizes well while remaining resource-efficient. A rough code sketch of this idea follows the table. |
| Low | GrooveSquid.com (original content) | This paper is about keeping two things safe when many devices work together to train a model: the personal data on each device, and the valuable model being trained. Big models need lots of data and computing power to build, which makes them valuable secrets for the people who create them. The researchers' idea is to share only a simplified version of the model during training, so devices cannot figure out how good the central model is or steal its knowledge. They also combine this idea with a technique called LoRA to make training even more efficient. Overall, this new approach helps keep both data and models private in these learning collaborations. |
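To make the mechanism in the medium summary concrete, below is a minimal, hypothetical sketch of the core idea: the server distributes only a quantized copy of a layer's weights, and the client trains just the small LoRA factors. This is an illustration under assumed details, not the paper's actual FedLPP implementation; the uniform quantization scheme, the names `quantize_uniform` and `QuantizedLoRALinear`, and the toy training setup are all our own.

```python
import torch
import torch.nn as nn

def quantize_uniform(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Lossy uniform quantization; clients only ever see this version."""
    levels = 2 ** bits - 1
    w_min, w_max = w.min(), w.max()
    scale = ((w_max - w_min) / levels).clamp(min=1e-8)
    codes = torch.round((w - w_min) / scale)      # integer codes in [0, levels]
    return codes * scale + w_min                  # dequantize for local compute

class QuantizedLoRALinear(nn.Module):
    """A frozen quantized base weight plus a trainable low-rank update B @ A."""
    def __init__(self, w_quantized: torch.Tensor, rank: int = 8):
        super().__init__()
        out_features, in_features = w_quantized.shape
        self.register_buffer("w_q", w_quantized)  # frozen: lossy, no gradient
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))  # zero initial update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight = quantized base + low-rank LoRA correction.
        return x @ (self.w_q + self.B @ self.A).T

# Server side: quantize the valuable full-precision weights before distribution.
full_weight = torch.randn(64, 32)                 # stand-in for one model layer
client_layer = QuantizedLoRALinear(quantize_uniform(full_weight, bits=4))

# Client side: a local training step updates only the LoRA factors.
optimizer = torch.optim.SGD([client_layer.A, client_layer.B], lr=1e-2)
x, y = torch.randn(16, 32), torch.randn(16, 64)   # toy local batch
loss = nn.functional.mse_loss(client_layer(x), y)
loss.backward()
optimizer.step()

# Only A and B are uploaded; the full-precision base never leaves the server.
lora_update = {"A": client_layer.A.detach(), "B": client_layer.B.detach()}
```

In this setup only the low-rank factors A and B ever leave the client, which is what keeps communication cheap, while the full-precision base weights never leave the server, which is what keeps the central model private.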
Keywords
» Artificial intelligence » Federated learning » Fine-tuning » Generalization » LoRA » Quantization