Summary of Federated Low-rank Adaptation with Differential Privacy Over Wireless Networks, by Tianqu Kang et al.
Federated Low-Rank Adaptation with Differential Privacy over Wireless Networks
by Tianqu Kang, Zixin Wang, Hengtao He, Jun Zhang, Shenghui Song, Khaled B. Letaief
First submitted to arXiv on: 12 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Signal Processing (eess.SP)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The proposed federated fine-tuning (FedFT) framework combines pre-trained foundation models with low-rank adaptation (LoRA) and differential privacy (DP) over wireless networks to enable secure and efficient model updates on distributed edge devices. The split FedFT architecture partitions the model between edge devices and a central server, reducing the computational burden on individual devices while maintaining privacy guarantees. By leveraging the inherent noise in wireless transmission, the framework achieves DP without adding artificial noise. Experimental results show that it outperforms baseline methods under strict privacy budgets. |
| Low | GrooveSquid.com (original content) | This paper proposes a new way to update models on edge devices while keeping the data private and secure. The authors combine two techniques: fine-tuning pre-trained models with small, low-rank updates, and exploiting the random noise already present in wireless transmission to hide sensitive information from attackers. The approach is tested in simulation and outperforms other methods. |
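The core idea in the summaries above can be illustrated with a minimal NumPy sketch: each client forms a low-rank (LoRA-style) update, clips its norm, and all clients transmit simultaneously so the server receives their superposition plus channel noise, which serves as the DP noise. This is not the paper's implementation; the dimensions, clipping bound, and noise level below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_delta(d, k, r, rng):
    # LoRA-style update: Delta W = B @ A with rank r << min(d, k),
    # so each client trains and sends only r*(d + k) parameters.
    A = rng.normal(size=(r, k)) * 0.01
    B = rng.normal(size=(d, r)) * 0.01
    return B @ A

def clip(update, c):
    # Bound each client's contribution (standard DP pre-processing):
    # scale the update so its Frobenius norm is at most c.
    norm = np.linalg.norm(update)
    return update * min(1.0, c / norm)

def over_the_air_aggregate(updates, clip_norm=1.0, noise_std=0.1, rng=rng):
    # Clients transmit clipped updates at the same time; the server observes
    # their sum plus additive channel noise. Here the channel noise doubles
    # as the DP noise, so no artificial noise is injected at the clients.
    clipped = [clip(u, clip_norm) for u in updates]
    received = sum(clipped) + rng.normal(scale=noise_std, size=clipped[0].shape)
    return received / len(updates)

# Hypothetical setup: 5 edge devices, a 16x8 weight matrix, rank-2 adapters.
updates = [lora_delta(d=16, k=8, r=2, rng=rng) for _ in range(5)]
agg = over_the_air_aggregate(updates)
print(agg.shape)  # (16, 8)
```

In this sketch the privacy level is governed by the ratio of the channel noise standard deviation to the clipping bound; the paper's contribution is showing how to exploit the wireless channel so that this noise comes for free rather than being added artificially.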
Keywords
» Artificial intelligence » Fine-tuning » LoRA » Low-rank adaptation