Summary of Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models, by Zihan Fang et al.
Automated Federated Pipeline for Parameter-Efficient Fine-Tuning of Large Language Models
by Zihan Fang, Zheng Lin, Zhe Chen, Xianhao Chen, Yue Gao, Yuguang Fang
First submitted to arXiv on 9 Apr 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here.
Medium | GrooveSquid.com (original content) | FedPipe addresses the challenges of fine-tuning large language models (LLMs) on private data while minimizing training cost and inference latency. It identifies the weights to be fine-tuned based on their contributions, configures a low-rank adapter for each selected weight, and aggregates the local adapters from edge servers to fine-tune the whole LLM. The pipeline also quantizes adapter parameters to fit the memory budget of each edge server.
Low | GrooveSquid.com (original content) | FedPipe is a new way to make large language models better without sharing private data. It works by picking which parts of the model to update, making small changes locally on each computer, and then combining those updates. This makes it faster and more accurate than other methods. It’s useful for updating language models in real-world settings where computers have different resources.
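The pipeline described in the medium summary combines three standard building blocks: low-rank (LoRA-style) adapters trained locally, federated averaging of those adapters on a server, and quantization to shrink the adapter before storage or transmission. The sketch below is an illustration of those ideas with NumPy, not the paper's actual implementation; the dimensions, helper names, and uniform int8 quantization scheme are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_delta(A, B):
    # Low-rank weight update: delta_W = B @ A, where rank r << d,
    # so only (d*r + r*d) parameters are trained instead of d*d.
    return B @ A

def fedavg(adapters):
    # Server-side aggregation: element-wise average of the (A, B)
    # adapter pairs uploaded by the participating edge servers.
    return [np.mean(mats, axis=0) for mats in zip(*adapters)]

def quantize(W, bits=8):
    # Uniform symmetric quantization to reduce memory footprint.
    scale = max(float(np.abs(W).max()), 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(W / scale).astype(np.int8)
    return q, scale

d, r, num_clients = 16, 2, 3  # hypothetical sizes for illustration
# Each client trains its own low-rank adapter (A, B) on local data;
# here we stand in random matrices for the trained values.
local = [(rng.normal(size=(r, d)), rng.normal(size=(d, r)))
         for _ in range(num_clients)]

A_avg, B_avg = fedavg(local)        # aggregate adapters across clients
delta = lora_delta(A_avg, B_avg)    # full-size update applied to the frozen LLM weight
q, scale = quantize(delta)          # quantized copy for constrained edge servers
```

Only the small `A` and `B` matrices (and optionally their quantized forms) ever leave a client, which is what keeps both communication cost and privacy exposure low.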
Keywords
* Artificial intelligence * Fine tuning * Inference * Quantization