
Summary of FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models, by Tao Fan et al.


FedMKT: Federated Mutual Knowledge Transfer for Large and Small Language Models

by Tao Fan, Guoqiang Ma, Yan Kang, Hanlin Gu, Yuanfeng Song, Lixin Fan, Kai Chen, Qiang Yang

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed framework, FedMKT, addresses a largely unexplored setting in federated learning with large language models (LLMs): mutual knowledge transfer between a server-side LLM and client-side small language models (SLMs). FedMKT adaptively transfers knowledge from the server's LLM to the clients' SLMs while concurrently enriching the LLM with the clients' unique domain insights. To reconcile the differing vocabularies of heterogeneous models, it aligns tokens via minimum edit distance (MinED) and then performs selective mutual knowledge transfer between the client-side SLMs and the server-side LLM, improving performance on NLP text generation tasks.
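The token alignment step can be pictured with a small sketch: map each token in the SLM's vocabulary to its nearest token in the LLM's vocabulary under minimum edit (Levenshtein) distance. The function names, toy vocabularies, and brute-force nearest-neighbor search below are illustrative assumptions for exposition, not the paper's implementation.

```python
# Hypothetical sketch of token alignment via minimum edit distance (MinED).
# Vocabularies and helper names are illustrative, not taken from the paper.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance between two token strings (standard DP)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def align_tokens(slm_tokens: list[str], llm_vocab: list[str]) -> dict[str, str]:
    """Map each SLM token to the LLM token with the smallest edit distance."""
    return {t: min(llm_vocab, key=lambda v: edit_distance(t, v))
            for t in slm_tokens}

# Toy example: align a few SLM tokens against a tiny "LLM" vocabulary.
llm_vocab = ["hello", "world", "federated", "transfer"]
print(align_tokens(["helo", "worl", "federate"], llm_vocab))
```

Once tokens are paired this way, the two models' output distributions can be compared position by position, which is what makes cross-vocabulary distillation possible.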
Low Difficulty Summary (written by GrooveSquid.com, original content)
FedMKT is a new way for big language models and small language models to learn from each other. Right now, most research has focused on how clients can use the big models or how the big models can teach smaller ones. But what if both could help each other? That’s what FedMKT does! It helps big and small language models learn from each other at the same time. This is done by using an alignment technique called MinED (minimum edit distance) to match tokens between the two models’ vocabularies. By doing this, FedMKT improves the performance of both big and small language models on text generation tasks.
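As a rough illustration of the "learning from each other" step, the sketch below runs knowledge distillation in both directions on logits assumed to be already token-aligned. The selection rule (distill only where the teacher's loss beats the student's), the tensor shapes, and all names are assumptions chosen to make the idea concrete; they are not the paper's exact criterion.

```python
# Hedged sketch of selective, mutual knowledge distillation in PyTorch.
# Assumes both models' logits already live in a shared, aligned vocabulary.
import torch
import torch.nn.functional as F

def selective_kd_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """Distill only at positions where the (frozen) teacher's cross-entropy
    is lower than the student's -- one plausible 'selective' criterion."""
    with torch.no_grad():
        t_ce = F.cross_entropy(teacher_logits, labels, reduction="none")
        s_ce = F.cross_entropy(student_logits, labels, reduction="none")
        keep = (t_ce < s_ce).float()          # 1.0 where teacher knows better
        t_probs = F.softmax(teacher_logits / temperature, dim=-1)
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(s_logp, t_probs, reduction="none").sum(dim=-1)
    return (kl * keep).sum() / keep.sum().clamp(min=1.0) * temperature ** 2

# Mutual transfer: each model plays teacher for the other on selected samples.
slm_logits = torch.randn(8, 100, requires_grad=True)   # client-side SLM
llm_logits = torch.randn(8, 100, requires_grad=True)   # server-side LLM
labels = torch.randint(0, 100, (8,))
loss = (selective_kd_loss(slm_logits, llm_logits.detach(), labels)
        + selective_kd_loss(llm_logits, slm_logits.detach(), labels))
loss.backward()
```

Detaching the teacher in each direction keeps the two distillation losses independent, so the big and small models improve each other without either gradient update interfering with the other.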

Keywords

» Artificial intelligence  » Alignment  » NLP  » Text generation  » Token