


LoRETTA: Low-Rank Economic Tensor-Train Adaptation for Ultra-Low-Parameter Fine-Tuning of Large Language Models

by Yifan Yang, Jiajun Zhou, Ngai Wong, Zheng Zhang

First submitted to arXiv on: 18 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract, written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content written by GrooveSquid.com)
LoRETTA is an ultra-parameter-efficient fine-tuning framework that sharply reduces the number of trainable parameters in Large Language Models (LLMs) through tensor-train decomposition. Its two variants, LoRETTA_adp and LoRETTA_rep, employ tensorized adapters and weight parameterization with small tensor factors, respectively, and achieve performance comparable to or better than the most widely used PEFT methods. LoRETTA needs up to 100× fewer trainable parameters on the LLaMA-2-7B models, improves training efficiency and multi-task learning performance, and strengthens resistance to overfitting. The framework will be released as plug-and-play code built on the Hugging Face framework and the PEFT library. (A minimal code sketch of the tensor-train adapter idea follows the summaries.)

Low Difficulty Summary (original content written by GrooveSquid.com)
Researchers developed a new way to make language models more efficient without losing their ability to perform well. They called it LoRETTA. This method helps reduce the number of parameters that need to be trained, making it faster and more efficient. The results show that LoRETTA can work as well or even better than other methods while using fewer parameters. It also improves how well language models learn from multiple tasks and reduces the risk of overfitting.
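
The medium difficulty summary above describes LoRETTA's core idea: represent a weight update with a chain of small tensor-train (TT) factors instead of a full dense matrix. The sketch below is an illustrative, self-contained PyTorch example of that idea, not the authors' released code; the class name TTLinearAdapter, the mode factorization, the TT rank, and the initialization scale are all assumptions made for this example.

```python
import math
import torch
import torch.nn as nn


class TTLinearAdapter(nn.Module):
    """A frozen linear layer plus a trainable tensor-train (TT) weight update."""

    def __init__(self, base_linear: nn.Linear, in_modes, out_modes, tt_rank=4):
        super().__init__()
        assert math.prod(in_modes) == base_linear.in_features
        assert math.prod(out_modes) == base_linear.out_features
        self.base = base_linear
        for p in self.base.parameters():           # the pretrained weights stay frozen
            p.requires_grad_(False)

        self.in_modes, self.out_modes = list(in_modes), list(out_modes)
        d = len(self.in_modes)
        ranks = [1] + [tt_rank] * (d - 1) + [1]    # TT ranks r_0 .. r_d
        # One small 4-way core per mode pair, shaped (r_{k-1}, in_k, out_k, r_k).
        self.cores = nn.ParameterList([
            nn.Parameter(0.01 * torch.randn(ranks[k], self.in_modes[k],
                                            self.out_modes[k], ranks[k + 1]))
            for k in range(d)
        ])

    def delta_weight(self):
        """Contract the TT cores into a dense (in_features, out_features) update."""
        d = len(self.cores)
        full = self.cores[0].squeeze(0)            # (in_1, out_1, r_1)
        for k in range(1, d):
            full = torch.tensordot(full, self.cores[k], dims=([full.dim() - 1], [0]))
        full = full.squeeze(-1)                    # (in_1, out_1, ..., in_d, out_d)
        # Group the input modes first, then the output modes, and flatten.
        perm = list(range(0, 2 * d, 2)) + list(range(1, 2 * d, 2))
        full = full.permute(perm).contiguous()
        return full.reshape(math.prod(self.in_modes), math.prod(self.out_modes))

    def forward(self, x):
        return self.base(x) + x @ self.delta_weight()


# Example: adapt a 768 -> 768 projection with three small TT cores (rank 4).
layer = TTLinearAdapter(nn.Linear(768, 768), in_modes=(8, 12, 8), out_modes=(8, 12, 8))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")   # ~2.8k vs ~590k for a dense update
```

With these assumed settings, the three cores hold roughly 2,800 trainable parameters versus about 590,000 for a dense 768×768 update, which illustrates the kind of parameter reduction the summaries refer to.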

Keywords

* Artificial intelligence  * Fine tuning  * Llama  * Multi task  * Overfitting  * Parameter efficient