On Fairness of Low-Rank Adaptation of Large Models
by Zhoujie Ding, Ken Ziyu Liu, Pura Peetathawatchai, Berivan Isik, Sanmi Koyejo
First submitted to arXiv on: 27 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper investigates the fairness implications of low-rank adaptation (LoRA) of large models, particularly in comparison to full-model fine-tuning. LoRA’s efficiency has made it a popular choice among practitioners, but its effects on utility, calibration, and resistance to membership inference across different subgroups are not well understood. The study presents extensive experiments across vision and language domains, covering both classification and generation tasks, with models including ViT-Base, Swin-v2-Large, Llama-2 7B, and Mistral 7B. Surprisingly, the results show that while LoRA can sometimes exacerbate model bias, it often achieves fairness equivalent to or better than the base model or its full fine-tuning baseline. |
| Low | GrooveSquid.com (original content) | LoRA is a way to make big models work well without fully retraining them. Some people use it instead of fully updating the models because it’s faster and cheaper. But does it make the models unfair? The researchers looked at this question by comparing LoRA with fully updating the models. They tested many different types of tasks, like recognizing pictures or generating text, using big models like ViT-Base and Llama-2 7B. What they found was that sometimes LoRA makes the models less fair, but often it keeps fairness the same or even improves it. |
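For readers unfamiliar with the technique being evaluated, the core idea of low-rank adaptation is easy to sketch: instead of training the full weight matrix of a layer, LoRA freezes it and trains a small low-rank update added on top. Below is a minimal NumPy sketch of that idea; the dimensions, names, and initialization are illustrative assumptions for exposition, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4  # rank r is much smaller than the layer dims

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (zero init)

def lora_forward(x):
    # Effective weight is W + B @ A; since B starts at zero, the
    # adapted layer initially matches the frozen base layer exactly.
    return x @ (W + B @ A).T

x = rng.normal(size=(1, d_in))
assert np.allclose(lora_forward(x), x @ W.T)

# Why it is cheap: full fine-tuning trains d_out * d_in parameters,
# while LoRA trains only r * (d_in + d_out).
print(d_out * d_in, r * (d_in + d_out))
```

This illustrates why LoRA is attractive to practitioners (here, 512 trainable parameters versus 4096) and why the paper’s question matters: the update is constrained to low rank, so it is not obvious a priori whether subgroup fairness matches that of full fine-tuning.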
Keywords
» Artificial intelligence » Classification » Fine-tuning » Inference » Llama » LoRA » Low-rank adaptation » ViT