
Summary of Accuracy is Not All You Need, by Abhinav Dutta et al.


Accuracy is Not All You Need

by Abhinav Dutta, Sanjeev Krishnan, Nipun Kwatra, Ramachandran Ramjee

First submitted to arXiv on: 12 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available on its arXiv page.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper investigates the impact of compressing Large Language Models (LLMs) with techniques such as quantization. Compressed models are typically evaluated on accuracy benchmarks to confirm negligible degradation relative to the baseline model. However, the authors observe a phenomenon they call “flips”: individual answers change from correct to incorrect, and vice versa, even when overall accuracy stays nearly identical. Evaluating multiple compression techniques, models, and datasets, the study shows that compressed models can behave significantly differently from their baselines despite similar accuracy. The paper therefore proposes tracking two distance metrics, KL-divergence and flips (sketched in code after the summaries below), which are well correlated with each other and show that compressed models also perform worse than their baselines on free-form generative tasks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This research looks at what happens when we make Large Language Models smaller using special compression techniques. Usually, we check that the small model scores about as well as the original big model on accuracy tests. But this study finds that even when the small model scores just as well overall, it can still give different answers on many individual questions. This matters because people use these small models in real-life applications, and we need to understand what they are actually capable of. The researchers propose two ways to measure how closely a small model matches the original, and these measures show that the small models are not as good as the original model.
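
To make the two metrics concrete, here is a minimal sketch (illustrative only, not the authors' implementation): flips counts the fraction of answers whose correctness changes after compression, and kl_divergence compares the baseline and compressed models' output token distributions. The function names and toy data below are hypothetical.

    # Illustrative sketch only; assumes NumPy. Names and toy data are
    # hypothetical, not taken from the paper's code.
    import numpy as np

    def flips(baseline_correct, compressed_correct):
        # Fraction of examples whose correctness changes after compression
        # (correct -> incorrect or incorrect -> correct).
        b = np.asarray(baseline_correct, dtype=bool)
        c = np.asarray(compressed_correct, dtype=bool)
        return float(np.mean(b != c))

    def kl_divergence(p, q, eps=1e-12):
        # KL(p || q) between the baseline's and the compressed model's
        # next-token probability distributions.
        p = np.asarray(p, dtype=float) + eps
        q = np.asarray(q, dtype=float) + eps
        p, q = p / p.sum(), q / q.sum()
        return float(np.sum(p * np.log(p / q)))

    # Both models answer 2 of 4 questions correctly (identical accuracy)...
    base = [True, True, False, False]
    comp = [True, False, True, False]
    print(flips(base, comp))  # ...yet 0.5: half the individual answers changed

In the toy example, both models have the same accuracy (2 of 4 correct), yet half of the individual answers change, which is exactly the gap the paper argues accuracy alone hides.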

Keywords

  • Artificial intelligence
  • Quantization