Loki: Low-rank Keys for Efficient Sparse Attention

by Prajwal Singhania, Siddharth Singh, Shwai He, Soheil Feizi, Abhinav Bhatele

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read whichever version suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper investigates ways to reduce the computational cost of inference in large language models, focusing on the self-attention mechanism. The authors observe that key vectors in the attention block lie in a significantly lower-dimensional space, consistently across several datasets and models. Building on this finding, they propose Loki, a novel sparse attention method that ranks and selects tokens in the KV-cache using attention scores computed in this low-dimensional space. Experimental results show that Loki is faster than other popular approximation methods while largely maintaining model accuracy.
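To make the idea concrete, here is a minimal sketch of this kind of low-rank key trick. It is not the authors' implementation: the function name, the rank `r=32`, and the `keep` fraction are illustrative assumptions, and the key subspace is found here with an SVD of the cached keys themselves (a real system would likely precompute such a basis offline). The sketch ranks cached tokens by cheap approximate scores in the low-dimensional space, then runs exact attention only over the selected tokens.

```python
import numpy as np

def loki_style_sparse_attention(q, K, V, r=32, keep=0.25):
    """Illustrative sketch of sparse attention via low-rank keys.

    q: (d,) query vector; K, V: (n, d) cached keys and values.
    r: assumed rank of the key subspace; keep: fraction of tokens retained.
    """
    # Find an r-dimensional basis for the keys (here via SVD on the cache;
    # in practice such a basis would be computed ahead of time).
    _, _, Vt = np.linalg.svd(K, full_matrices=False)
    P = Vt[:r].T                        # (d, r) projection onto key subspace

    # Approximate scores in the low-dimensional space: O(n*r) instead of
    # O(n*d). Scaling is omitted since it does not change the ranking.
    approx_scores = (K @ P) @ (P.T @ q)

    # Keep only the top-scoring fraction of cached tokens.
    k = max(1, int(keep * K.shape[0]))
    top = np.argpartition(approx_scores, -k)[-k:]

    # Exact softmax attention restricted to the selected tokens.
    scores = K[top] @ q / np.sqrt(K.shape[1])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    return w @ V[top]

# Tiny usage example with random data.
rng = np.random.default_rng(0)
K = rng.normal(size=(128, 64))
V = rng.normal(size=(128, 64))
q = rng.normal(size=64)
out = loki_style_sparse_attention(q, K, V)
print(out.shape)  # (64,)
```

The design point this illustrates is that ranking is cheap (it happens in r dimensions) while only the small selected subset pays the full d-dimensional attention cost.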
Low Difficulty Summary (original content by GrooveSquid.com)
This research looks at ways to make large language models run faster on computers. The researchers found that a part of the model called self-attention uses a lot of computing power. They also discovered that this part of the model is simpler than expected, and that its simplicity can be exploited to speed things up. Based on this idea, the authors created a new method called Loki that makes language models run faster while keeping them accurate.

Keywords

» Artificial intelligence  » Attention  » Inference  » Self attention  » Token