Parallelizing Linear Transformers with the Delta Rule over Sequence Length

by Songlin Yang, Bailin Wang, Yu Zhang, Yikang Shen, Yoon Kim

First submitted to arXiv on: 10 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a hardware-efficient algorithm for training DeltaNet, a linear transformer that updates its recurrent state with the delta rule rather than the simple additive update used in standard linear attention. The goal is a model that performs in-context retrieval efficiently while still being able to learn complex patterns, as transformers with softmax attention can. To achieve this, the authors develop a memory-efficient representation for computing products of Householder matrices, which lets them parallelize the delta-rule recurrence over the sequence length and scale DeltaNet training to standard language modeling settings. The resulting 1.3B-parameter model outperforms recent linear-time baselines in perplexity and zero-shot performance on downstream tasks. The paper also explores hybrid models that combine DeltaNet layers with sliding-window or global attention, which further improve performance over strong transformer baselines. (A minimal sketch of the delta-rule update appears after these summaries.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper creates a new way to train language models so they can quickly understand and remember patterns in text. Older versions of this fast approach struggled on big tasks, but the new method uses clever math tricks to train efficiently on modern hardware. The team trained a large model that did well on lots of different tests. They also tried mixing their new layers with ideas from regular attention, which made the results even better than strong existing models.
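
To make the delta rule concrete, here is a minimal sketch of the per-step state update that a DeltaNet-style layer applies, written as a plain NumPy recurrence. This is only the naive sequential form; the paper's contribution is a hardware-efficient reformulation (via the memory-efficient representation of Householder-matrix products mentioned above) that parallelizes this recurrence over the sequence length, which the sketch below does not implement. All function names, variable names, and shapes are illustrative assumptions rather than the authors' code.

import numpy as np

def delta_rule_step(S, k_t, v_t, q_t, beta_t):
    # S is a (d_k, d_v) associative memory; beta_t is a scalar "write strength" in [0, 1].
    v_old = S.T @ k_t                                # value currently stored under key k_t
    S = S + beta_t * np.outer(k_t, v_t - v_old)      # delta rule: move that value toward v_t
    y_t = S.T @ q_t                                  # read the updated memory with the query
    return S, y_t

def delta_rule_sequential(q, k, v, beta):
    # Naive O(T) loop over the sequence; the paper shows how to compute the same
    # result in parallel chunks on modern hardware.
    T, d_k = k.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))
    outputs = np.zeros((T, d_v))
    for t in range(T):
        S, outputs[t] = delta_rule_step(S, k[t], v[t], q[t], beta[t])
    return outputs

# Tiny usage example with random inputs (hypothetical shapes).
rng = np.random.default_rng(0)
T, d_k, d_v = 8, 4, 4
y = delta_rule_sequential(rng.normal(size=(T, d_k)), rng.normal(size=(T, d_k)),
                          rng.normal(size=(T, d_v)), rng.uniform(size=T))
print(y.shape)  # (8, 4)

If the keys are normalized to unit length, beta_t = 1 fully overwrites the value stored under k_t while beta_t = 0 leaves the memory unchanged; standard linear attention corresponds to always adding np.outer(k_t, v_t) without subtracting the old value.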

Keywords

» Artificial intelligence  » Attention  » Perplexity  » Softmax  » Transformer  » Zero shot