ElastiFormer: Learned Redundancy Reduction in Transformer via Self-Distillation
by Junzhang Liu, Tingkai Liu, Yueyuan Sui, Stephen Xia
First submitted to arXiv on: 22 Nov 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same paper at a different level of difficulty: the medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here.
Medium | GrooveSquid.com (original content) | ElastiFormer is a post-training technique that adapts a pretrained Transformer into an elastic counterpart with variable inference-time compute. Lightweight routing modules, which add minimal trainable parameters (0.00006%), dynamically select subsets of network parameters and input tokens for each layer based on the input. The routing modules are trained with self-distillation losses that minimize the difference between the output of the pretrained model and that of its elastic counterpart (a minimal code sketch follows this table). ElastiFormer applies across modalities, including language, image, and visual-language modeling; it achieves compute savings of 20-50% for different components of the Transformer layer and is robust to changes in the training domain.
Low | GrooveSquid.com (original content) | ElastiFormer is a new way to make computer models work faster or slower depending on the job at hand. It takes an already-trained model, such as a language translator, and makes it “elastic” so it can change how much computation it spends processing information. This saves computing power, which matters because computers are getting more powerful but also hotter and more energy-hungry. ElastiFormer works with different types of models that process text, images, or both. It acts like a special training program that teaches the model to use its computing resources efficiently.
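
To make the routing-and-distillation idea from the medium summary concrete, below is a minimal PyTorch sketch. This is not the authors' implementation: the `TokenRouter` module, its `keep_ratio` parameter, the straight-through gradient trick, and the KL-based loss are all illustrative assumptions; only the overall scheme, a tiny learned router per layer trained with a self-distillation loss against the frozen pretrained model, follows the summary above.

```python
# Minimal sketch (illustrative, not the paper's code): a hypothetical routing
# module that scores input tokens and keeps only the top fraction for a
# Transformer layer, trained via self-distillation against the frozen model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TokenRouter(nn.Module):
    """Scores each token; only the highest-scoring fraction is processed."""

    def __init__(self, d_model: int, keep_ratio: float = 0.5):
        super().__init__()
        # A single linear layer keeps the added parameter count negligible.
        self.score = nn.Linear(d_model, 1)
        self.keep_ratio = keep_ratio

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        logits = self.score(x).squeeze(-1)            # (batch, seq_len)
        k = max(1, int(self.keep_ratio * x.size(1)))  # tokens to keep
        topk = logits.topk(k, dim=-1).indices
        mask = torch.zeros_like(logits).scatter(-1, topk, 1.0)
        # Straight-through estimator: forward uses the hard 0/1 mask,
        # backward passes gradients through the sigmoid scores.
        soft = torch.sigmoid(logits)
        mask = mask + soft - soft.detach()
        return x * mask.unsqueeze(-1)                 # unkept tokens zeroed


def self_distillation_loss(student_logits, teacher_logits, temperature=1.0):
    """KL divergence between the frozen pretrained (teacher) output and the
    elastic (student) output. Only routing parameters receive updates."""
    t = temperature
    return F.kl_div(
        F.log_softmax(student_logits / t, dim=-1),
        F.softmax(teacher_logits / t, dim=-1),
        reduction="batchmean",
    ) * (t * t)


# Hypothetical training loop: `teacher` is the frozen pretrained model and
# `student` wraps the same weights with TokenRouter modules per layer.
# for batch in loader:
#     with torch.no_grad():
#         teacher_out = teacher(batch)
#     student_out = student(batch)   # routers select tokens per layer
#     loss = self_distillation_loss(student_out, teacher_out)
#     loss.backward()                # gradients flow only to the routers
#     optimizer.step(); optimizer.zero_grad()
```

Because only each router's single linear layer is trained while the backbone stays frozen, the added trainable-parameter count remains a vanishing fraction of the model, consistent with the summary's figure of roughly 0.00006%.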
Keywords
» Artificial intelligence » Distillation » Inference » Transformer