Summary of Limits of Transformer Language Models on Learning to Compose Algorithms, by Jonathan Thomm et al.
Limits of Transformer Language Models on Learning to Compose Algorithms
by Jonathan Thomm, Giacomo Camposampiero, Aleksandar Terzic, Michael Hersche, Bernhard Schölkopf, Abbas Rahimi
First submitted to arXiv on: 8 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper investigates how well Transformer language models learn compositional discrete tasks. It trains LLaMA models and prompts GPT-4 and Gemini on four tasks, each of which requires composing multiple discrete sub-tasks (see the illustrative sketch below). The results show that state-of-the-art Transformer language models are highly sample-inefficient: learning a compositional task requires more data samples than relearning all of its sub-tasks from scratch. Moreover, few-shot in-context prompting is unreliable, failing either to execute the sub-tasks or to correct errors in multi-round code generation. |
| Low | GrooveSquid.com (original content) | The paper looks at how well big language models can learn complex tasks by combining smaller skills. It tests these models on four tasks that each require combining several simpler ones. The results show that the models struggle to learn such tasks unless they get a lot of data to practice with. In other words, even when a model is really good at each individual skill, it is hard for it to combine those skills to do something new. |
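To make "composing discrete sub-tasks" concrete, here is a minimal, hypothetical sketch in Python. The sub-tasks below (reverse and increment) are illustrative stand-ins chosen for brevity; they are not the four tasks actually studied in the paper.

```python
# Hypothetical illustration, not the paper's benchmark: a compositional
# task built from two discrete sub-tasks. A model that has learned each
# sub-task in isolation must chain them to solve the composed task.

def sub_task_reverse(tokens):
    """Sub-task 1: reverse a token sequence."""
    return tokens[::-1]

def sub_task_increment(tokens):
    """Sub-task 2: map each integer token to its successor."""
    return [t + 1 for t in tokens]

def composed_task(tokens):
    """Composed task: apply sub-task 1, then sub-task 2."""
    return sub_task_increment(sub_task_reverse(tokens))

# The paper's headline result, restated in these terms: learning
# composed_task from (input, output) pairs alone costs a Transformer
# more samples than learning both sub-tasks separately from scratch.
print(composed_task([3, 1, 2]))  # -> [3, 2, 4]
```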
Keywords
* Artificial intelligence
* Gemini
* GPT
* LLaMA
* Prompting
* Transformer