Summary of Case-based or Rule-based: How Do Transformers Do the Math?, by Yi Hu et al.
Case-Based or Rule-Based: How Do Transformers Do the Math?
by Yi Hu, Xiaojuan Tang, Haotong Yang, Muhan Zhang
First submitted to arXiv on: 27 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates why modern large language models (LLMs), despite their impressive performance on complex tasks, struggle with simple arithmetic such as addition. The authors distinguish two reasoning mechanisms: rule-based and case-based. They find that transformers rely on case-based reasoning for math problems, echoing earlier observations of subgraph matching and shortcut learning. To mitigate this limitation, they propose Rule-Following Fine-Tuning (RFFT), a technique that teaches LLMs explicit rule-based reasoning. With RFFT, fine-tuned LLMs generalize substantially better to longer inputs, achieving over 95% accuracy on addition with numbers of up to 12 digits. |
| Low | GrooveSquid.com (original content) | This paper shows that big language models are not as good at simple math as you might expect. Humans can learn a rule and apply it to new problems, but these models struggle even with adding numbers together. The authors work out why this happens and propose a fix: teaching the models to follow rules step by step, which lets them solve math problems correctly even when the numbers get very long. |
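To make the rule-based vs. case-based distinction concrete, here is a minimal Python sketch of the kind of explicit, step-by-step addition rule (digit-by-digit with a carry) that RFFT-style training encourages a model to follow rather than memorize cases. The function name and structure are illustrative assumptions, not code from the paper:

```python
def add_by_rule(a: str, b: str) -> str:
    """Add two non-negative integers given as digit strings,
    applying the carry rule one digit at a time."""
    a, b = a[::-1], b[::-1]          # process least-significant digits first
    carry, digits = 0, []
    for i in range(max(len(a), len(b))):
        da = int(a[i]) if i < len(a) else 0
        db = int(b[i]) if i < len(b) else 0
        s = da + db + carry
        digits.append(str(s % 10))   # write the current result digit
        carry = s // 10              # propagate the carry to the next column
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# The rule generalizes to any length, e.g. the 12-digit operands
# the paper's fine-tuned models handle:
print(add_by_rule("123456789012", "987654321098"))
```

Because each step applies the same local rule, the procedure works for arbitrarily long operands — exactly the length generalization that case-based pattern matching fails to deliver.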
Keywords
» Artificial intelligence » Fine tuning