Summary of Mars-PO: Multi-Agent Reasoning System Preference Optimization, by Xiaoxuan Lou et al.
Mars-PO: Multi-Agent Reasoning System Preference Optimization
by Xiaoxuan Lou, Chaojie Wang, Bo An
First submitted to arXiv on: 28 Nov 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | High Difficulty Summary Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary Mars-PO is a novel framework for improving the mathematical reasoning capabilities of large language models (LLMs) through a multi-agent system. Because auto-regressive generation can introduce errors, hallucinations, and inconsistencies during multi-step reasoning, achieving high-quality performance in this domain is a significant challenge. Mars-PO combines high-quality outputs from multiple agents into a hybrid positive sample set and pairs them with agent-specific negative samples to construct robust preference pairs for training. This approach yields substantial gains on mathematical reasoning benchmarks, raising the accuracy of the state-of-the-art instruction-tuned LLM Llama3.1-8B-Instruct from 50.38% to 57.82%. Experimental results demonstrate that Mars-PO outperforms other baselines. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary Mars-PO is a new way to help large language models do math better. These models have trouble with math because they can make mistakes and get confused when working through the steps of a problem. Mars-PO gathers lots of good answers from several different agents, then uses those answers to teach the model which solutions to prefer. This helps the model solve math problems more accurately. For example, it made the state-of-the-art Llama3.1-8B-Instruct model answer math questions 7.44 percentage points more accurately than before. |
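The pairing scheme described in the medium summary (a shared hybrid positive set combined with each agent's own negatives) can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the `is_correct` judge, and the all-pairs combination strategy are assumptions made for the example.

```python
# Hypothetical sketch of Mars-PO-style preference-pair construction.
# The real paper's sampling, filtering, and pairing details may differ.

def build_preference_pairs(agent_outputs, is_correct):
    """agent_outputs: dict mapping agent name -> list of candidate solutions.
    is_correct: callable judging whether a solution reaches the right answer.
    Returns, per agent, (preferred, rejected) pairs for DPO-style training."""
    # Hybrid positive set: pool the high-quality (correct) outputs of ALL agents.
    positives = [s for outs in agent_outputs.values()
                 for s in outs if is_correct(s)]
    pairs = {}
    for agent, outs in agent_outputs.items():
        # Agent-specific negatives: this agent's own incorrect outputs.
        negatives = [s for s in outs if not is_correct(s)]
        # Pair every shared positive with each of this agent's negatives.
        pairs[agent] = [(pos, neg) for pos in positives for neg in negatives]
    return pairs
```

The key design point the summary highlights is the asymmetry: positives are shared across agents so every agent learns from the strongest available solutions, while negatives stay agent-specific so each agent's training signal targets its own failure modes.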