Summary of LLM Voting: Human Choices and AI Collective Decision Making, by Joshua C. Yang et al.
LLM Voting: Human Choices and AI Collective Decision Making
by Joshua C. Yang, Damian Dailisan, Marcin Korecki, Carina I. Hausladen, Dirk Helbing
First submitted to arXiv on: 31 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (cs.LG); General Economics (econ.GN)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This paper examines the decision-making processes of Large Language Models (LLMs), specifically GPT-4 and LLaMA-2, and compares their biases to human voting patterns. A dataset from a human voting experiment serves as a baseline for human preferences, while corresponding experiments with LLM agents test how different voting methods and prompt conditions affect outcomes. The results show that both the choice of voting method and the order in which options are presented influence LLM decisions. Interestingly, assigning varied personas can reduce some biases and improve alignment with human choices. Although the Chain-of-Thought approach did not improve prediction accuracy, it holds promise for AI explainability in the voting process. The study also reveals a trade-off between preference diversity and alignment accuracy in LLMs, influenced by temperature settings. These findings suggest that LLMs may produce less diverse collective outcomes and carry biased assumptions when used in voting scenarios, underscoring the need for cautious integration of LLMs into democratic processes. |
Low | GrooveSquid.com (original content) | This paper looks at how big language models make decisions. It’s like a big experiment where we compare what these models choose with what humans would choose. We found that the way we ask the questions and the order in which we present the options make a difference in what the models decide. But if we change the way we ask the questions to sound like different people, it can help make the models’ choices more like human choices. This matters because big language models might be used to help make decisions for us in the future, and we want to make sure they’re making good choices. |
Keywords
* Artificial intelligence * Alignment * GPT * LLaMA * Temperature