Summary of Chain-of-Planned-Behaviour Workflow Elicits Few-Shot Mobility Generation in LLMs, by Chenyang Shao et al.
Chain-of-Planned-Behaviour Workflow Elicits Few-Shot Mobility Generation in LLMs
by Chenyang Shao, Fengli Xu, Bingbing Fan, Jingtao Ding, Yuan Yuan, Meng Wang, Yong Li
First submitted to arXiv on: 15 Feb 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper addresses the limitations of large language models (LLMs) in generating human behavior. While LLMs excel at reasoning tasks, their ability to generate realistic behavior is not yet well understood. The authors propose a novel workflow called Chain-of-Planned-Behaviour (CoPB), inspired by the Theory of Planned Behaviour (TPB). CoPB integrates cognitive structures from TPB — attitude, subjective norms, and perceived behavioural control — to enhance LLMs' ability to reason about human intentions. Experimental results show that CoPB significantly reduces error rates in mobility intention generation. To improve scalability, the authors explore synergies between LLMs and mechanistic models, such as gravity models. By integrating CoPB with gravity models, they achieve better performance while reducing token costs by 97.7%. The proposed workflow also enables automatic label generation for fine-tuning smaller-scale models like LLaMA 3-8B.
Low | GrooveSquid.com (original content) | This paper looks at how big language models can be used to generate human behavior. Right now, these models are really good at tasks that require thinking and problem-solving, but they're not great at understanding why people make certain choices or decisions. The authors came up with a new way to use these models, called Chain-of-Planned-Behaviour (CoPB). CoPB is based on an idea called the Theory of Planned Behaviour, which says that our choices are influenced by things like what we think about something, what other people think about it, and whether we feel in control. The authors show that using this approach with big language models can help them make better predictions about human behavior. They also found a way to use smaller, more affordable versions of these models while still getting good results.
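The two-stage design summarized above — an LLM for TPB-style intention reasoning, then a mechanistic gravity model for destination choice to avoid expensive LLM calls — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the stubbed LLM call, and the simple attractiveness-over-distance form of the gravity model are assumptions made here for clarity.

```python
import math

def gravity_scores(origin, candidates, gamma=2.0):
    """Classic gravity model: P(dest) proportional to attractiveness / distance**gamma.

    candidates maps a place name to (x, y, attractiveness).
    Returns a dict of normalized choice probabilities.
    """
    scores = {}
    for name, (x, y, attractiveness) in candidates.items():
        d = math.dist(origin, (x, y)) or 1e-9  # guard against zero distance
        scores[name] = attractiveness / d ** gamma
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

def copb_step(profile, context, candidates, origin, llm=None):
    """One mobility-generation step in the spirit of CoPB.

    Stage 1 (LLM): a TPB-structured prompt asks the model to reason about
    attitude, subjective norms, and perceived behavioural control before
    stating an intended activity.
    Stage 2 (mechanistic): a gravity model picks the destination, so no
    LLM tokens are spent on location choice.
    """
    prompt = (
        f"Person: {profile}\nContext: {context}\n"
        "Reason step by step about (1) your attitude toward possible "
        "activities, (2) subjective norms, (3) perceived behavioural "
        "control, then state the intended activity."
    )
    # `llm` is any callable prompt -> text; a fixed stub is used when none is wired in.
    intention = llm(prompt) if llm else "shopping"
    probs = gravity_scores(origin, candidates)
    destination = max(probs, key=probs.get)
    return intention, destination
```

A usage note: swapping the `max` for a weighted random draw over `probs` would generate stochastic trajectories rather than always picking the single most likely destination.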
Keywords
» Artificial intelligence » Fine tuning » Llama » Token