Summary of Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry Through Curiosity-Driven Queries, by Roberto Ceraolo et al.
Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries
by Roberto Ceraolo, Dmitrii Kharlapenko, Ahmad Khan, Amélie Reymond, Rada Mihalcea, Bernhard Schölkopf, Mrinmaya Sachan, Zhijing Jin
First submitted to arXiv on: 30 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Machine Learning (stat.ML)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | The paper presents Quriosity, a dataset of 13.5K naturally occurring questions that reflect real-world needs and human curiosity. The dataset comprises queries from search engines, human-to-human conversations, and human-to-LLM interactions. Analysis reveals a significant presence of causal questions (up to 42%), which are characterized by distinctive linguistic properties, cognitive complexity, and source distribution. To identify these causal queries, the authors develop an iterative prompt improvement framework.
Low | GrooveSquid.com (original content) | This paper is about creating a big collection of questions that people ask when they're curious about something. These questions can be really tricky because they often don't have clear answers or are about complex topics. The researchers created a dataset of 13,500 questions to help understand what makes these curiosity-driven questions so challenging. They also developed a new way to identify specific types of questions that try to figure out why something happens. This work can help improve chatbots and how we interact with them.
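The "iterative prompt improvement" idea mentioned in the summaries can be sketched as a loop that scores candidate classification prompts against a small labeled set and keeps the best one. This is a minimal toy illustration, not the authors' actual framework: the names (`classify`, `improve_prompt`) are hypothetical, and a keyword heuristic stands in for a real LLM call.

```python
# Toy sketch of iterative prompt improvement for causal-question detection.
# `classify` is a stand-in for an LLM: it flags a question as causal if it
# contains any cue word listed after the colon in the prompt. All names and
# data here are illustrative, not from the paper.

def classify(prompt: str, question: str) -> bool:
    """Return True if the question matches any cue word named in the prompt."""
    cues = prompt.split(": ")[1].split(", ")
    return any(cue in question.lower() for cue in cues)

def accuracy(prompt: str, labeled: list[tuple[str, bool]]) -> float:
    """Fraction of labeled (question, is_causal) pairs the prompt gets right."""
    return sum(classify(prompt, q) == y for q, y in labeled) / len(labeled)

def improve_prompt(labeled, candidates):
    """Keep whichever candidate prompt scores best on the labeled dev set."""
    return max(candidates, key=lambda p: accuracy(p, labeled))

labeled = [
    ("Why does ice float on water?", True),
    ("What causes inflation?", True),
    ("What time is it in Tokyo?", False),
    ("How old is the Eiffel Tower?", False),
]
candidates = [
    "Flag questions containing: why",
    "Flag questions containing: why, cause",
]

best = improve_prompt(labeled, candidates)
print(best)  # → Flag questions containing: why, cause
```

In the paper's setting, each iteration would instead rewrite the prompt itself (e.g., based on misclassified examples) and re-evaluate, but the select-the-best-scoring-prompt loop above captures the basic shape of the approach.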
Keywords
» Artificial intelligence » Prompt