Summary of Bounding Consideration Probabilities in Consider-Then-Choose Ranking Models, by Ben Aoki-Sherwood et al.
Bounding Consideration Probabilities in Consider-Then-Choose Ranking Models
by Ben Aoki-Sherwood, Catherine Bregou, David Liben-Nowell, Kiran Tomlinson, Thomas Zeng
First submitted to arXiv on: 19 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Multiagent Systems (cs.MA); Econometrics (econ.EM)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv |
Medium | GrooveSquid.com (original content) | A recent machine learning paper presents a novel approach to understanding how people make decisions. The authors model choice as a two-step process: people first select which alternatives to consider, and then make a final choice among them. Inferring the unobserved consideration sets (or item consideration probabilities) is challenging because these models are non-identifiable even when item utilities are known. To address this, the researchers extend the consider-then-choose model to a top-k ranking setting, where rankings are constructed according to a Plackett-Luce model after a consideration set is sampled (a code sketch of this generative model appears below the table). While consideration probabilities remain non-identifiable, the authors prove that knowledge of item utilities allows them to infer bounds on the relative sizes of consideration probabilities, and they derive absolute upper and lower bounds on consideration probabilities under certain conditions. The paper also provides algorithms that tighten these bounds by propagating inferred constraints. To demonstrate the methods' effectiveness, the researchers apply them to a ranking dataset from a psychology experiment with two ranking tasks, one with fixed consideration sets and one with unknown consideration sets; this combination lets them estimate utilities from the first task and then use their bounds to learn about the unknown consideration probabilities in the second. |
Low | GrooveSquid.com (original content) | People make choices in two steps: they decide which options to consider, and then pick the best option from those considered. But how do we figure out what people are considering? The answer lies in a new approach that combines machine learning with psychology. By looking at how people rank things, we can learn about what they are considering when they make decisions. This is useful because it helps us understand why people choose one thing over another. |
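To make the generative model in the medium-difficulty summary concrete, here is a minimal sketch (not the authors' code). It assumes each item enters the consideration set independently with its own consideration probability, then draws a top-k Plackett-Luce ranking over the considered items using exponentiated utilities. The function name `sample_top_k_ranking` and all parameter names are hypothetical.

```python
import numpy as np

def sample_top_k_ranking(utilities, consider_probs, k, rng=None):
    """Illustrative consider-then-choose sampler (hypothetical, not the paper's code).

    Step 1: each item is considered independently with its consideration probability.
    Step 2: a Plackett-Luce ranking over the considered items is drawn using
            exp(utility) weights, and the first k entries are returned.
    """
    rng = np.random.default_rng() if rng is None else rng
    utilities = np.asarray(utilities, dtype=float)
    consider_probs = np.asarray(consider_probs, dtype=float)
    items = np.arange(len(utilities))

    # Step 1: sample the (unobserved) consideration set.
    considered = items[rng.random(len(items)) < consider_probs]
    if considered.size == 0:
        return []  # empty consideration set: no ranking is produced

    # Step 2: Plackett-Luce sampling: repeatedly pick the next item with
    # probability proportional to exp(utility) among the remaining items.
    weights = np.exp(utilities[considered])
    remaining = list(range(considered.size))
    ranking = []
    while remaining and len(ranking) < k:
        w = weights[remaining]
        idx = rng.choice(len(remaining), p=w / w.sum())
        ranking.append(int(considered[remaining.pop(idx)]))
    return ranking

# Example: three items with utilities 1.0 > 0.5 > 0.0 and decreasing
# consideration probabilities; request a top-2 ranking.
print(sample_top_k_ranking([1.0, 0.5, 0.0], [0.9, 0.6, 0.3], k=2))
```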
Keywords
- Artificial intelligence
- Machine learning