Summary of AnthroScore: A Computational Linguistic Measure of Anthropomorphism, by Myra Cheng et al.
AnthroScore: A Computational Linguistic Measure of Anthropomorphism
by Myra Cheng, Kristina Gligorić, Tiziano Piccardi, Dan Jurafsky
First submitted to arXiv on: 3 Feb 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | Anthropomorphism, the attribution of human-like characteristics to non-human entities, has shaped conversations about the impacts and possibilities of technology. The authors present AnthroScore, an automatic metric that quantifies how non-human entities are implicitly framed as human by the language surrounding them. The metric corresponds with human judgments of anthropomorphism and with dimensions of anthropomorphism described in the social science literature. To analyze how anthropomorphism shapes scientific discourse, the authors apply AnthroScore to 15 years of research papers and news articles. They find that anthropomorphism in research papers has steadily increased over time, particularly in papers about language models, and that the increase in ACL papers mirrors advances in neural methods. News headlines exhibit higher levels of anthropomorphism than the research papers they cite. Because AnthroScore is lexicon-free, it can be applied directly to a wide variety of text sources.
Low | GrooveSquid.com (original content) | Anthropomorphism makes technology seem more like us, but how does this affect what we read and learn? A new tool called AnthroScore measures when language gives human-like qualities to things that aren't human. This matters because scientists and journalists should be careful not to mislead people with their words. The researchers used AnthroScore to examine 15 years of science papers and news articles. They found that the language in these papers has become more anthropomorphic over time, especially in papers about language models. News headlines were even more anthropomorphic than the scientific papers they reported on. This new tool can help us understand how language shapes our perceptions.
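To make the "lexicon-free" idea concrete, here is a minimal sketch of the kind of score AnthroScore computes. The assumption (not spelled out in the summaries above) is that the entity mention is masked and a masked language model assigns probabilities to pronouns in its place; the score is then the log-odds of human versus non-human pronouns. The pronoun sets and the probability values below are illustrative placeholders, not the paper's exact configuration or real model outputs.

```python
import math

# Hypothetical pronoun sets for illustration.
HUMAN_PRONOUNS = {"he", "she", "him", "her", "his"}
NONHUMAN_PRONOUNS = {"it", "its"}

def anthroscore(pronoun_probs: dict[str, float]) -> float:
    """Log-odds of human vs. non-human framing for one masked entity mention.

    `pronoun_probs` maps candidate fill-in words to the probability a masked
    language model assigns them at the entity's position. Positive scores mean
    the surrounding context favors human pronouns (anthropomorphic framing).
    """
    p_human = sum(p for w, p in pronoun_probs.items() if w in HUMAN_PRONOUNS)
    p_nonhuman = sum(p for w, p in pronoun_probs.items() if w in NONHUMAN_PRONOUNS)
    return math.log(p_human / p_nonhuman)

# Made-up fill-in probabilities for a sentence like
# "<mask> understands the question."
probs = {"it": 0.60, "he": 0.15, "she": 0.10, "its": 0.05}
score = anthroscore(probs)  # negative here: non-human framing dominates
```

In a real pipeline the probabilities would come from a masked language model (the paper uses a BERT-family model) evaluated once per entity mention, and mention-level scores would be averaged over a document.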
Keywords
» Artificial intelligence » Discourse » Language model