Trust and Terror: Hazards in Text Reveal Negatively Biased Credulity and Partisan Negativity Bias

by Keith Burghardt, Daniel M.T. Fessler, Chyna Tang, Anne Pisor, Kristina Lerman

First submitted to arXiv on: 28 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
A novel neural network model is developed to detect information concerning hazards in social media text, a signal often overlooked in sentiment analysis. The model is trained on a collection of annotated posts and urban legends, and it outperforms human annotator proxies such as GPT-4. The extracted hazard signal is only weakly correlated with related indicators like moral outrage, sentiment, emotions, and threat words, although it correlates positively with fear and negatively with joy. The model is applied to three datasets: COVID-19-related posts, posts about the 2023 Hamas-Israel war, and a new collection of urban legends. The results reveal words uniquely associated with hazards in each dataset, as well as differences between groups such as conservatives and liberals. Hazard information also peaks in frequency after major hazard events, making it a potential automated indicator of such events.
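
To make the modeling step concrete, the sketch below shows one way such a hazard classifier could be built: fine-tuning a small transformer for binary text classification with the Hugging Face transformers library. The base model name, toy training examples, and hyperparameters are illustrative assumptions, not the authors' actual architecture or data.

```python
# Minimal sketch: fine-tune a small transformer to flag hazard information
# in short texts. Base model, data, and hyperparameters are placeholders,
# not the paper's actual setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "distilbert-base-uncased"  # assumed base model

class HazardDataset(Dataset):
    """(text, 0/1 hazard label) pairs, standing in for annotated posts."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=128, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME,
                                                           num_labels=2)

# Toy examples standing in for the annotated posts and urban legends.
texts = ["Flood waters are rising fast near the bridge, stay away!",
         "Had a lovely brunch with friends this morning."]
labels = [1, 0]  # 1 = conveys hazard information, 0 = does not

loader = DataLoader(HazardDataset(texts, labels, tokenizer), batch_size=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the toy data
    for batch in loader:
        loss = model(**batch).loss  # cross-entropy, since labels are given
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Score a new post: probability that it conveys hazard information.
model.eval()
with torch.no_grad():
    enc = tokenizer("Reports of contaminated tap water downtown",
                    return_tensors="pt")
    p_hazard = torch.softmax(model(**enc).logits, dim=-1)[0, 1].item()
print(f"P(hazard) = {p_hazard:.2f}")
```

Per-post scores from a model like this are what get compared against the sentiment, emotion, and threat-word indicators mentioned above, and aggregated over time in the datasets studied.
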
Low Difficulty Summary (original content by GrooveSquid.com)
Hazards mentioned in social media text are often overlooked, but they’re important to understand. A new model is developed to find these hazards, using a type of artificial intelligence called a neural network. The model learns from many annotated posts and urban legends what hazard mentions look like. It’s better at finding hazards than stand-ins for human annotators, such as GPT-4, even when a hazard isn’t stated directly. The results show that different groups of people use different words to talk about hazards, and that these words become more frequent after big hazard events.
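
The "frequency peaks after big events" finding suggests a simple way to use the model as an automated indicator. Below is a minimal sketch, assuming a table of timestamped posts already scored by a hazard classifier like the one sketched earlier; the column names, toy data, and the 0.5 cutoff are illustrative assumptions rather than the paper's actual pipeline.

```python
# Minimal sketch: track the daily share of hazard-flagged posts and
# flag days that spike well above the average. Toy data throughout.
import pandas as pd

posts = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2023-10-05", "2023-10-06", "2023-10-07",
        "2023-10-07", "2023-10-08", "2023-10-09",
    ]),
    "p_hazard": [0.10, 0.15, 0.85, 0.90, 0.70, 0.20],  # classifier scores
})

THRESHOLD = 0.5  # assumed cutoff for calling a post "hazard-bearing"
posts["is_hazard"] = posts["p_hazard"] > THRESHOLD

# Daily fraction of hazard posts; a spike suggests a hazard event.
daily = posts.set_index("timestamp").resample("D")["is_hazard"].mean()
spikes = daily[daily > daily.mean() + daily.std()]

print(daily)
print("Possible hazard-event days:", list(spikes.index.date))
```
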

Keywords

* Artificial intelligence
* GPT
* Neural network