Research on Predicting Public Opinion Event Heat Levels Based on Large Language Models
by Yi Ren, Tianyi Zhang, Weibin Li, DuoMu Zhou, Chenhao Qin, FangCheng Dong
First submitted to arXiv on: 27 Sep 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The study develops a novel method for predicting public opinion event heat levels using large language models. The approach preprocesses and classifies Chinese hot-event data, clusters the events into four heat levels with MiniBatchKMeans (a minimal sketch of this step appears after the table), and then evaluates the accuracy of various large language models in predicting event heat levels in two scenarios: without reference cases and with similar case references. GPT-4o and DeepseekV2 performed best when similar reference cases were provided, achieving prediction accuracies of 41.4% and 41.5%, respectively. While the overall prediction accuracy remains relatively low, the study highlights the potential of large language models to improve public opinion event heat level prediction given a more robust dataset. |
Low | GrooveSquid.com (original content) | This study uses big language models to predict how much public attention events get. It takes a large set of Chinese event data and groups the events into four categories based on how much people talked about each one online. The researchers then test how well different language models can guess which category an event belongs to, with and without examples to follow. They found that two models, GPT-4o and DeepseekV2, did the best job when given examples to work from. While the overall results aren't very accurate yet, they show promise for using big language models to understand public opinion in the future. |
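
The summaries mention clustering events into four heat levels with MiniBatchKMeans. The sketch below illustrates what such a step could look like; the engagement features, dataset size, and the mapping from clusters to ordered heat levels are illustrative assumptions, not the paper's actual data or code.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-event engagement features (e.g. post, comment, repost counts);
# the paper's actual feature set is not reproduced here.
rng = np.random.default_rng(42)
X = rng.lognormal(mean=3.0, sigma=1.5, size=(1000, 3))

# Standardize so no single engagement metric dominates the Euclidean distance.
X_scaled = StandardScaler().fit_transform(X)

# Cluster events into four groups, mirroring the MiniBatchKMeans step described above.
km = MiniBatchKMeans(n_clusters=4, n_init=3, random_state=0)
cluster_ids = km.fit_predict(X_scaled)

# K-means cluster labels are unordered, so rank clusters by mean raw engagement
# and relabel them as heat levels 1 (low) through 4 (high).
cluster_order = np.argsort([X[cluster_ids == c].mean() for c in range(4)])
to_level = {int(c): level + 1 for level, c in enumerate(cluster_order)}
heat_levels = np.array([to_level[int(c)] for c in cluster_ids])

print(np.bincount(heat_levels)[1:])  # event count per heat level 1..4
```

The relabeling step matters because k-means assigns arbitrary cluster ids; ordering clusters by average engagement is one simple way to turn them into the graded heat levels that the language models are then asked to predict.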
Keywords
» Artificial intelligence » Clustering » GPT