
Summary of Tailoring Vaccine Messaging with Common-Ground Opinions, by Rickard Stureborg et al.

Tailoring Vaccine Messaging with Common-Ground Opinions

by Rickard Stureborg, Sanxing Chen, Ruoyu Xie, Aayushi Patel, Christopher Li, Chloe Qinyu Zhu, Tingnan Hu, Jun Yang, Bhuwan Dhingra

First submitted to arXiv on: 17 May 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper but are written at different levels of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper presents a novel approach to personalizing chatbot interactions by establishing common ground with the intended reader. It focuses on addressing vaccine concerns and misinformation, a domain where mutual understanding is especially important. The authors introduce the concept of a Common-Ground Opinion (CGO) and propose TAILOR-CGO, a dataset for evaluating how well responses are tailored to provided CGOs. They benchmark several major language models on this task and find that GPT-4-Turbo outperforms the others. They also develop automatic evaluation metrics using BERT, which outperform finetuned language models. Finally, the authors investigate how to successfully tailor vaccine messaging to CGOs and provide actionable recommendations. (An illustrative sketch of this tailoring setup appears after the summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making chatbots more helpful by understanding what people already believe. The authors want to use this idea to help fix the problem of false information about vaccines. Right now, it is hard for chatbots to respond to people’s opinions because those opinions don’t always match up with each other. The authors created a special dataset called TAILOR-CGO to test how well different language models can respond to people’s beliefs. They found that one model, GPT-4-Turbo, does better than the others at this task. They also developed new ways to measure how good the responses are and used these methods to investigate how to make chatbots more helpful when talking about vaccines.

Keywords

» Artificial intelligence  » BERT  » GPT