
Summary of "Designing a Dashboard for Transparency and Control of Conversational AI," by Yida Chen et al.


Designing a Dashboard for Transparency and Control of Conversational AI

by Yida Chen, Aoyu Wu, Trevor DePodesta, Catherine Yeh, Kenneth Li, Nicholas Castillo Marin, Oam Patel, Jan Riecke, Shivam Raval, Olivia Seow, Martin Wattenberg, Fernanda Viégas

First submitted to arXiv on: 12 Jun 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This end-to-end prototype aims to increase the transparency of conversational large language models (LLMs) by combining interpretability techniques with user-experience design. The study shows that a prominent open-source LLM maintains an internal "user model" from which information about the user's age, gender, education level, and socioeconomic status can be extracted. The authors design a dashboard that displays this user model in real time and lets users control the system's behavior. In a study, users who conversed with the instrumented system appreciated the transparency, found that the dashboard could expose biased behavior, and reported an increased sense of control. Participants also suggested directions for future research.

Low Difficulty Summary (written by GrooveSquid.com, original content)
Conversational language models are like black boxes: we don't know why they give us certain answers. This is a problem, because their answers can be unfair or misleading. To address this, researchers built a new system that lets users see inside the model and change how it works. They showed that an open-source model keeps a "user model" with information about the user's age, gender, education level, and socioeconomic status. A dashboard lets users see this information in real time and adjust the model's behavior. Users liked being able to see what was going on and suggested future improvements.

Keywords

* Artificial intelligence