
Summary of Measuring and Controlling Instruction (In)Stability in Language Model Dialogs, by Kenneth Li et al.


Measuring and Controlling Instruction (In)Stability in Language Model Dialogs

by Kenneth Li, Tianle Liu, Naomi Bashkansky, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg

First submitted to arXiv on: 13 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

The high difficulty summary is the paper’s original abstract, which can be read on arXiv.

Medium Difficulty Summary (original GrooveSquid.com content)

System prompting is a key tool for customizing language-model chatbots: a system prompt gives the model instructions it is expected to follow for the rest of the conversation. A common assumption is that the model keeps obeying these instructions consistently as the dialog goes on. Our research reveals that this is often not the case: popular models such as LLaMA2-chat-70B and GPT-3.5 exhibit significant instruction drift, losing track of their initial instructions within eight rounds of self-chat between two instructed chatbots. We investigate this phenomenon and attribute it to attention decay in the transformer over long exchanges. To address it, we propose split-softmax, a lightweight method that re-emphasizes attention to the system prompt at decoding time and compares favorably against strong baselines; a rough illustration of the idea appears after the summaries below.

Low Difficulty Summary (original GrooveSquid.com content)

Chatbots are like super-smart AI friends that can talk with us! They are given special instructions, called a “system prompt,” that tell them how to behave, for example, “always answer politely” or “only tell jokes.” But did you know that chatbots can slowly stop following those instructions as a conversation goes on? That’s called instruction drift, and it happens even with really smart models like LLaMA2-chat-70B and GPT-3.5! Our research shows that this drift sets in after just a few rounds of conversation between two chatbots that were each given instructions to follow. We think this happens because the models pay less and less attention to their original instructions as the chat gets longer, so we came up with a method, called split-softmax, to help them stay focused.
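
To make the medium summary’s mention of split-softmax a bit more concrete, here is a minimal sketch of the general idea: after the softmax, the attention a decoder spreads over the context is reweighted so that a larger share of the total mass stays on the system-prompt tokens. The function name, the target_mass parameter, and the exact rescaling rule below are illustrative assumptions for this sketch, not the paper’s precise formulation.

```python
import numpy as np

def boost_system_prompt_attention(attn, n_system, target_mass=0.3):
    """Illustrative post-softmax reweighting (an assumption-laden sketch,
    not the paper's exact split-softmax rule).

    attn        -- 1-D array of attention probabilities over context positions (sums to 1)
    n_system    -- number of leading positions occupied by the system prompt
    target_mass -- assumed minimum share of attention to keep on the system prompt
    """
    attn = np.asarray(attn, dtype=float)
    sys_mass = attn[:n_system].sum()
    if sys_mass == 0.0 or sys_mass >= target_mass:
        return attn  # nothing to boost, or the instructions already get enough attention
    out = attn.copy()
    # Scale the system-prompt positions up to the target share of attention...
    out[:n_system] *= target_mass / sys_mass
    # ...and scale the remaining positions down so the row still sums to 1.
    out[n_system:] *= (1.0 - target_mass) / (1.0 - sys_mass)
    return out

# Toy example: attention that has drifted almost entirely away from a 4-token system prompt.
drifted = np.array([0.02, 0.01, 0.01, 0.01] + [0.95 / 20] * 20)
print(boost_system_prompt_attention(drifted, n_system=4).round(3))
```

In a real model this kind of reweighting would be applied inside the attention heads during decoding; the toy example only shows how the probability mass of a single attention row shifts back toward the system prompt.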

Keywords

  • Artificial intelligence
  • Attention
  • GPT
  • Language model
  • Prompting
  • Softmax
  • Transformer