
Summary of Non-Linear Inference Time Intervention: Improving LLM Truthfulness, by Jakub Hoscilowicz et al.


Non-Linear Inference Time Intervention: Improving LLM Truthfulness

by Jakub Hoscilowicz, Adam Wiacek, Jan Chojnacki, Adam Cieslak, Leszek Michon, Vitalii Urbanevych, Artur Janicki

First submitted to arXiv on: 27 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A medium-difficulty summary: This paper probes the internal representation space of Large Language Models (LLMs) to pinpoint attention heads that carry accurate information. Building on the Inference Time Intervention (ITI) framework, which steers model activations at inference time and therefore requires no fine-tuning, the authors propose Non-Linear ITI (NL-ITI), which adds non-linear, multi-token probing and intervention. Evaluated on several multiple-choice benchmarks, including TruthfulQA, NL-ITI delivers a 16% relative improvement in MC1 accuracy over baseline ITI and a 10% relative improvement over the Truth Forest (TrFf) method.
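The core mechanism behind ITI-style methods can be sketched in a few lines: fit a probe on labeled attention-head activations, then shift activations along the resulting "truthfulness" direction at inference time, leaving the model weights untouched. The toy sketch below is an illustrative assumption, not the authors' code: it uses synthetic activations and a simple linear mass-mean probe, whereas NL-ITI's contribution is to replace this with a non-linear (MLP) probe applied to multiple token positions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy dimension of one attention head's activation

# Synthetic head activations for "truthful" vs "untruthful" answers
# (two Gaussian clusters separated along a hidden ground-truth axis).
axis = np.ones(d) / np.sqrt(d)
truthful = rng.normal(size=(200, d)) + 2.0 * axis
untruthful = rng.normal(size=(200, d)) - 2.0 * axis

# Probe step: estimate a truthfulness direction from labeled activations.
# NL-ITI swaps this linear mass-mean probe for a small MLP over several
# tokens; a linear probe keeps the sketch short.
direction = truthful.mean(axis=0) - untruthful.mean(axis=0)
direction /= np.linalg.norm(direction)

# Typical scale of activations along the probe direction.
sigma = np.concatenate([truthful, untruthful]).dot(direction).std()

def intervene(activations, alpha=5.0):
    """Shift head activations toward 'truthful' at inference time;
    no model weights are updated (no fine-tuning)."""
    return activations + alpha * sigma * direction

# Untruthful activations score higher on the probe after the shift.
before = untruthful.dot(direction).mean()
after = intervene(untruthful).dot(direction).mean()
print(f"mean truthfulness score: {before:.2f} -> {after:.2f}")
```

The shift strength `alpha` and the choice of which heads to intervene on are hyperparameters in this family of methods; too large a shift degrades fluency, too small has no effect.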
Low Difficulty Summary (written by GrooveSquid.com, original content)
A low-difficulty summary: This research explores what Large Language Models really know and how they can be improved. The team developed a new way to make these models more accurate without needing lots of extra training. They tested this new approach on various questions and found that it works much better than before, especially for tricky ones. Overall, this breakthrough could lead to even smarter language models in the future.

Keywords

* Artificial intelligence  * Attention  * Fine tuning  * Inference  * Token