Summary of Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models, by Qin Liu et al.
Unraveling and Mitigating Safety Alignment Degradation of Vision-Language Models
by Qin Liu, Chao Shang, Ling Liu, Nikolaos Pappas, Jie Ma, Neha Anna John, Srikanth Doss, Lluis Marquez, Miguel Ballesteros, Yassine Benajiba
First submitted to arXiv on: 11 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper investigates a phenomenon called “safety alignment degradation” in Vision-Language Models (VLMs): integrating a vision module degrades safety alignment relative to the Large Language Model (LLM) backbone. The issue arises from a representation gap between text-only and multi-modal inputs, which causes the safety alignment capabilities developed on textual embeddings to fail on the new modality. To address this challenge, the authors introduce Cross-Modality Representation Manipulation (CMRM), an inference-time intervention method that recovers safety alignment while preserving the VLM’s functional capabilities, without additional training (see the sketch below the table). Using LLaVA-7B as a case study, CMRM reduces the unsafe rate on multi-modal inputs from 61.53% to as low as 3.15%. |
| Low | GrooveSquid.com (original content) | This paper looks at how safely language models respond when given pictures along with text. The authors found that when these models are used for image-text tasks, they tend to forget what they learned about being safe during their text-only training. To fix this problem, the researchers developed a way to adjust the model’s internal representations so it stays safe and fluent even when processing images. The results show that this method works well, cutting unsafe responses by over 90% without requiring any additional training. |
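
To make the idea behind CMRM concrete, here is a minimal PyTorch sketch of an inference-time representation shift. It is an illustration, not the authors’ implementation: the function names, the mean-difference estimate of the gap, and the `alpha` scaling knob are assumptions introduced here. The paper’s method likewise shifts multi-modal hidden states toward the LLM backbone’s text-only distribution, but the exact construction differs.

```python
import torch


def compute_calibration_vector(text_hidden: torch.Tensor,
                               multimodal_hidden: torch.Tensor) -> torch.Tensor:
    """Estimate the text-vs-multi-modal representation gap.

    Both tensors hold hidden states of shape (num_examples, hidden_dim),
    collected offline from the same VLM on a small calibration set.
    (Assumption for illustration: a simple mean difference stands in for
    the paper's anchor-based estimate.)
    """
    return text_hidden.mean(dim=0) - multimodal_hidden.mean(dim=0)


def cmrm_intervene(hidden_states: torch.Tensor,
                   calibration_vector: torch.Tensor,
                   alpha: float = 1.0) -> torch.Tensor:
    """Shift multi-modal hidden states toward the text-only distribution.

    Applied at inference time only; no model weights are updated.
    `alpha` is a hypothetical knob trading safety recovery against utility.
    """
    return hidden_states + alpha * calibration_vector
```

In practice such a shift would be injected into one or more decoder layers during generation (for example, via a forward hook), which is what makes the intervention training-free.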
Keywords
» Artificial intelligence » Alignment » Inference » Language model » Multimodal