
Summary of Pruning for Protection: Increasing Jailbreak Resistance in Aligned LLMs Without Fine-Tuning, by Adib Hasan et al.


Pruning for Protection: Increasing Jailbreak Resistance in Aligned LLMs Without Fine-Tuning

by Adib Hasan, Ileana Rugina, Alex Wang

First submitted to arXiv on: 19 Jan 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL); Cryptography and Security (cs.CR)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores how compressing Large Language Models (LLMs) affects their ability to resist "jailbreaking" attacks. The researchers find that moderate pruning can improve jailbreak resistance without retraining the models, while maintaining performance on standard benchmarks. To evaluate this, they create a dataset of 225 harmful tasks across five categories and analyze three LLMs: LLaMA-2 Chat, Vicuna 1.3, and Mistral Instruct v0.2. The results show that the safety gains from pruning correlate with each model's initial safety level, and that pruning changes attention patterns and perplexity: attention becomes sharper and the models grow more sensitive to the artificial constructs used in jailbreak prompts. (A rough code sketch of this kind of pruning follows the summaries below.)

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how making Large Language Models smaller affects how easily they can be "jailbroken". It finds that trimming the models a little can make them harder to trick without retraining, while keeping them good at normal tests. To check this, the authors made a list of 225 harmful requests across five categories and tested three models: LLaMA-2 Chat, Vicuna 1.3, and Mistral Instruct v0.2.

Keywords

  • Artificial intelligence
  • Attention
  • Llama
  • Perplexity
  • Pruning