HarmAug: Effective Data Augmentation for Knowledge Distillation of Safety Guard Models
by Seanie Lee, Haebin Seong, Dong Bok Lee, Minki Kang, Xiaoyin Chen, Dominik Wagner, Yoshua Bengio, Juho Lee, Sung Ju Hwang
First submitted to arXiv on: 2 Oct 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here. |
| Medium | GrooveSquid.com (original content) | A new approach for securing large language models (LLMs) against malicious queries is presented. The method distills a large teacher safety guard model into a smaller one using labeled data, augmenting that dataset with harmful instructions generated by prompting an LLM. The result is a smaller yet effective safety guard model that can be deployed on mobile devices without prohibitive memory or latency costs. The proposed approach outperforms existing baselines and achieves results comparable to larger models at a fraction of the computational cost (see the code sketch after this table). |
| Low | GrooveSquid.com (original content) | Large language models are becoming increasingly important in many areas of life, but they also need to be kept safe from malicious queries. To do this, we need safety guard models that can detect these bad queries. However, making these models work on mobile devices is tricky because they require a lot of memory and time. In this research, scientists found a way to shrink a large teacher model into a smaller one using labeled data. They also came up with a new way to add more "bad" examples to the training data by asking another LLM to write harmful prompts. This makes the small safety guard model better at detecting malicious queries without needing as much memory or time. The results are very promising! |
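The medium-difficulty summary describes two concrete steps: prompting an LLM to generate additional harmful instructions, and distilling a large teacher safety guard model into a small student classifier using the teacher's scores as soft labels. Below is a minimal PyTorch sketch of that recipe. It is not the authors' code: the generator model, the teacher checkpoint name, the generation prompt, and the hyperparameters are all illustrative placeholders, and the soft-label binary cross-entropy is just one common distillation objective for a binary guard model.

```python
import torch
import torch.nn.functional as F
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# --- Step 1: data augmentation -------------------------------------------
# Prompt a generator LLM for extra "harmful" training examples. The paper
# uses a carefully constructed jailbreak prompt; this one-liner is a stand-in.
gen_tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder generator
generator = AutoModelForCausalLM.from_pretrained("gpt2").to(device)
prompt = "Example of a harmful instruction a user might send to a chatbot:"
enc = gen_tok(prompt, return_tensors="pt").to(device)
out = generator.generate(**enc, max_new_tokens=48, do_sample=True, top_p=0.9)
synthetic = gen_tok.decode(out[0][enc["input_ids"].shape[1]:],
                           skip_special_tokens=True)

# --- Step 2: teacher labeling --------------------------------------------
# A large safety guard model scores the generated instruction. The checkpoint
# name below is hypothetical; substitute a real guard model with a
# classification head (or map a generative guard model's verdict to a score).
teacher_tok = AutoTokenizer.from_pretrained("org/large-safety-guard")  # hypothetical
teacher = AutoModelForSequenceClassification.from_pretrained(
    "org/large-safety-guard", num_labels=1).to(device).eval()
with torch.no_grad():
    t_logit = teacher(**teacher_tok(synthetic,
                                    return_tensors="pt").to(device)).logits
    t_prob = torch.sigmoid(t_logit.squeeze(-1))  # teacher's P(harmful), a soft label

# --- Step 3: knowledge distillation --------------------------------------
# Train a small student classifier to reproduce the teacher's soft labels.
student_tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
student = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=1).to(device)
optimizer = torch.optim.AdamW(student.parameters(), lr=2e-5)

optimizer.zero_grad()
s_logit = student(**student_tok(synthetic,
                                return_tensors="pt").to(device)).logits.squeeze(-1)
loss = F.binary_cross_entropy_with_logits(s_logit, t_prob)  # soft-label BCE
loss.backward()
optimizer.step()
```

In practice this loop would run over the original labeled dataset plus many generated instructions, and the soft-label distillation term is typically combined with a hard-label loss on the human annotations.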
Keywords
- Artificial intelligence
- Teacher model