Summary of Latent Guard: a Safety Framework for Text-to-image Generation, by Runtao Liu et al.
Latent Guard: a Safety Framework for Text-to-image Generation
by Runtao Liu, Ashkan Khakzar, Jindong Gu, Qifeng Chen, Philip Torr, Fabio Pizzati
First submitted to arXiv on: 11 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary Read the original abstract on arXiv |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary The paper proposes Latent Guard, a framework for improving safety measures in text-to-image (T2I) generation. Existing approaches rely on text blacklists, which are easily circumvented, or on harmful content classification, which requires large datasets. Latent Guard instead learns a latent space on top of the T2I model’s text encoder, where the presence of harmful concepts in an input prompt can be detected. The framework combines a data generation pipeline based on large language models, ad hoc architectural components, and a contrastive learning strategy; its effectiveness is verified on three datasets and against four baselines. This work aims to prevent the misuse of T2I models for generating harmful images. A rough code sketch of the detection idea follows this table. |
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper wants to make sure that machines can’t be asked to create bad pictures from text. Right now, there are ways to stop this from happening, but they’re not very good. The team came up with a new idea called Latent Guard, which helps keep the machines from making bad things. It works by looking at what’s inside the machine’s “brain” when it reads a request, before it creates anything. This way, we can catch bad ideas before they become real pictures. |
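
The medium-difficulty summary mentions a latent space learned on top of the frozen text encoder with a contrastive objective, used to detect blacklisted concepts in input prompts. Below is a minimal PyTorch sketch of that general idea, not the authors’ implementation: the frozen encoder embeddings are replaced by random tensors as stand-ins, and names such as `ConceptDetector`, `contrastive_loss`, the temperature, and the detection threshold are illustrative assumptions.

```python
# Minimal sketch of a Latent-Guard-style concept detector (assumptions noted below).
# The "frozen text encoder" outputs are simulated with random tensors; in practice
# they would come from the T2I model's text encoder (e.g., CLIP's).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptDetector(nn.Module):
    """Learned projection head that maps frozen text embeddings into a
    latent space where harmful concepts can be matched by cosine similarity."""

    def __init__(self, enc_dim: int = 512, proj_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(enc_dim, proj_dim),
            nn.ReLU(),
            nn.Linear(proj_dim, proj_dim),
        )

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.proj(text_emb), dim=-1)


def contrastive_loss(prompt_z, concept_z, labels, temperature: float = 0.07):
    # InfoNCE-style objective: each unsafe prompt is pulled toward the embedding
    # of the concept it contains and pushed away from the other concepts.
    logits = prompt_z @ concept_z.t() / temperature  # (batch, num_concepts)
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    torch.manual_seed(0)
    enc_dim, num_concepts, batch = 512, 8, 16

    # Stand-ins for frozen text-encoder embeddings of prompts and blacklisted concepts.
    prompt_emb = torch.randn(batch, enc_dim)
    concept_emb = torch.randn(num_concepts, enc_dim)
    labels = torch.randint(0, num_concepts, (batch,))  # concept contained in each prompt

    model = ConceptDetector(enc_dim)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(100):
        opt.zero_grad()
        loss = contrastive_loss(model(prompt_emb), model(concept_emb), labels)
        loss.backward()
        opt.step()

    # Inference: flag a prompt if its similarity to any blacklisted concept
    # exceeds a threshold (the 0.5 value here is an arbitrary assumption).
    with torch.no_grad():
        sims = model(prompt_emb) @ model(concept_emb).t()
        flagged = sims.max(dim=-1).values > 0.5
    print(flagged)
```

Because only a small projection head is trained while the text encoder stays frozen, checking a prompt at inference time amounts to one extra forward pass and a similarity lookup, which is what makes this kind of latent-space check cheap to run before generation.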
Keywords
- Artificial intelligence
- Classification
- Encoder
- Image generation
- Latent space