PLPP: Prompt Learning with Perplexity Is Self-Distillation for Vision-Language Models

by Biao Liu, Wenyi Fang, Xiaoyu Wu, Yang Zheng, Zheng Hu, Bo Yuan

First submitted to arXiv on: 18 Dec 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
Pre-trained Vision-Language (VL) models such as CLIP have shown excellent results across various downstream tasks. Context Optimization (CoOp) further improves VL model performance through prompt learning: a set of learnable context vectors is optimized while the entire CLIP model stays frozen. However, relying solely on the CLIP loss to fine-tune prompts makes the model prone to overfitting. To address this issue, the authors propose PLPP (Prompt Learning with PerPlexity), a plug-in prompt-regularization method that adds a perplexity loss to prompt learning. PLPP computes this loss in two steps: first, it calculates the cosine similarity between the weights of the embedding layer and the prompt vectors; second, it introduces a language model head, which requires no training, to output a word probability distribution. Experiments on four classification tasks demonstrate that PLPP outperforms existing methods. A code sketch of these two steps appears after the summaries below.
Low Difficulty Summary (original content by GrooveSquid.com)
This research paper is about improving how pre-trained Vision-Language models handle new tasks. These models are very good at many things, but during fine-tuning they can become too specialized and stop adapting well to new challenges. Building on prompt learning, where small learnable "prompts" help the model adapt more effectively, the researchers propose a method called PLPP (Prompt Learning with PerPlexity) that keeps the model from overfitting in this way. With this new approach, they achieved better results on four different classification tasks.
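
To make the two steps in the medium summary concrete, here is a minimal PyTorch sketch of how a perplexity-style prompt regularizer could be wired up. This is an illustration, not the authors' implementation: the function name perplexity_loss, the temperature value, and the entropy-based form of the penalty are all assumptions, and the paper's exact loss may differ.

```python
import torch
import torch.nn.functional as F

def perplexity_loss(prompts: torch.Tensor,
                    embedding_weight: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Hypothetical PLPP-style regularizer for learnable prompt vectors.

    prompts:          (n_ctx, dim) learnable context vectors (as in CoOp)
    embedding_weight: (vocab_size, dim) frozen token-embedding matrix of the
                      text encoder, reused as an untrained LM head
    """
    # Step 1: cosine similarity between each prompt vector and every token
    # embedding gives per-position logits over the vocabulary. The
    # temperature scaling is an assumption, not taken from the paper.
    logits = (F.normalize(prompts, dim=-1)
              @ F.normalize(embedding_weight, dim=-1).T) / temperature

    # Step 2: a softmax over these logits plays the role of an LM head's
    # word probability distribution; no new parameters are trained.
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Penalize high-perplexity (near-uniform) distributions via their
    # entropy; perplexity is exp(entropy), so driving entropy down also
    # drives perplexity down. (Assumed loss form, for illustration only.)
    entropy = -(probs * log_probs).sum(dim=-1)
    return entropy.mean()

# Toy usage: 16 learnable context vectors of dimension 512, with a small
# 1,000-word vocabulary standing in for CLIP's much larger one.
ctx = torch.randn(16, 512, requires_grad=True)
emb = torch.randn(1000, 512)           # frozen embedding-layer weight
loss = perplexity_loss(ctx, emb)       # would be added to the CLIP loss,
                                       # e.g. total = clip_loss + lam * loss
loss.backward()
```

In a CoOp-style training loop, this term would simply be added to the usual CLIP contrastive loss with a weighting coefficient, nudging the learned prompts toward vectors that look like plausible words to the frozen embedding layer.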

Keywords

» Artificial intelligence  » Classification  » Cosine similarity  » Embedding  » Language model  » Optimization  » Overfitting  » Perplexity  » Probability  » Prompt  » Regularization