Summary of Black-Box Forgetting, by Yusuke Kuwana et al.


Black-Box Forgetting

by Yusuke Kuwana, Yuta Goto, Takashi Shibata, Go Irie

First submitted to arXiv on: 1 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
Large-scale pre-trained models (PTMs) have shown remarkable zero-shot classification capabilities covering a wide range of object classes. However, practical applications often require recognizing only specific types of objects, and leaving the model able to recognize unnecessary classes can degrade overall accuracy and cause operational drawbacks. To address this, the authors study selective forgetting for PTMs, where the goal is to make the model unable to recognize specified classes while maintaining accuracy for the others. Existing methods assume a “white-box” setting, where model information is available for training, but in practice PTMs are often “black-box,” with only limited access to such information. This paper addresses the novel problem of selective forgetting for black-box models, proposing an approach that uses derivative-free optimization to tune the input prompt so that accuracy on the specified classes decreases. The method also introduces Latent Context Sharing, which shares common low-dimensional latent components among the tokens of the prompt, keeping the dimensionality of the optimization tractable. Experimental results on four standard benchmark datasets demonstrate the superiority of the method over reasonable baselines.
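To make the optimization loop described above more concrete, below is a minimal, runnable sketch in plain NumPy. It is not the authors’ implementation: the query_black_box function, the latent and embedding sizes, and the simple (mu, lambda) evolution strategy standing in for the derivative-free optimizer are all illustrative assumptions, and the shared-latent projection only mimics the spirit of Latent Context Sharing.

```python
# Hypothetical sketch of black-box selective forgetting via prompt optimization.
# All sizes, the optimizer, and query_black_box are placeholder assumptions.
import numpy as np

NUM_TOKENS = 8     # number of learnable prompt tokens (assumed)
LATENT_DIM = 16    # shared low-dimensional latent space (Latent Context Sharing, simplified)
EMBED_DIM = 512    # token embedding size expected by the black-box model (assumed)

rng = np.random.default_rng(0)

# Fixed random projections from the shared latent space to token embeddings.
# Each token has its own projection, but all tokens are driven by the same
# low-dimensional latent vector, loosely mirroring Latent Context Sharing.
projections = rng.normal(size=(NUM_TOKENS, LATENT_DIM, EMBED_DIM)) / np.sqrt(LATENT_DIM)

def latent_to_prompt(z):
    """Map a shared latent vector z of shape (LATENT_DIM,) to NUM_TOKENS prompt embeddings."""
    return np.einsum("d,tde->te", z, projections)

def query_black_box(prompt_embeddings):
    """Placeholder for querying the black-box pre-trained model.

    In practice this would send the prompt to the model's prediction API and
    return (accuracy on classes to forget, accuracy on classes to keep).
    Here it is faked with a smooth function of the prompt so the script runs.
    """
    s = float(np.tanh(prompt_embeddings.sum() / EMBED_DIM))
    forget_acc = 0.5 + 0.5 * s      # pretend accuracy on classes to forget
    keep_acc = 0.9 - 0.1 * abs(s)   # pretend accuracy on classes to keep
    return forget_acc, keep_acc

def loss(z):
    """Lower is better: low accuracy on forgotten classes, high accuracy on the rest."""
    forget_acc, keep_acc = query_black_box(latent_to_prompt(z))
    return forget_acc + (1.0 - keep_acc)

# Simple (mu, lambda) evolution strategy as a stand-in for a derivative-free optimizer.
mu, lam, sigma = 4, 16, 0.3
mean = np.zeros(LATENT_DIM)
for step in range(50):
    candidates = mean + sigma * rng.normal(size=(lam, LATENT_DIM))
    scores = np.array([loss(z) for z in candidates])
    elite = candidates[np.argsort(scores)[:mu]]   # keep the mu best candidates
    mean = elite.mean(axis=0)

print("final loss:", loss(mean))
```

In a real setting, query_black_box would call the deployed model’s prediction API and return accuracies measured on held-out images for the classes to forget and the classes to keep; only the low-dimensional latent vector is searched, which is what makes the black-box, gradient-free setting feasible.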
Low Difficulty Summary (written by GrooveSquid.com; original content)
Large-scale pre-trained models can recognize many types of objects without needing to learn about each one separately. However, this ability isn’t always useful and can actually make things worse when we only need to identify a few specific types of objects. To fix this, researchers are working on making a model forget how to recognize certain object classes while staying good at recognizing the others. This is called selective forgetting. Most methods for doing this assume that you have access to the inner workings of the model, but in reality many pre-trained models are “black-box” models where that information isn’t available. This paper shows a new way to make black-box models forget certain object classes, using an approach called derivative-free optimization that adjusts the text prompt given to the model.

Keywords

» Artificial intelligence  » Classification  » Optimization  » Prompt  » Zero shot