
Summary of Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection, by Chao Chen et al.


Optimal Parameter and Neuron Pruning for Out-of-Distribution Detection

by Chao Chen, Zhihang Fu, Kai Liu, Ze Chen, Mingyuan Tao, Jieping Ye

First submitted to arXiv on: 4 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com; original content)
The paper proposes Optimal Parameter and Neuron Pruning (OPNP), an approach for detecting out-of-distribution (OOD) samples from machine learning models deployed in real-world scenarios. The method identifies and removes the parameters and neurons that lead to over-fitting, and it is training-free: it relies only on prior information extracted from the training data. OPNP consists of two steps: (1) evaluating the sensitivity of each parameter and neuron by averaging gradients over all training samples, and (2) removing those whose sensitivities are exceptionally large or close to zero before making predictions. Experiments across multiple OOD detection tasks and model architectures show that OPNP consistently outperforms existing methods.
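The two steps described in the summary can be sketched in a few lines of NumPy. This is a hypothetical illustration, not the authors' code: the function names, the percentile thresholds, and the flattened-gradient representation are all assumptions made for clarity.

```python
import numpy as np

def average_sensitivity(per_sample_grads):
    """Step 1 (sketch): sensitivity = mean absolute gradient over all
    training samples. per_sample_grads has shape (num_samples, num_params)."""
    return np.abs(per_sample_grads).mean(axis=0)

def prune_mask(sensitivity, low_q=5, high_q=95):
    """Step 2 (sketch): keep only parameters whose sensitivity is neither
    close to zero nor exceptionally large. The percentile cutoffs here are
    illustrative placeholders, not values from the paper."""
    lo = np.percentile(sensitivity, low_q)
    hi = np.percentile(sensitivity, high_q)
    return (sensitivity > lo) & (sensitivity < hi)

# Toy usage: 100 training samples, a model with 10 parameters.
rng = np.random.default_rng(0)
grads = rng.standard_normal((100, 10))   # per-sample gradients
sens = average_sensitivity(grads)
mask = prune_mask(sens)
weights = rng.standard_normal(10)
pruned_weights = weights * mask          # zero out the flagged parameters
```

The pruned weights would then be used at inference time together with an OOD scoring function; since the sensitivities are computed once from training gradients, no retraining is involved, which matches the training-free claim in the summary.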
Low Difficulty Summary (written by GrooveSquid.com; original content)
The paper suggests a new way to help machines recognize when they’re seeing something they haven’t seen before. This matters because machines often become overconfident about their predictions, which can lead to bad decisions. The method, called Optimal Parameter and Neuron Pruning (OPNP), tries to figure out which parts of the machine’s “brain” cause that overconfidence. It does this by looking at how different parts respond to the training examples, then removes the parts that cause the most problems. The approach is appealing because it requires no extra training and can be combined with other methods already in use.

Keywords

* Artificial intelligence
* Machine learning
* Pruning