
Summary of RelChaNet: Neural Network Feature Selection Using Relative Change Scores, by Felix Zimmer


RelChaNet: Neural Network Feature Selection using Relative Change Scores

by Felix Zimmer

First submitted to arXiv on: 3 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
A novel feature selection algorithm called RelChaNet is introduced, which leverages neuron pruning and regrowth in the input layer of a dense neural network to improve interpretability, reduce computational resources, and minimize overfitting. The algorithm uses a gradient sum metric to measure the relative change induced in the network after a feature enters, while neurons are randomly regrown. An extension is proposed that adapts the size of the input layer at runtime. Extensive experiments on nine different datasets show that RelChaNet generally outperforms current state-of-the-art methods, achieving an average accuracy improvement of 2% on the MNIST dataset.
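The summary above describes a prune-and-regrow loop over input-layer neurons, scored by a gradient-based relative change metric, with pruned features replaced by randomly regrown candidates. Below is a minimal, hypothetical NumPy sketch of that loop; the function names, the exact score formula, and the swap size are illustrative assumptions and not the paper's actual implementation:

```python
import numpy as np

def relative_change_scores(grad_sums, weights, eps=1e-12):
    """Hypothetical per-feature score: accumulated input-layer gradient
    magnitude relative to current weight magnitude (a sketch of the
    'relative change' idea, not the paper's exact formula)."""
    return np.abs(grad_sums).sum(axis=1) / (np.abs(weights).sum(axis=1) + eps)

def prune_and_regrow(active, scores, n_swap, n_features, rng):
    """Drop the n_swap lowest-scoring active input features and regrow
    n_swap randomly chosen inactive ones (regrowth is random, as the
    summary states)."""
    active = list(active)
    order = np.argsort(scores)                    # ascending: worst first
    pruned = {active[i] for i in order[:n_swap]}  # features to remove
    keep = [f for f in active if f not in pruned]
    inactive = [f for f in range(n_features) if f not in active]
    regrown = [int(f) for f in rng.choice(inactive, size=n_swap, replace=False)]
    return sorted(keep + regrown)

# Toy usage: 8 candidate features, 4 currently active, swap out the worst 2.
rng = np.random.default_rng(0)
selected = prune_and_regrow([0, 1, 2, 3],
                            np.array([0.1, 5.0, 3.0, 0.2]),
                            n_swap=2, n_features=8, rng=rng)
```

In this toy run, features 0 and 3 (the lowest scores) are pruned, features 1 and 2 are kept, and two random features from {4, 5, 6, 7} enter the input layer for the next round.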
Low Difficulty Summary (written by GrooveSquid.com, original content)
RelChaNet is a new way to make neural networks better by choosing the right features. It borrows two ideas from sparse neural network training: removing (pruning) and adding (regrowing) neurons in the input layer. The algorithm measures how much each feature changes the network, prunes the features with the lowest scores, and randomly regrows new candidate neurons in their place. This helps reduce overfitting, makes the network more interpretable, and saves computational resources. Experiments on nine datasets show that RelChaNet works well, improving average accuracy on MNIST by 2%.

Keywords

» Artificial intelligence  » Feature selection  » Neural network  » Overfitting  » Pruning