Summary of Multi-layer Random Features and the Approximation Power of Neural Networks, by Rustem Takhanov


Multi-layer random features and the approximation power of neural networks

by Rustem Takhanov

First submitted to arXiv on: 26 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to read the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a novel neural architecture with randomly initialized weights whose outputs, in the infinite-width limit, converge to a Gaussian Random Field (GRF) whose covariance function is the Neural Network Gaussian Process (NNGP) kernel. The authors prove that the reproducing kernel Hilbert space (RKHS) defined by the NNGP kernel contains only functions that can be approximated by this architecture, and that the number of neurons required in each layer to reach a given approximation error is determined by the RKHS norm of the target function. They also show how to construct such an approximation from a supervised dataset by computing random multi-layer representations of the input vectors and training only the last layer’s weights.
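For context, the NNGP kernel of a fully connected network is usually defined by a layer-wise recursion over the activation function φ. The display below is the standard textbook form of that recursion, not necessarily the exact variant used in the paper:

$$K^{(\ell+1)}(x, x') = \sigma_w^2\, \mathbb{E}_{f \sim \mathcal{GP}(0, K^{(\ell)})}\!\left[\varphi(f(x))\, \varphi(f(x'))\right] + \sigma_b^2, \qquad K^{(0)}(x, x') = \sigma_w^2\, \frac{\langle x, x' \rangle}{d} + \sigma_b^2.$$

The last step of the summary, building an approximation from data with frozen random hidden layers and a trained readout, can be illustrated in code. The sketch below is a minimal interpretation of that idea rather than the paper’s exact algorithm: it assumes fully connected ReLU layers with He-scaled Gaussian weights and fits the last layer by ridge regression on a toy dataset; all names and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_random_layers(in_dim, widths, rng):
    # Frozen, randomly initialized hidden weights (He-style scaling);
    # these are never trained.
    dims = [in_dim] + list(widths)
    return [rng.normal(0.0, np.sqrt(2.0 / dims[i]), (dims[i], dims[i + 1]))
            for i in range(len(widths))]

def features(X, layers):
    # Random multi-layer ReLU representation of the input vectors.
    H = X
    for W in layers:
        H = np.maximum(H @ W, 0.0)
    return H

# Toy supervised dataset: noisy samples of sin(x).
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Train only the last layer's weights, here via closed-form ridge regression.
layers = init_random_layers(in_dim=1, widths=(256, 256), rng=rng)
Phi = features(X, layers)
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

# Evaluate on a held-out grid.
X_test = np.linspace(-3.0, 3.0, 100)[:, None]
test_mse = np.mean((features(X_test, layers) @ w - np.sin(X_test[:, 0])) ** 2)
print(f"test MSE: {test_mse:.4f}")
```

In this construction only w is learned; widening the frozen hidden layers enriches the random representation, which mirrors the paper’s result that the layer widths needed for a given approximation error scale with the RKHS norm of the target function.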
Low Difficulty Summary (original content by GrooveSquid.com)
A new kind of neural network is studied. With random weights, this network behaves like a smooth mathematical field, and it can get very close to target functions from a certain well-behaved class if it has enough “neurons” (small parts of the network). The researchers show how many neurons are needed to get close to a specific target function, and they also explain how the network can be trained from data by adjusting only its final layer. This work is important for developing new ways to use neural networks.

Keywords

» Artificial intelligence  » Neural network  » Supervised