Training the Untrainable: Introducing Inductive Bias via Representational Alignment

by Vighnesh Subramaniam, David Mayo, Colin Conwell, Tomaso Poggio, Boris Katz, Brian Cheung, Andrei Barbu

First submitted to arXiv on: 26 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper, each written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract, available via the arXiv listing above.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper demonstrates that architectures traditionally considered unsuitable for a task can be trained using inductive biases from another architecture. The authors introduce guidance, in which a guide network steers a target network via a neural distance function: the target is optimized both to perform well on its task and to match its internal representations to those of the guide. This lets investigators explore the kinds of priors different architectures place on otherwise untrainable networks; a minimal sketch of the guidance objective appears after these summaries. The paper shows that this method can overcome overfitting in fully connected networks, make plain CNNs competitive with ResNets, close the gap between vanilla RNNs and Transformers, and even help Transformers learn tasks that come more easily to RNNs. The authors also uncover evidence that better initializations for fully connected networks could avoid overfitting.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper shows that older neural network architectures can be trained well using ideas borrowed from newer ones. Instead of changing the architecture entirely, you can use an “expert” network to guide a less powerful one, helping the weaker network perform well while matching its internal representations to those of the expert. The authors tested this method on several types of networks and found that it can make older architectures competitive with new ones. They also discovered that there may be better ways to start training fully connected networks so they don’t overfit.

Keywords

  • Artificial intelligence
  • Overfitting