Learning on Model Weights using Tree Experts
by Eliahu Horwitz, Bar Cavia, Jonathan Kahana, Yedid Hoshen
First submitted to arXiv on: 17 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper explores the possibility of training neural networks that utilize other networks as input, allowing for the study of different aspects of a given network. The authors identify a key property of real-world models: most public models belong to a small set of Model Trees, where all models within a tree are fine-tuned from a common ancestor. This property is leveraged to develop Probing Experts (ProbeX), a lightweight and theoretically motivated method for learning from the weights of a single hidden model layer. ProbeX is demonstrated to be effective in predicting the categories in a model’s training dataset based only on its weights, as well as mapping the weights of Stable Diffusion into a shared weight-language embedding space for zero-shot model classification. |
| Low | GrooveSquid.com (original content) | The paper looks at how neural networks can use other networks as input. This helps us learn more about different aspects of a network. They find that most real-world models come from a few “Model Trees”, where all the models in one tree are similar because they came from the same starting point. Using this, they create Probing Experts (ProbeX), which is a simple and smart way to learn from just one layer of another model’s weights. This helps us figure out what categories a model was trained on, and even map the weights of other models into a shared space. |
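The probing idea described above can be illustrated with a toy sketch: feed learned probe inputs through a single layer's weight matrix and classify the responses. This is only a minimal illustration of the general concept, not the paper's actual ProbeX architecture; all names (`probes`, `head`, the dimensions) and the random data are assumptions made up for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, n_probes, n_classes = 64, 32, 8, 5

# Hypothetical hidden-layer weight matrix from some fine-tuned model
# (random here; in practice it would come from a real checkpoint).
W = rng.normal(size=(d_out, d_in))

# Learned probe inputs: each probe is passed through the layer's weights,
# and its response vector acts as a summary of that layer.
probes = rng.normal(size=(n_probes, d_in))
responses = probes @ W.T  # shape (n_probes, d_out)

# A linear head maps the flattened probe responses to class logits,
# e.g. scores over categories that may have appeared in the model's
# training data (random weights here, standing in for a trained head).
head = rng.normal(size=(n_probes * d_out, n_classes))
logits = responses.reshape(-1) @ head

predicted = int(np.argmax(logits))
```

In this sketch only the probe responses, not the full weight matrix, reach the classifier, which mirrors why such an approach can stay lightweight: the probes compress a large layer into a small, fixed-size summary.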
Keywords
» Artificial intelligence » Classification » Diffusion » Embedding space » Zero shot