Tensor network compressibility of convolutional models
by Sukhbinder Singh, Saeed S. Jahromi, Roman Orus
First submitted to arXiv on 21 Mar 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG); Quantum Physics (quant-ph)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper explores the impact of “tensorization” on the performance of convolutional neural networks (CNNs) in computer vision tasks. Specifically, it investigates how truncating the convolution kernels of dense CNNs affects their accuracy. The authors find that kernels can often be truncated along several cuts without compromising classification accuracy, suggesting an intrinsic feature of information encoding in dense CNNs. This “correlation compression” enables more effective compression and tensorization of CNN models. |
| Low | GrooveSquid.com (original content) | The paper studies how to shrink convolutional neural networks (CNNs) while preserving their performance on computer vision tasks. It looks at what happens when you shorten, or “truncate,” the special filters, called kernels, that help CNNs recognize objects. The researchers found that these filters can often be cut shorter without hurting the network’s ability to correctly classify images. This is an important discovery because it means we might be able to make AI models smaller and faster without sacrificing accuracy. |
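To get a feel for the kind of truncation discussed above, here is a minimal sketch (not the paper's exact procedure) of compressing a dense convolution kernel: matricize the kernel along one "cut" and discard its smallest singular values. The shapes, the chosen rank, and the random kernel are all illustrative assumptions.

```python
# Hypothetical sketch: low-rank truncation of a convolution kernel along one cut.
import numpy as np

rng = np.random.default_rng(0)

# A toy 4-way convolution kernel: (out_channels, in_channels, height, width).
kernel = rng.standard_normal((16, 8, 3, 3))

# "Cut" the kernel between the output-channel leg and the remaining legs,
# i.e. matricize to shape (16, 8*3*3), then truncate via SVD.
mat = kernel.reshape(16, -1)
u, s, vt = np.linalg.svd(mat, full_matrices=False)

rank = 8  # keep only the 8 largest singular values (an arbitrary choice here)
approx = (u[:, :rank] * s[:rank]) @ vt[:rank, :]

# The relative reconstruction error indicates how compressible this cut is:
# a small error at low rank means the kernel's correlations are compressible.
rel_err = np.linalg.norm(mat - approx) / np.linalg.norm(mat)
print(f"relative error at rank {rank}: {rel_err:.3f}")
```

In a trained CNN, the interesting finding is that such truncations along several cuts can leave classification accuracy essentially unchanged, even when the random-matrix error above would be substantial.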
Keywords
* Artificial intelligence * Classification * CNN