Efficient Neural Compression with Inference-time Decoding
by C. Metz, O. Bichler, A. Dupret
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper's original abstract, available on arXiv. |
Medium | GrooveSquid.com (original content) | The paper investigates combining neural network quantization with entropy coding to shrink the memory footprint of model weights. By pairing mixed-precision quantization with zero-point quantization and entropy coding, the authors keep the accuracy drop below 1% on the ImageNet benchmark while pushing compression beyond the 1-bit-per-weight frontier. The approach relies on a compact decoder architecture that keeps decoding latency low enough to be compatible with inference. |
Low | GrooveSquid.com (original content) | This paper combines two techniques to make neural networks use less memory: quantization and entropy coding. Quantization makes networks take up less space, but it can also make them work worse if the bitwidth is too low. The authors get around this with mixed-precision quantization, which lets them choose how many bits to use for each part of the network. They then add zero-point quantization and entropy coding to compress the weights even further. The result is networks that work just as well while taking up much less space. |
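To make the two ingredients above concrete, here is a minimal sketch (not the paper's implementation; function names and the synthetic weight distribution are illustrative assumptions). It applies asymmetric "zero-point" uniform quantization to a weight vector, then uses the Shannon entropy of the resulting integer codes to estimate how many bits per weight an entropy coder could reach, typically below the nominal bitwidth when the weights are bell-shaped.

```python
# Illustrative sketch only: zero-point quantization + entropy estimate.
import math
import random
from collections import Counter

def quantize_zero_point(weights, bits):
    """Map floats to integers in [0, 2**bits - 1] via a scale and zero-point."""
    lo, hi = min(weights), max(weights)
    qmax = 2 ** bits - 1
    scale = (hi - lo) / qmax if hi > lo else 1.0
    zero_point = round(-lo / scale)  # integer offset so lo maps near code 0
    q = [max(0, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def entropy_bits_per_symbol(symbols):
    """Shannon entropy: lower bound on an entropy coder's bits/symbol."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

random.seed(0)
# Bell-shaped weights (typical of trained layers) concentrate mass in a few
# central codes, so the entropy falls below the nominal 4 bits/weight.
weights = [random.gauss(0.0, 0.02) for _ in range(10_000)]
q, scale, zp = quantize_zero_point(weights, bits=4)
print(f"nominal: 4 bits/weight, entropy: {entropy_bits_per_symbol(q):.2f} bits/weight")
```

The gap between the nominal bitwidth and the entropy is exactly what entropy coding exploits; the paper's contribution is making that decoding step cheap enough to run at inference time.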
Keywords
» Artificial intelligence » Decoder » Inference » Neural network » Precision » Quantization