Efficiera Residual Networks: Hardware-Friendly Fully Binary Weight with 2-bit Activation Model Achieves Practical ImageNet Accuracy

by Shuntaro Takahashi, Takuya Wakisaka, Hiroyuki Tokunaga

First submitted to arXiv on: 15 Oct 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces Efficiera Residual Networks (ERNs), a deep neural network model optimized for low-resource edge devices. To cope with severe resource limitations, ERNs are fully quantized at ultra-low bit widths, using binary weights and 2-bit activations. A shared constant scaling factor technique enables integer-valued computation in the residual connections, allowing the model to operate without floating-point values until the final convolution layer (a toy sketch of this idea follows the summaries below). The paper demonstrates competitive accuracy, reaching an ImageNet top-1 accuracy of 72.5 points with a ResNet50-compatible architecture and 63.6 points with a model size under 1 MB. ERNs also achieve fast inference, running at 300 FPS on a cost-efficient FPGA device.

Low Difficulty Summary (original content by GrooveSquid.com)
ERNs are a new kind of deep neural network that’s perfect for devices like smartphones or smart home cameras. These devices don’t have enough power to run big models, so we had to make some changes. We used something called ultra-low-bit quantization to make the model smaller and more energy-efficient. This lets our model work on those tiny devices without getting too hot or draining the battery. We also used a special trick that helps our model do math with whole numbers instead of decimals, which makes it run even faster! Our ERNs can recognize pictures just as well as other models, but they’re much smaller and use less energy. That’s good news for devices that need to be power-efficient.
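
To make the quantization scheme concrete, the sketch below shows binary weights and 2-bit activations combined with a single shared scaling factor on a residual connection, so the skip addition stays in integer arithmetic. This is a minimal illustration under assumed details, not the authors’ implementation: the function names, the mean-absolute-value weight scale, and the unsigned 2-bit range {0..3} are common conventions borrowed for the example, and the paper’s actual choices may differ.

```python
import numpy as np

def binarize_weights(w):
    # Binary weights: sign(w) in {-1, +1} plus one float scale per layer.
    # Mean absolute value is a common scale choice (an assumption here).
    scale = np.abs(w).mean()
    return np.sign(np.where(w == 0, 1.0, w)), scale

def quantize_act_2bit(x, shared_scale):
    # 2-bit unsigned activations: divide by the scaling factor shared by
    # both branches of the residual, round, and clip to {0, 1, 2, 3}.
    return np.clip(np.round(x / shared_scale), 0, 3).astype(np.int32)

def residual_block(x_q, w1, w2, shared_scale):
    # Toy fully connected "residual block". Because the skip path and the
    # main path are quantized with the same constant scale, the final
    # addition is an exact integer operation (a real model would
    # requantize the sum before the next layer).
    b1, s1 = binarize_weights(w1)
    h = x_q.astype(np.float32) @ b1              # {-1,+1} weight matmul
    h = quantize_act_2bit(h * s1, shared_scale)  # requantize to 2 bits
    b2, s2 = binarize_weights(w2)
    y = quantize_act_2bit((h.astype(np.float32) @ b2) * s2, shared_scale)
    return x_q + y                               # integer residual add

rng = np.random.default_rng(0)
x = quantize_act_2bit(rng.random(8), shared_scale=0.25)
print(residual_block(x, rng.standard_normal((8, 8)),
                     rng.standard_normal((8, 8)), shared_scale=0.25))
```

On the FPGA targets the paper mentions, a {-1, +1} matrix multiply would typically be realized with XNOR/popcount-style integer logic rather than the float matmul used here, which is only a shorthand to keep the sketch small.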

Keywords

» Artificial intelligence  » Inference  » Neural network  » Quantization