Summary of A Cost-Efficient FPGA Implementation of Tiny Transformer Model Using Neural ODE, by Ikumi Okubo et al.
A Cost-Efficient FPGA Implementation of Tiny Transformer Model using Neural ODE
by Ikumi Okubo, Keisuke Sugiura, Hiroki Matsutani
First submitted to arXiv on: 5 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Hardware Architecture (cs.AR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | High Difficulty Summary Read the original abstract here
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary This research proposes a lightweight hybrid transformer model that addresses the high training cost and computational complexity of conventional transformers. By replacing ResNet with a Neural ODE as the backbone, the model can increase the number of iterations while reusing the same parameters, so the parameter count barely grows per iteration. The model is deployed on an FPGA device for edge computing and further quantized with quantization-aware training (QAT) to reduce resource utilization while suppressing accuracy loss. The quantized model achieves 79.68% top-1 accuracy on the STL10 dataset. Eliminating memory transfer overhead lets inference run seamlessly, accelerating it further. The proposed FPGA implementation speeds up the backbone and MHSA parts by 34.01x and achieves an overall 9.85x speedup.
Low | GrooveSquid.com (original content) | Low Difficulty Summary This paper proposes a new way to make transformers work better on small devices like smartphones or cameras. Transformers are good at recognizing images, but they can be slow and use a lot of energy. The researchers changed how the transformer works and used special computer chips (FPGAs) that can do many calculations quickly. This makes image recognition faster and less power-hungry. The new method performs well on small images and could be extended to larger ones.
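The parameter-reuse idea behind the Neural ODE backbone can be sketched as an Euler-discretized ODE block. This is a generic illustration, not the paper's exact architecture: the function `f`, the weight matrix `W`, and the step count are hypothetical placeholders. The key property it shows is that adding more iterations (depth) reuses the same weights, unlike a ResNet, where each residual block adds its own parameters.

```python
import numpy as np

def f(h, W):
    """Shared residual function: one weight matrix reused at every step."""
    return np.tanh(h @ W)

def ode_block(h, W, num_steps=8):
    # Euler discretization of dh/dt = f(h, W):
    #   h_{t+1} = h_t + dt * f(h_t, W)
    # Increasing num_steps deepens the computation without adding
    # any new parameters, because the same W is used at every step.
    dt = 1.0 / num_steps
    for _ in range(num_steps):
        h = h + dt * f(h, W)
    return h

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) * 0.1  # the only learnable weights
h = rng.standard_normal((1, 16))         # toy input feature vector

out = ode_block(h, W, num_steps=8)
print(out.shape)  # (1, 16); parameter count stays W.size regardless of steps
```

Doubling `num_steps` here doubles the compute but leaves the parameter count fixed at `W.size`, which is the trade-off the summary describes as "increasing the number of iterations while reusing parameters".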
Keywords
* Artificial intelligence
* Inference
* ResNet
* Transformer