Summary of Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks, by Lin Zuo, Yongqi Ding, Mengmeng Jing, Kunshan Yang, Yunqian Yu
Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks
by Lin Zuo, Yongqi Ding, Mengmeng Jing, Kunshan Yang, Yunqian Yu
First submitted to arXiv on: 12 Jun 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV); Neural and Evolutionary Computing (cs.NE)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | Knowledge distillation (KD) can improve the performance of Spiking Neural Networks (SNNs), but it requires significant computational resources and a manually defined teacher network architecture. To avoid these costs, the authors explore cost-effective self-distillation learning for SNNs, in which the network generates its own pseudo-labels and learns consistency during training (a hedged code sketch of this consistency idea follows the table). They propose a temporal-spatial self-distillation (TSSD) learning method that introduces no inference overhead and generalizes well. They validate the superior performance of TSSD on the static CIFAR10/100 and ImageNet datasets and the neuromorphic CIFAR10-DVS and DVS-Gesture datasets. |
Low | GrooveSquid.com (original content) | SNNs are a special kind of artificial intelligence that works more like our brains. They use energy efficiently and are easier to interpret. Researchers have made SNNs better by having them learn from a separate "teacher" model, but this takes a lot of computing power, and someone has to decide what the teacher should look like. To avoid that, the authors let the SNN teach itself: during training it creates its own practice answers and checks that its answers agree with each other. Experiments show that this self-teaching approach makes SNNs noticeably more accurate without making them any slower to use. |
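The summaries above describe the method only at a high level: the network produces its own pseudo-labels and is trained for consistency, at no extra cost at inference time. Below is a minimal sketch of that general idea, not the authors' implementation. The assumption that the pseudo-teacher view comes from averaging the same SNN's outputs over more simulation timesteps, the KL-based consistency term, and all function and variable names are hypothetical.

```python
# Hedged sketch of self-distillation via output consistency (illustrative only).
# Assumption: logits_strong comes from a "stronger" view of the same SNN
# (e.g., averaged over more simulation timesteps) and acts as a pseudo-teacher,
# while logits_weak is the ordinary training view. Names are hypothetical.
import torch
import torch.nn.functional as F

def self_distillation_loss(logits_weak, logits_strong, labels, alpha=0.5, tau=2.0):
    """Cross-entropy on the weak view plus a KL consistency term toward the strong view."""
    ce = F.cross_entropy(logits_weak, labels)
    # Soft pseudo-labels from the stronger view; detach so no gradient flows into it.
    teacher = F.softmax(logits_strong.detach() / tau, dim=1)
    student = F.log_softmax(logits_weak / tau, dim=1)
    kl = F.kl_div(student, teacher, reduction="batchmean") * (tau ** 2)
    return (1.0 - alpha) * ce + alpha * kl

# Toy usage with random tensors (batch of 8, 10 classes).
logits_weak = torch.randn(8, 10, requires_grad=True)
logits_strong = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = self_distillation_loss(logits_weak, logits_strong, labels)
loss.backward()
```

Because the auxiliary view is used only to compute the training loss, the deployed network runs exactly as before, which matches the summaries' claim that the method adds no inference overhead.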
Keywords
» Artificial intelligence » Distillation » Generalization » Inference » Knowledge distillation » Teacher model