Summary of Efficient Speech Command Recognition Leveraging Spiking Neural Network and Curriculum Learning-based Knowledge Distillation, by Jiaqi Wang et al.


Efficient Speech Command Recognition Leveraging Spiking Neural Network and Curriculum Learning-based Knowledge Distillation

by Jiaqi Wang, Liutao Yu, Liwei Huang, Chenlin Zhou, Han Zhang, Zhenxi Song, Min Zhang, Zhengyu Ma, Zhiguo Zhang

First submitted to arXiv on: 17 Dec 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (GrooveSquid.com original content)
The abstract discusses the advantages of spiking neural networks (SNNs) in processing temporal information, since they naturally handle sequences as a series of discrete time steps. Recent studies have demonstrated SNNs' effectiveness in speech command recognition, but the large number of time steps they require increases the deployment burden in edge computing applications. To balance high performance with low energy consumption, the authors propose a fully spike-driven framework called SpikeSCR, which exhibits long-term learning capabilities over extended time steps. They also propose an effective knowledge distillation method based on curriculum learning (KDCL) to further reduce energy consumption while maintaining high performance. The method is evaluated on three benchmark datasets, where SpikeSCR outperforms current state-of-the-art (SOTA) methods at the same number of time steps. Furthermore, KDCL reduces the number of time steps by 60% and energy consumption by 54.8% while maintaining performance comparable to recent SOTA results.
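
The summary does not spell out how KDCL works internally, so the snippet below is only a minimal sketch of what curriculum-learning-based knowledge distillation over time steps might look like in PyTorch: a student SNN is distilled from a teacher while its number of time steps shrinks stage by stage. The TeacherSNN/StudentSNN forward signature, the step schedule, and the loss weighting are all assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only: curriculum-style knowledge distillation in which a
# student SNN is trained with progressively fewer time steps than its teacher.
# The model interface, schedule, and loss weights are assumptions, not KDCL itself.
import torch
import torch.nn.functional as F

def distill_with_curriculum(teacher, student, loader, optimizer,
                            step_schedule=(10, 8, 6, 4), teacher_steps=10,
                            epochs_per_stage=5, alpha=0.5, tau=2.0):
    teacher.eval()
    for student_steps in step_schedule:          # curriculum: shrink time steps stage by stage
        for _ in range(epochs_per_stage):
            for x, y in loader:
                with torch.no_grad():
                    t_logits = teacher(x, num_steps=teacher_steps)   # assumed forward signature
                s_logits = student(x, num_steps=student_steps)
                # soft-target distillation loss: KL between temperature-softened distributions
                kd = F.kl_div(F.log_softmax(s_logits / tau, dim=-1),
                              F.softmax(t_logits / tau, dim=-1),
                              reduction="batchmean") * tau * tau
                ce = F.cross_entropy(s_logits, y)                    # hard-label loss
                loss = alpha * kd + (1.0 - alpha) * ce
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
    return student
```

The idea of the curriculum here is simply to move from an easy setting (many time steps, close to the teacher) toward the harder target setting (few time steps) gradually, rather than distilling directly into the low-step regime.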
Low Difficulty Summary (GrooveSquid.com original content)
Spiking neural networks are great at processing information that changes over time because they naturally handle it as a sequence of "time steps". This helps them recognize speech commands really well, but using many time steps makes them harder to run on devices like smartphones, where battery life matters. The authors came up with a new way to build these networks, called SpikeSCR, which combines global and local structures to learn information efficiently. They also developed a way to "distill" knowledge from one network into another, which cuts the energy needed to run the network while still getting good results.
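
To make the "time steps" idea concrete, here is a small, self-contained sketch of a leaky integrate-and-fire (LIF) neuron unrolled over T time steps; it is a generic textbook-style example, not the neuron model or parameters used in SpikeSCR. The per-inference work grows with T, which is the intuition behind cutting time steps to save energy.

```python
# Minimal LIF neuron sketch: work scales linearly with the number of time steps T.
# Parameter values are illustrative, not taken from the paper.
import numpy as np

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """inputs: array of shape (T, n) -- one input current per neuron per time step."""
    T, n = inputs.shape
    membrane = np.zeros(n)
    spikes = np.zeros((T, n))
    for t in range(T):                               # one pass per time step
        membrane = beta * membrane + inputs[t]       # leaky integration
        fired = membrane >= threshold                # spike when threshold is crossed
        spikes[t] = fired.astype(float)
        membrane = np.where(fired, membrane - threshold, membrane)  # soft reset
    return spikes

# Halving T roughly halves the per-inference computation.
spikes = lif_forward(np.random.rand(10, 4) * 0.6)
print(spikes.sum(axis=0))   # spike count per neuron over 10 time steps
```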

Keywords

* Artificial intelligence  * Curriculum learning  * Knowledge distillation