SpikeLLM: Scaling up Spiking Neural Network to Large Language Models via Saliency-based Spiking

by Xingrun Xing, Boyan Gao, Zheng Zhang, David A. Clifton, Shitao Xiao, Li Du, Guoqi Li, Jiajun Zhang

First submitted to arXiv on: 5 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL); Neural and Evolutionary Computing (cs.NE)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper, each written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The researchers propose a new type of large language model (LLM) that uses bio-plausible spiking mechanisms to mimic the energy-efficient behavior of the human brain. The proposed model, called SpikeLLM, is designed for LLMs with 7-70 billion parameters and aims to reduce the computational resources required for inference. To achieve this, the researchers introduce two new approaches: Generalized Integrate-and-Fire (GIF) neurons that compress spike length, and an Optimal Brain Spiking framework that allocates different timing constants for GIF neurons (a toy sketch of the underlying integrate-and-fire idea follows these summaries). The effectiveness of SpikeLLM is demonstrated through comparisons with quantized LLMs, showing improved perplexity and accuracy on specific tasks.

Low Difficulty Summary (original content by GrooveSquid.com)
The researchers created a new kind of large language model that works like our brains do. They made it more efficient by using special kinds of “spikes” instead of regular calculations. This helps the model use less energy and computing power. They tested their new model, called SpikeLLM, on some big datasets and found that it did better than other models at certain tasks.

Keywords

  • Artificial intelligence
  • Inference
  • Large language model
  • Perplexity