


Heterogeneous Federated Learning with Convolutional and Spiking Neural Networks

by Yingchao Yu, Yuping Yan, Jisong Cai, Yaochu Jin

First submitted to arXiv on: 14 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same paper but are written at different levels of difficulty: the medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
In this paper, the researchers tackle federated learning (FL) in settings where different edge devices run different types of artificial intelligence models. Most existing FL systems assume that every client trains the same model architecture, but in practice devices may employ both conventional artificial neural networks (ANNs) and biologically inspired spiking neural networks (SNNs); this diversity enables efficient task handling and reflects the adaptability of edge computing platforms. The main hurdle is aggregating such heterogeneous local models in a privacy-preserving manner. To address it, the authors compare various aggregation approaches for FL systems that contain both convolutional neural networks (CNNs) and SNNs, including a CNN-SNN fusion framework; a sketch of one possible aggregation scheme follows this summary. Experimental results show that the CNN-SNN fusion framework performs best on the MNIST dataset, and a competitive suppression phenomenon is observed between the two model types during convergence.
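The paper's specific aggregation methods are not spelled out in this summary, so the following is only a minimal, hypothetical sketch of one plausible approach: federated averaging performed separately within each architecture family, since CNN and SNN weight tensors cannot be averaged with each other directly. All names here (fedavg, aggregate_by_architecture, the 'cnn'/'snn' tags) are illustrative assumptions, not the authors' API.

```python
# Hypothetical sketch: per-architecture federated averaging.
# Assumption: each client reports (architecture tag, state dict, sample count);
# weighting by sample count is the standard FedAvg heuristic, not something
# taken from this paper.
from collections import defaultdict

def fedavg(states, weights):
    """Weighted average of state dicts that share identical keys and shapes."""
    total = sum(weights)
    return {
        key: sum(w * s[key] for s, w in zip(states, weights)) / total
        for key in states[0]
    }

def aggregate_by_architecture(client_updates):
    """Group client updates by architecture tag and average within each group.

    client_updates: list of (arch_tag, state_dict, num_samples) tuples.
    Returns one aggregated state dict per architecture, because parameters
    from structurally different models (e.g. a CNN and an SNN) are not
    directly compatible for element-wise averaging.
    """
    groups = defaultdict(lambda: ([], []))
    for arch, state, n in client_updates:
        groups[arch][0].append(state)
        groups[arch][1].append(n)
    return {arch: fedavg(states, ns) for arch, (states, ns) in groups.items()}

if __name__ == "__main__":
    # Toy scalar "weights" keep the demo dependency-free; real state dicts
    # would hold tensors, and the same arithmetic applies element-wise.
    updates = [
        ("cnn", {"w": 1.0}, 60),
        ("cnn", {"w": 3.0}, 20),
        ("snn", {"w": 0.5}, 40),
    ]
    print(aggregate_by_architecture(updates))  # {'cnn': {'w': 1.5}, 'snn': {'w': 0.5}}
```

Under this scheme, privacy is preserved in the usual FL sense (only model parameters leave each device); how the paper additionally fuses the CNN and SNN branches is not recoverable from the summary alone.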
Low Difficulty Summary (original content by GrooveSquid.com)
Federated learning is a way to train AI models across many devices without sharing their private data. Right now, most systems assume all devices use the same type of model, but what if they don’t? What if some devices use one kind of model and others use another? This paper explores that question by comparing different ways of combining such models while keeping each device’s data safe. The authors test these combinations on a dataset called MNIST and find that one combination works better than the others. It’s like a puzzle, where all the pieces fit together just right!

Keywords

  • Artificial intelligence
  • CNN
  • Federated learning