


When Foresight Pruning Meets Zeroth-Order Optimization: Efficient Federated Learning for Low-Memory Devices

by Pengyu Zhang, Yingjie Liu, Yingbo Zhou, Xiao Du, Xian Wei, Ting Wang, Mingsong Chen

First submitted to arXiv on: 8 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Distributed, Parallel, and Cluster Computing (cs.DC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper tackles the limitations of Federated Learning (FL) on low-memory Artificial Intelligence of Things (AIoT) devices by proposing a novel federated foresight pruning method based on the Neural Tangent Kernel (NTK). Existing federated pruning methods struggle to reduce the memory burden of training and inference. In contrast, the proposed approach integrates seamlessly with federated backpropagation-free (BP-Free) training frameworks, reducing floating-point operations (FLOPs) while preserving performance. By combining local NTK matrices, the method approximates the federated NTK computation, allowing it to scale up and relieve memory pressure during both training and inference. Comprehensive experimental results from simulation- and real-test-bed-based platforms demonstrate a memory reduction of up to 9x for dense models, and improved performance with dramatically fewer FLOPs than the vanilla BP-Free method.
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making Artificial Intelligence (AI) work better on devices that don’t have much memory. AI models can currently learn together in groups using Federated Learning, but this doesn’t work well on low-memory devices because training uses too much memory. Researchers have proposed ways to reduce the memory needed during learning, but these methods aren’t perfect and still use too much memory or computation. The authors propose a new way to cut memory usage that works well on low-memory devices and improves performance while using fewer calculations.
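The BP-Free training that the paper builds on relies on zeroth-order optimization, which estimates gradients from forward evaluations of the loss alone, so no activations need to be kept in memory for backpropagation. The sketch below is illustrative only, not the authors' implementation: the function name, hyperparameters, and toy loss are assumptions, chosen to show a simultaneous-perturbation estimator in its simplest form.

```python
import numpy as np

def spsa_gradient(loss_fn, w, mu=1e-3, num_dirs=8, rng=None):
    """Zeroth-order (BP-free) gradient estimate: probe the loss along
    random directions using forward passes only. Hypothetical helper,
    not from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    grad = np.zeros_like(w)
    for _ in range(num_dirs):
        z = rng.standard_normal(w.shape)  # random probe direction
        # central finite difference of the loss along z
        slope = (loss_fn(w + mu * z) - loss_fn(w - mu * z)) / (2.0 * mu)
        grad += slope * z
    return grad / num_dirs

# Toy quadratic loss with minimum at w* = [1, -2].
target = np.array([1.0, -2.0])
loss = lambda w: float(np.sum((w - target) ** 2))

rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    w -= 0.1 * spsa_gradient(loss, w, rng=rng)

print(loss(w))  # close to 0: the estimator drives w toward w*
```

Because each update needs only two loss evaluations per probe direction, the per-step memory footprint stays at the size of the parameters themselves, which is the property that makes this style of training attractive on low-memory AIoT devices.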

Keywords

» Artificial intelligence  » Backpropagation  » Federated learning  » Inference  » Pruning