Summary of EFTViT: Efficient Federated Training of Vision Transformers with Masked Images on Resource-Constrained Edge Devices, by Meihan Wu et al.


EFTViT: Efficient Federated Training of Vision Transformers with Masked Images on Resource-Constrained Edge Devices

by Meihan Wu, Tao Chang, Cui Miao, Jie Zhou, Chun Li, Xiangyu Xu, Ming Li, Xiaodong Wang

First submitted to arXiv on: 30 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper proposes a hierarchical federated framework called EFTViT for efficiently training Vision Transformers (ViTs) on resource-constrained edge devices. The framework leverages masked images to enable full-parameter training, offering benefits for learning on heterogeneous data. It consists of lightweight local modules and a larger global module, updated independently on clients and the central server. The authors analyze the computational complexity and privacy protection of EFTViT and demonstrate its effectiveness on popular benchmarks, achieving up to 28.17% accuracy improvement, reducing computation costs by up to 2.8 times, and cutting training time by up to 4.4 times compared to existing methods.
Low Difficulty Summary (written by GrooveSquid.com, original content)
EFTViT is a new way to train Vision Transformers (ViTs) on devices with limited power. This helps when you have lots of data from different sources, like pictures taken by many people. The paper shows how EFTViT works and why it’s better than other methods. It also compares its results to what others have done before.
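
The client/server split described in the medium-difficulty summary above can be illustrated with a short PyTorch sketch. It is only a rough illustration under assumed details: the masking ratio, layer counts, and the names LocalModule and GlobalModule are hypothetical placeholders, and the feature hand-off shown is simplified rather than the authors' published implementation.

```python
# Illustrative sketch: a lightweight client-side module trained on masked
# patch tokens, and a larger server-side module trained on uploaded features.
# All sizes and the masking ratio are assumptions, not EFTViT's exact design.
import torch
import torch.nn as nn


class LocalModule(nn.Module):
    """Lightweight client-side encoder operating on masked patch tokens."""
    def __init__(self, patch=16, dim=192, depth=2):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(dim, nhead=3, dim_feedforward=dim * 4,
                                       batch_first=True)
            for _ in range(depth)
        ])

    def forward(self, images, mask_ratio=0.75):
        # Patchify: (B, 3, H, W) -> (B, N, dim)
        tokens = self.patch_embed(images).flatten(2).transpose(1, 2)
        # Randomly keep a subset of patches; masking shortens the token
        # sequence and hence cuts the client-side compute cost.
        B, N, D = tokens.shape
        keep = int(N * (1 - mask_ratio))
        idx = torch.argsort(torch.rand(B, N, device=tokens.device), dim=1)[:, :keep]
        tokens = torch.gather(tokens, 1, idx.unsqueeze(-1).expand(-1, -1, D))
        for blk in self.blocks:
            tokens = blk(tokens)
        return tokens


class GlobalModule(nn.Module):
    """Larger server-side transformer trained on client-uploaded features."""
    def __init__(self, dim=192, depth=8, num_classes=10):
        super().__init__()
        self.blocks = nn.ModuleList([
            nn.TransformerEncoderLayer(dim, nhead=3, dim_feedforward=dim * 4,
                                       batch_first=True)
            for _ in range(depth)
        ])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, features):
        for blk in self.blocks:
            features = blk(features)
        return self.head(features.mean(dim=1))


if __name__ == "__main__":
    local, server = LocalModule(), GlobalModule()
    images = torch.randn(4, 3, 224, 224)   # one client's mini-batch
    feats = local(images)                   # client-side pass on masked patches
    logits = server(feats.detach())         # server-side pass, updated independently
    print(feats.shape, logits.shape)        # (4, 49, 192) (4, 10)
```

In this sketch the client only processes the unmasked quarter of the patch tokens, which is one plausible way to read the paper's claim that masked images lower local computation while still allowing the full model (local plus global parameters) to be trained.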

Keywords

» Artificial intelligence