

Federated Class-Incremental Learning with Hierarchical Generative Prototypes

by Riccardo Salami, Pietro Buzzega, Matteo Mosconi, Mattia Verasani, Simone Calderara

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high-difficulty version is the paper's original abstract, available on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper focuses on Federated Continual Learning (FCL), a subfield of Federated Learning (FL) in which models must adapt to data distributions that change over time. The authors identify two key obstacles in FCL: Incremental Bias, which pushes models to favor recently introduced classes, and Federated Bias, which pushes them to favor classes that dominate a client's local data. To mitigate both biases, the method fine-tunes a pre-trained backbone with learnable prompts, yielding clients that produce less biased representations and more biased classifiers; the authors then leverage generative prototypes to rebalance the predictions of the global model. The proposed approach improves accuracy by an average of +7.8% over the current state of the art, and code is available for reproducing the results.
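The rebalancing idea described above can be sketched in a few lines. This is an illustrative sketch, not the paper's implementation: the diagonal-Gaussian prototypes, the toy feature dimensions, and the one-shot ridge fit of the classifier head are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: clients report per-class Gaussian statistics
# (mean and diagonal variance) of their frozen-backbone features.
# The server samples an equal number of synthetic features per class
# from these "generative prototypes" to rebalance the global classifier.

def sample_from_prototypes(protos, n_per_class, rng):
    """protos: {class_id: (mean, var)} with diagonal covariance."""
    feats, labels = [], []
    for cls, (mu, var) in protos.items():
        feats.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.size)))
        labels.append(np.full(n_per_class, cls))
    return np.vstack(feats), np.concatenate(labels)

# Toy prototypes for 3 classes in a 4-d feature space (illustrative numbers).
protos = {c: (rng.normal(c, 1.0, 4), np.ones(4) * 0.1) for c in range(3)}
X, y = sample_from_prototypes(protos, n_per_class=100, rng=rng)

# Refit a linear head on the synthetic, class-balanced feature set
# (a single ridge-regression step toward one-hot targets stands in
# for the gradient-based fine-tuning a real system would use).
Y = np.eye(3)[y]
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(4), X.T @ Y)
preds = (X @ W).argmax(axis=1)
```

Because every class contributes the same number of synthetic samples, the refitted head no longer inherits the class imbalance of any single client's data, which is the intuition behind using prototypes to correct classifier bias.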
Low Difficulty Summary (written by GrooveSquid.com, original content)
Imagine a world where machines can learn from lots of different devices without sharing their private data. That's what Federated Learning (FL) is all about! FL lets deep models train by distributing the computation across many devices while keeping each device's data safe. But when the data distribution changes over time, as it does in real-world environments, problems arise. This paper tackles two of those problems, called Incremental Bias and Federated Bias. It proposes a new way to fine-tune pre-trained models using special prompts, which reduces bias and makes predictions more accurate. The result? A +7.8% average increase in accuracy over the current best method!
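The "learn without sharing data" idea can be illustrated with Federated Averaging (FedAvg), a standard FL baseline rather than this paper's method: each client takes a gradient step on its own private data, and only the resulting model weights, never the raw data, are averaged on the server. The linear-regression task and all the numbers below are made up for the example.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step of linear least-squares on a client's private data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Five clients, each holding its own private local dataset.
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + rng.normal(scale=0.01, size=20)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(50):  # communication rounds
    # Each client trains locally; only the updated weights leave the device.
    local_ws = [local_step(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)  # server aggregates by averaging
```

The server only ever sees weight vectors, yet the averaged model still recovers the shared underlying pattern, which is exactly the privacy-preserving training the paragraph above describes.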

Keywords

» Artificial intelligence  » Continual learning  » Federated learning