Overcoming Catastrophic Forgetting in Federated Class-Incremental Learning via Federated Global Twin Generator

by Thinh Nguyen, Khoa D Doan, Binh T. Nguyen, Danh Le-Phuoc, Kok-Seng Wong

First submitted to arXiv on: 13 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The paper's original abstract serves as the high difficulty summary; it is available on arXiv.

Medium Difficulty Summary (GrooveSquid.com, original content)
This research paper presents Federated Global Twin Generator (FedGTG), a novel framework for Federated Class-Incremental Learning (FCIL) that addresses catastrophic forgetting in decentralized settings. FCIL lets multiple participants collaboratively train a global model on a sequence of tasks without sharing private data, but conventional algorithms such as FedAvg suffer significant performance drops on earlier tasks. Recent works employ generative models to mitigate this issue, yet their test accuracy on previously seen classes still lags behind that on recent classes. To overcome these issues, the authors propose an FCIL framework that trains generative models on the server side in a privacy-preserving way, without accessing client data. The server trains a data generator and a feature generator to create synthetic information from all seen classes, which is then sent to the clients. Clients use feature-direction-controlling losses to retain old knowledge while learning new tasks well. The paper also analyzes the robustness of FedGTG on natural images, its ability to converge to flat local minima, and its prediction confidence (calibration). Experiments on CIFAR-10, CIFAR-100, and tiny-ImageNet show improvements in accuracy and forgetting measures over previous frameworks.
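To make the server-side mechanics concrete, here is a minimal PyTorch sketch of the "twin" generators described above. The class names, architectures, and the data-free training objective are illustrative assumptions based only on this summary, not the authors' implementation; the summary also leaves open whether the server ships synthetic batches or the generators themselves, and this sketch assumes the latter.

```python
# Hypothetical sketch of FedGTG's server side: a data generator and a feature
# generator trained without any client data. All names and objectives here are
# assumptions for illustration, not the authors' code.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Stand-in global model exposing both logits and penultimate features."""
    def __init__(self, in_dim=3 * 32 * 32, feat_dim=512, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)
    def forward(self, x):
        f = self.backbone(x)
        return self.head(f), f

class DataGenerator(nn.Module):
    """Noise + class label -> synthetic image in the input space."""
    def __init__(self, noise_dim=100, num_classes=10, img_shape=(3, 32, 32)):
        super().__init__()
        self.img_shape = img_shape
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, math.prod(img_shape)), nn.Tanh())
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1)).view(-1, *self.img_shape)

class FeatureGenerator(nn.Module):
    """Noise + class label -> feature vector mimicking the global model's features."""
    def __init__(self, noise_dim=100, num_classes=10, feat_dim=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 256), nn.ReLU(),
            nn.Linear(256, feat_dim))
    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

def train_generators(global_model, num_seen, steps=200, batch=64, noise_dim=100):
    """Data-free server training: the frozen global model must classify each
    generated sample (and each generated feature, via its head) as the label
    it was conditioned on. This is one common data-free recipe, assumed here."""
    g_data = DataGenerator(noise_dim, num_seen)
    g_feat = FeatureGenerator(noise_dim, num_seen)
    opt = torch.optim.Adam(list(g_data.parameters()) + list(g_feat.parameters()), lr=1e-3)
    global_model.eval()
    for _ in range(steps):
        y = torch.randint(0, num_seen, (batch,))
        y1h = F.one_hot(y, num_seen).float()
        z = torch.randn(batch, noise_dim)
        img_logits, _ = global_model(g_data(z, y1h))   # classify synthetic images
        feat_logits = global_model.head(g_feat(z, y1h))  # classify synthetic features
        loss = F.cross_entropy(img_logits, y) + F.cross_entropy(feat_logits, y)
        opt.zero_grad(); loss.backward(); opt.step()
    return g_data, g_feat

# Example: 10 classes seen so far on a CIFAR-10-sized input space.
g_data, g_feat = train_generators(TinyNet(num_classes=10), num_seen=10, steps=10)
```

The key property this sketch preserves is that the server only ever touches the global model and random noise, never client data, which is what makes the generative training privacy-preserving.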
Low Difficulty Summary (GrooveSquid.com, original content)
FCIL is a way for multiple people or organizations to train a shared machine learning model together without revealing their private data. This keeps their information safe while still letting the model improve over time. The problem is that as the model learns a sequence of new tasks, it tends to forget what it learned earlier; this is called catastrophic forgetting. Researchers have tried to fix this with special models that can generate new data: these are trained on all the classes seen so far, and their output helps the model learn new things without forgetting old ones. The new framework, called Federated Global Twin Generator (FedGTG), takes an approach that doesn't require access to individual client data. Instead, it trains two generators at the global level: one that produces synthetic data and one that produces features. Together they help the model keep its old knowledge while learning new tasks; a sketch of the client-side training step follows below. The researchers tested FedGTG on three datasets and found that it performed better than previous approaches.
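Continuing the Python sketch above (and reusing its imports and classes), the snippet below shows one plausible client update: a cross-entropy loss on local data for the new task, a replay loss on synthetic images, and a cosine-alignment term standing in for the "feature-direction-controlling" losses, whose exact form this summary does not give.

```python
# Hypothetical client step, continuing the server-side sketch above. The replay
# and cosine-alignment losses are stand-in assumptions for FedGTG's
# feature-direction-controlling losses, which the summary does not detail.
def client_update(model, local_loader, g_data, g_feat, num_seen,
                  noise_dim=100, lr=0.01, lam_replay=1.0, lam_dir=1.0):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for x, y in local_loader:
        logits, _ = model(x)
        loss = F.cross_entropy(logits, y)              # learn the new task
        # Replay old classes through the server-trained generators.
        y_old = torch.randint(0, num_seen, (x.size(0),))
        y1h = F.one_hot(y_old, num_seen).float()
        z = torch.randn(x.size(0), noise_dim)
        with torch.no_grad():                          # generators stay frozen on clients
            x_syn, f_syn = g_data(z, y1h), g_feat(z, y1h)
        syn_logits, f_model = model(x_syn)
        loss = loss + lam_replay * F.cross_entropy(syn_logits, y_old)
        # Pull the model's features on synthetic inputs toward the generated
        # feature directions (cosine alignment).
        loss = loss + lam_dir * (1 - F.cosine_similarity(f_model, f_syn, dim=1)).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()
```

In a full round, the server would aggregate the returned state dicts (for example, FedAvg-style weight averaging) into the next global model before retraining the generators on the newly seen classes.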

Keywords

  • Artificial intelligence
  • Generative model
  • Machine learning
  • Synthetic data