
Summary of Benchmarking Sensitivity of Continual Graph Learning for Skeleton-Based Action Recognition, by Wei Wei et al.


Benchmarking Sensitivity of Continual Graph Learning for Skeleton-Based Action Recognition

by Wei Wei, Tom De Schepper, Kevin Mets

First submitted to arXiv on: 31 Jan 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.
Medium Difficulty Summary (original content by GrooveSquid.com)
This research paper focuses on continual learning (CL) for graph neural networks (GNNs). The authors examine the effects of pre-training GNNs, which can lead to negative transfer after fine-tuning. To study this, they introduce a benchmark for continual graph learning (CGL) based on spatio-temporal graphs and evaluate well-known CGL methods in this setting. The benchmark builds on two datasets for skeleton-based action recognition: N-UCLA and NTU-RGB+D. The authors also investigate how sensitive CGL methods are to task order and class order, revealing that methods robust to task order can still be sensitive to class order (a small illustrative sketch of such a class-order experiment follows these summaries). Their findings contradict previous empirical observations on architectural sensitivity in CL.
Low Difficulty Summary (original content by GrooveSquid.com)
This research is about making machine learning models smarter by letting them learn new things without being retrained from scratch. The authors want to make sure these models don't forget what they already know when learning something new. They created a test to see how well different models handle this, using a special kind of data called spatio-temporal graphs. They found that some models are better than others at learning new things without forgetting old ones, and that the order in which new things are learned matters.

Keywords

* Artificial intelligence  * Continual learning  * Fine tuning  * Machine learning