Summary of Enabling On-device Learning Via Experience Replay with Efficient Dataset Condensation, by Gelei Xu et al.
Enabling On-Device Learning via Experience Replay with Efficient Dataset Condensation
by Gelei Xu, Ningzhi Tang, Jun Xia, Wei Jin, Yiyu Shi
First submitted to arXiv on: 25 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper proposes an on-device framework for further learning from streaming data to improve model accuracy. The challenge lies in extracting representative features from unlabeled, non-i.i.d., and one-time-seen data. To address these challenges, the authors propose a pseudo-labeling technique for unlabeled on-device learning environments, as well as a dataset condensation method that requires limited computational resources. Additionally, they use a contrastive learning objective to improve class purity within the buffer. The results show significant improvements over existing methods, particularly when buffer capacity is restricted. |
| Low | GrooveSquid.com (original content) | This paper helps machines learn better from new data on devices like smartphones or smart home appliances. It’s hard because this new data isn’t labeled, and it’s only seen once. To make it easier, the authors created a special way to find the most important parts of this new data and store them in a small memory buffer. They also made a technique that helps remove mistakes when labeling these samples. This new method performs much better than previous methods, even when there isn’t much space to store the data. |
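The summaries above mention two ingredients: pseudo-labeling of unlabeled streaming samples and a small, class-balanced replay buffer. The paper's actual algorithm is not reproduced here; the sketch below is only a minimal illustration of those two ideas, with all names, the confidence threshold, and the eviction policy being assumptions rather than details from the paper:

```python
import random

def pseudo_label(model, sample, threshold=0.9):
    """Assign a pseudo-label only when the model is confident enough.

    `model` is any callable returning per-class probabilities; samples
    below the (assumed) confidence threshold are discarded as unreliable.
    """
    probs = model(sample)
    label = max(range(len(probs)), key=lambda c: probs[c])
    return label if probs[label] >= threshold else None

class ReplayBuffer:
    """Fixed-capacity buffer that keeps roughly class-balanced slots.

    When full, it evicts a random sample from the largest class, a simple
    stand-in for the paper's condensation/purity mechanisms.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = {}  # class label -> list of stored samples

    def add(self, sample, label):
        bucket = self.slots.setdefault(label, [])
        total = sum(len(b) for b in self.slots.values())
        if total >= self.capacity:
            # Evict from the largest class to keep the buffer balanced.
            biggest = max(self.slots, key=lambda c: len(self.slots[c]))
            victims = self.slots[biggest]
            victims.pop(random.randrange(len(victims)))
        bucket.append(sample)

    def draw(self, k):
        """Sample up to k (sample, label) pairs for a replay step."""
        pool = [(s, c) for c, b in self.slots.items() for s in b]
        return random.sample(pool, min(k, len(pool)))
```

In use, each incoming stream sample would first pass through `pseudo_label`; only confidently labeled samples enter the buffer, which is later drawn from to rehearse past data during on-device updates.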