Summary of Few-shot Label Unlearning in Vertical Federated Learning, by Hanlin Gu et al.
Few-shot Label Unlearning in Vertical Federated Learning
by Hanlin Gu, Hong Xi Tae, Chee Seng Chan, Lixin Fan
First submitted to arXiv on: 14 Oct 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Cryptography and Security (cs.CR); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper and is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | The paper’s original abstract, available on its arXiv page. |
Medium | GrooveSquid.com (original content) | This paper addresses the challenge of label unlearning in Vertical Federated Learning (VFL), an area that has received far less attention than unlearning in horizontal federated learning. The authors introduce an approach designed specifically for label unlearning in VFL, targeting scenarios where the active party wants to mitigate the risk of label leakage. Their method combines manifold mixup, which augments the few available samples of the target label, with gradient ascent, which erases that label’s information from the model (a minimal code sketch of this idea follows the table). The combination achieves high unlearning effectiveness while remaining efficient, completing the unlearning procedure within seconds. The authors validate their approach through extensive experiments on diverse datasets, including MNIST, CIFAR10, CIFAR100, and ModelNet. |
Low | GrooveSquid.com (original content) | This paper solves a big problem in how computers learn together. Right now, when many different sources learn a model together, there’s a risk that some private information, like labels, gets leaked. The authors came up with a way to fix this by making sure the models used for learning don’t remember the unwanted labels. They did this by using special techniques to mix up the data and then get rid of the unwanted label information. This worked really well and was fast, taking only seconds. They tested it on lots of different types of data and showed that it works. |
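To make the medium-difficulty summary concrete, here is a minimal PyTorch sketch of the general recipe it describes: take a handful of samples of the label to be forgotten, augment them with manifold mixup in the embedding space, and run gradient ascent on the resulting loss so the model unlearns that label. The `SplitModel` class, the `unlearn_label` function, and all hyperparameters are illustrative assumptions for this sketch, not the authors’ actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitModel(nn.Module):
    """Toy stand-in for a party's model in VFL: an encoder that yields an
    intermediate embedding and a head that maps it to class logits."""
    def __init__(self, in_dim=32, hidden=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

def unlearn_label(model, x_few, y_few, steps=20, lr=1e-2, alpha=2.0):
    """Erase one label using only a few samples carrying that label.

    Manifold mixup interpolates the scarce samples' embeddings to augment
    them; gradient *ascent* on the mixed loss then pushes the model away
    from the label being forgotten.
    """
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    beta = torch.distributions.Beta(alpha, alpha)
    for _ in range(steps):
        lam = beta.sample().item()
        perm = torch.randperm(x_few.size(0))
        h = model.encoder(x_few)
        h_mix = lam * h + (1 - lam) * h[perm]          # manifold mixup
        logits = model.head(h_mix)
        loss = lam * F.cross_entropy(logits, y_few) \
             + (1 - lam) * F.cross_entropy(logits, y_few[perm])
        opt.zero_grad()
        (-loss).backward()                             # ascend instead of descend
        opt.step()
    return model

# Usage: forget label 3 given only 8 samples of that class.
model = SplitModel()
x_few = torch.randn(8, 32)
y_few = torch.full((8,), 3, dtype=torch.long)
unlearn_label(model, x_few, y_few)
```

Running ascent on mixed embeddings rather than on the raw few-shot inputs is what lets a handful of samples stand in for the full class distribution; in practice one would also guard accuracy on the remaining labels, which this sketch omits.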
Keywords
» Artificial intelligence » Attention » Federated learning