Summary of A Survey on Contribution Evaluation in Vertical Federated Learning, by Yue Cui et al.
A Survey on Contribution Evaluation in Vertical Federated Learning
by Yue Cui, Chung-ju Huang, Yuzhu Zhang, Leye Wang, Lixin Fan, Xiaofang Zhou, Qiang Yang
First submitted to arXiv on: 3 May 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Distributed, Parallel, and Cluster Computing (cs.DC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract on arXiv |
Medium | GrooveSquid.com (original content) | Vertical Federated Learning (VFL) is a crucial approach to addressing privacy concerns in machine learning. VFL enables entities holding distinct feature sets for the same user population to collaboratively train predictive models without directly sharing data. To maintain trust and ensure equitable resource sharing, evaluating each entity’s contribution is essential. This paper provides a comprehensive review of contribution evaluation techniques in VFL, categorizing them by lifecycle stage, granularity, privacy considerations, and underlying computational method. The authors examine the tasks that rely on contribution evaluation, analyzing the properties each task requires and how it relates to the phases of VFL. They also present open challenges and a vision for advancing contribution evaluation in VFL, aiming to guide researchers and practitioners in designing more effective, efficient, and privacy-centric VFL solutions. |
Low | GrooveSquid.com (original content) | This paper is about how people can work together on machine learning projects without sharing their data, using something called Vertical Federated Learning (VFL). It’s like a team project where everyone has different information, but they all want to build one big model. Since nobody wants to give away their secrets, they need a fair way to measure how much each person’s information actually helps the final model. The authors looked at lots of ways people are doing this, tried to understand what’s working and what’s not, and talked about some challenges ahead, hoping to help people build better VFL projects in the future. |
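To make the notion of contribution evaluation concrete, below is a minimal, illustrative sketch of one common computational approach in this area: Shapley-value-based evaluation of feature-holding parties. Everything in it is a simplifying assumption rather than a detail from the paper: the dataset, the three-party vertical split, and the `utility` function (validation accuracy of a plain model trained on a coalition's joined features) are hypothetical, and a real VFL deployment would compute utilities without pooling raw features and would approximate the Shapley value rather than enumerate every coalition.

```python
"""Illustrative sketch (not from the paper): Shapley-value contribution
evaluation over a hypothetical vertical partition of a public dataset.
Each "party" holds a disjoint block of feature columns for the same users."""
from itertools import combinations
from math import factorial

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical vertical split: three parties hold disjoint feature columns.
X, y = load_breast_cancer(return_X_y=True)
parties = {0: range(0, 10), 1: range(10, 20), 2: range(20, 30)}
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)


def utility(coalition):
    """Validation accuracy of a model trained on the coalition's joint features."""
    if not coalition:
        # Empty coalition: majority-class baseline accuracy.
        return max(np.mean(y_te), 1 - np.mean(y_te))
    cols = [c for p in coalition for c in parties[p]]
    model = LogisticRegression(max_iter=5000).fit(X_tr[:, cols], y_tr)
    return model.score(X_te[:, cols], y_te)


def shapley_values(party_ids):
    """Exact Shapley values; only feasible for a handful of parties."""
    n = len(party_ids)
    values = {p: 0.0 for p in party_ids}
    for p in party_ids:
        others = [q for q in party_ids if q != p]
        for k in range(n):
            for subset in combinations(others, k):
                # Weighted marginal gain of adding party p to this coalition.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                values[p] += weight * (utility(set(subset) | {p}) - utility(set(subset)))
    return values


print(shapley_values(list(parties)))
```

Exact computation needs on the order of 2^n utility evaluations for n parties, which is one reason practical schemes rely on approximation and on privacy-preserving protocols for computing the utility itself.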
Keywords
» Artificial intelligence » Federated learning » Machine learning