Certifiably Byzantine-Robust Federated Conformal Prediction

by Mintong Kang, Zhen Lin, Jimeng Sun, Cao Xiao, Bo Li

First submitted to arXiv on: 4 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
Conformal prediction has been successfully applied to construct statistically rigorous prediction sets for machine learning models with exchangeable data samples. However, recent extensions of conformal prediction to federated environments have proven vulnerable to Byzantine failures: a malicious subset of clients can significantly compromise the coverage guarantee (a minimal illustrative sketch of this failure mode follows the summaries below). To address this issue, the authors introduce Rob-FCP, a framework for robust federated conformal prediction that counters malicious clients capable of reporting arbitrary statistics. They provide theoretical bounds on Rob-FCP's conformal coverage in the Byzantine setting and demonstrate its effectiveness against diverse proportions of malicious clients on five benchmark datasets.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper is about making sure that predictions made by machine learning models are accurate and trustworthy, even when data is shared between different groups or organizations. Right now, this process can be vulnerable to attacks from bad actors who might try to manipulate the results. To fix this problem, the authors introduce a new method called Rob-FCP that can detect and prevent these types of attacks. They show that their method works well on real-world datasets and helps ensure that predictions are reliable.
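
To make the vulnerability described in the medium difficulty summary concrete, here is a minimal, hypothetical Python sketch of naive federated conformal calibration. It is not the authors' Rob-FCP algorithm; the client counts, the Beta score distribution, and the function name federated_quantile are illustrative assumptions. Each client reports its calibration nonconformity scores, the server pools them into a quantile threshold, and a single Byzantine client reporting arbitrary statistics can shift that threshold enough to break the coverage guarantee.

```python
# Illustrative sketch only (not the authors' Rob-FCP method): naive federated
# conformal calibration and how a Byzantine client breaks its coverage.
import numpy as np

def federated_quantile(all_client_scores, alpha):
    """Naively pool the calibration scores reported by all clients and take
    the conformal quantile used as the prediction-set threshold."""
    pooled = np.concatenate(all_client_scores)
    n = len(pooled)
    k = int(np.ceil((n + 1) * (1 - alpha)))  # finite-sample correction
    return np.sort(pooled)[min(k, n) - 1]

rng = np.random.default_rng(0)
alpha = 0.1  # target: prediction sets covering the true label >= 90% of the time

# Honest clients: nonconformity scores (e.g. 1 - probability of the true label),
# drawn here from an assumed Beta distribution for illustration.
honest = [rng.beta(2, 5, size=200) for _ in range(4)]

# Byzantine client: reports arbitrarily small scores to shrink the threshold.
byzantine = [np.zeros(200)]

q_clean = federated_quantile(honest, alpha)
q_attacked = federated_quantile(honest + byzantine, alpha)
print(f"threshold without attack: {q_clean:.3f}")
print(f"threshold with one Byzantine client: {q_attacked:.3f}")
# A smaller threshold yields smaller prediction sets, so coverage on the honest
# clients' test points drops below the nominal 1 - alpha level.
```

Rob-FCP, as described in the summaries above, is designed to counter exactly this kind of arbitrary reporting and to certify coverage bounds in the Byzantine setting; the sketch only shows why naive pooling of client statistics is fragile.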

Keywords

» Artificial intelligence  » Machine learning