
Summary of Conformal Prediction with Learned Features, by Shayan Kiyani et al.


Conformal Prediction with Learned Features

by Shayan Kiyani, George Pappas, Hamed Hassani

First submitted to arXiv on: 26 Apr 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; read it on arXiv.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a new framework called Partition Learning Conformal Prediction (PLCP) for conformal prediction with conditional guarantees. Unlike previous work, which relies on predefined uncertainty structures, PLCP learns uncertainty-guided features from the calibration data using alternating gradient descent. The authors implement PLCP efficiently, analyze its theoretical properties, and establish conditional coverage guarantees in both the infinite- and finite-sample regimes. Experiments on four datasets show that PLCP outperforms state-of-the-art methods in coverage and prediction-set length across classification and regression tasks. (A rough code sketch of the alternating scheme follows these summaries.)

Low Difficulty Summary (original content by GrooveSquid.com)
PLCP is a new way to make predictions that come with a guarantee. It’s like having a map that shows you the possible places where your prediction could land, but this time the map is learned from data rather than built from predefined rules. The authors show how to run PLCP efficiently and prove that its guarantees hold for both small and large datasets. They also test PLCP on real-world and made-up (synthetic) datasets and find that it performs better than other methods at making accurate predictions.

Keywords

» Artificial intelligence  » Classification  » Gradient descent  » Regression