Robust support vector machines via conic optimization

by Valentina Cepeda, Andrés Gómez, Shaoning Han

First submitted to arXiv on: 2 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC); Computation (stat.CO)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here

Medium Difficulty Summary (written by GrooveSquid.com, original content)

This paper presents a novel approach to learning support vector machines (SVMs) that are robust to uncertainty, addressing the limitations of traditional loss functions such as the hinge loss. Using mixed-integer optimization techniques, the authors derive a new loss function that approximates the 0-1 loss while preserving convexity. The proposed estimator is competitive with the standard hinge-loss SVM in outlier-free regimes and outperforms it when outliers are present.
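
For context, the two losses contrasted above have simple textbook forms: the hinge loss is max(0, 1 - y·f(x)), which is convex but unbounded, while the 0-1 loss is 1[y·f(x) <= 0], which is bounded (and hence robust to outliers) but non-convex and hard to optimize. The paper's actual estimator comes from a conic, mixed-integer derivation that is not reproduced here; the short Python sketch below only illustrates the trade-off the summary describes, using the standard definitions of the two losses.

```python
import numpy as np

def hinge_loss(y, score):
    # Hinge loss max(0, 1 - y*score): convex, but unbounded,
    # so one badly mislabeled point can dominate the objective.
    return np.maximum(0.0, 1.0 - y * score)

def zero_one_loss(y, score):
    # 0-1 loss 1[y*score <= 0]: bounded by 1 (robust to outliers),
    # but non-convex and discontinuous, hence hard to optimize directly.
    return (y * score <= 0).astype(float)

# One correctly classified point and one extreme, mislabeled outlier.
y = np.array([1.0, 1.0])
score = np.array([2.0, -10.0])

print(hinge_loss(y, score))     # [ 0. 11.] -> the outlier dominates
print(zero_one_loss(y, score))  # [0. 1.]   -> the outlier's cost is capped
```

The outlier contributes 11 to the hinge objective but only 1 to the 0-1 objective; closing that gap without giving up convexity is the balance the proposed loss aims for.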

Low Difficulty Summary (written by GrooveSquid.com, original content)

We’re going to learn how to make support vector machines (SVMs) more reliable when there’s uncertainty or mistakes in the data. Right now, many loss functions used for SVMs are sensitive to these errors, which can make them perform poorly. Some methods try to use a 0-1 loss function or approximations, but they’re often too complicated and slow. This paper introduces a new way to find a good balance between being robust and efficient using mixed-integer optimization techniques. The results show that this approach is competitive with the usual SVMs in normal situations and even better when there are outliers.

Keywords

* Artificial intelligence
* Hinge loss
* Loss function
* Optimization