Summary of A Benchmark for Maximum Cut: Towards Standardization of the Evaluation of Learned Heuristics for Combinatorial Optimization, by Ankur Nath et al.
A Benchmark for Maximum Cut: Towards Standardization of the Evaluation of Learned Heuristics for Combinatorial Optimization
by Ankur Nath, Alan Kuhnle
First submitted to arXiv on: 14 Jun 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Machine Learning (cs.LG); Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed MaxCut-Bench benchmark suite evaluates heuristics for the NP-hard Maximum Cut problem in both its weighted and unweighted variants. The suite offers a unified interface to traditional and machine-learning-based approaches, including S2V-DQN, ECO-DQN, and others. The authors systematically attempt to reproduce the reported results of these learning-based approaches along three dimensions: objective value, generalization, and scalability. They find that some learned heuristics fail to outperform a naive greedy algorithm (a sketch of such a baseline appears below the table), while one approach consistently outperforms Tabu Search. Furthermore, replacing the GNN in ECO-DQN with a simple linear regression on relevant features improves performance (also sketched below). |
Low | GrooveSquid.com (original content) | The paper introduces a new benchmark suite for the Maximum Cut problem, a hard optimization problem that researchers often try to solve with Graph Neural Networks (GNNs). The authors want to compare how well different heuristics handle this problem, so they assemble a common set of test instances and evaluate several methods, including S2V-DQN and ECO-DQN. Some of these methods are better than others, and some even fail to beat a simple greedy algorithm. The researchers also find that one method works just as well, or better, when its neural network is replaced by a much simpler model. |
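
As a concrete illustration of the naive greedy baseline mentioned in the summaries above, here is a minimal Python sketch. It assumes a plain edge-list representation of the graph and shows a standard greedy construction for Max-Cut; it is an illustrative example under those assumptions, not the benchmark's actual implementation.

```python
# Minimal greedy Max-Cut sketch (illustration, not the benchmark's code).
# Each vertex is placed, one at a time, on whichever side of the partition
# currently yields the larger cut contribution with already-placed neighbors.

def greedy_max_cut(num_nodes, edges):
    """edges: list of (u, v, weight) tuples; returns (cut_value, side)."""
    side = {}  # node -> 0 or 1
    adj = {v: [] for v in range(num_nodes)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    for v in range(num_nodes):
        gain = [0.0, 0.0]  # cut weight gained by placing v on side 0 or 1
        for u, w in adj[v]:
            if u in side:
                gain[1 - side[u]] += w  # placing v opposite u adds w to the cut
        side[v] = 0 if gain[0] >= gain[1] else 1

    cut = sum(w for u, v, w in edges if side[u] != side[v])
    return cut, side


if __name__ == "__main__":
    # Tiny unweighted example: a 4-cycle, whose optimal cut value is 4.
    edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
    print(greedy_max_cut(4, edges))
```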
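
The idea of swapping the GNN for a simple linear model can also be sketched. The snippet below scores candidate vertex flips with a linear function of two hypothetical per-vertex features (the immediate change in cut value if the vertex is flipped, and the fraction of its neighbors on the same side); both the feature choice and the weight vector are assumptions made for illustration, not the features or model reported in the paper.

```python
# Illustration only: scoring flip actions with a linear model on hand-crafted
# per-vertex features instead of a GNN. The features and weights below are
# hypothetical and chosen purely for demonstration.
import numpy as np

def vertex_features(v, side, adj):
    """Hypothetical features for vertex v under the current partition."""
    # Change in cut value if v is flipped: same-side edges become cut (+w),
    # already-cut edges become uncut (-w).
    flip_gain = sum(w if side[u] == side[v] else -w for u, w in adj[v])
    # Fraction of neighbors currently on the same side as v.
    same_frac = (sum(1 for u, _ in adj[v] if side[u] == side[v])
                 / max(len(adj[v]), 1))
    return np.array([flip_gain, same_frac])

def pick_flip(side, adj, weights):
    """Return the vertex whose flip has the highest linear score."""
    scores = {v: float(weights @ vertex_features(v, side, adj)) for v in adj}
    return max(scores, key=scores.get)

if __name__ == "__main__":
    # Triangle with unit weights, all vertices starting on side 0.
    adj = {0: [(1, 1.0), (2, 1.0)],
           1: [(0, 1.0), (2, 1.0)],
           2: [(0, 1.0), (1, 1.0)]}
    side = {0: 0, 1: 0, 2: 0}
    weights = np.array([1.0, 0.5])  # arbitrary illustrative weights
    print(pick_flip(side, adj, weights))  # all vertices tie at the same score
```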
Keywords
* Artificial intelligence * Generalization * GNN * Linear regression * Machine learning