Summary of Multi-objective Evolution of Heuristic Using Large Language Model, by Shunyu Yao et al.
Multi-objective Evolution of Heuristic Using Large Language Model
by Shunyu Yao, Fei Liu, Xi Lin, Zhichao Lu, Zhenkun Wang, Qingfu Zhang
First submitted to arXiv on: 25 Sep 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: None
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A machine learning framework for automatic heuristic search is proposed, leveraging Large Language Models (LLMs) to generate effective heuristics for various optimization problems. The framework, Multi-objective Evolution of Heuristic (MEoH), considers not only optimal performance but also efficiency and scalability, which are crucial in practice. MEoH integrates LLMs in a zero-shot manner to generate a non-dominated set of heuristics that meet multiple design criteria. The framework is demonstrated on two well-known combinatorial optimization problems: the online Bin Packing Problem (BPP) and the Traveling Salesman Problem (TSP). The results show that MEoH successfully generates elite heuristics, offering more trade-off options than existing methods, while achieving competitive or superior performance and improving efficiency by up to 10 times. This framework can lead to novel insights into heuristic design and discover diverse heuristics. |
Low | GrooveSquid.com (original content) | This paper is about a new way to create helpful rules for solving optimization problems. These rules are like recipes that computers use to find the best solution. The problem is that these rules often need to be made by hand, which is time-consuming. Researchers have been trying to use large language models (LLMs) to help make these rules automatically. But so far, they have only focused on making sure a rule works well for one specific goal. In this paper, the authors propose a new way of creating rules that takes into account multiple criteria, not just how well a rule performs. This allows computers to generate many different rules that work well in different situations. The authors tested their method on two famous optimization problems and found that it was able to create high-quality rules quickly. |
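To make the "non-dominated set of heuristics" idea concrete, here is a minimal sketch (not the paper's implementation) of Pareto filtering over candidate heuristics scored on two objectives, solution quality and runtime, the kind of trade-off selection a multi-objective framework like MEoH performs. The heuristic names and scores below are hypothetical, chosen only to illustrate dominance.

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly
    better on at least one (both objectives are minimized here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(candidates):
    """Keep only candidates whose scores no other candidate dominates."""
    return [
        (name, score) for name, score in candidates
        if not any(dominates(other, score)
                   for _, other in candidates if other != score)
    ]

# Hypothetical candidate heuristics: (name, (optimality gap %, runtime ms)).
pool = [
    ("best_fit",  (6.9, 1.0)),
    ("first_fit", (7.0, 0.5)),
    ("llm_h1",    (4.1, 3.2)),
    ("llm_h2",    (4.1, 5.0)),  # dominated by llm_h1: same gap, slower
]
front = non_dominated(pool)
print([name for name, _ in front])  # prints ['best_fit', 'first_fit', 'llm_h1']
```

The surviving heuristics form the Pareto front the summary describes: each represents a different quality/efficiency trade-off, and none is strictly better than another on both objectives.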
Keywords
» Artificial intelligence » Machine learning » Optimization » Zero shot