Summary of On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations, by Chester Holtz et al.
On Robustness and Generalization of ML-Based Congestion Predictors to Valid and Imperceptible Perturbations
by Chester Holtz, Yucheng Wang, Chung-Kuan Cheng, Bill Lin
First submitted to arXiv on: 29 Feb 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Hardware Architecture (cs.AR)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | High Difficulty Summary: Read the original abstract here |
Medium | GrooveSquid.com (original content) | Medium Difficulty Summary: This paper investigates the robustness of machine learning (ML)-based electronic design automation (EDA) tools, focusing on congestion prediction. While deep learning methods have achieved impressive results on this task, recent research has shown that neural networks are vulnerable to small input perturbations. The authors study this vulnerability in the context of ML-based EDA and propose a novel approach to improve robustness and achieve better performance. They evaluate their method on various benchmarks, demonstrating its effectiveness in predicting congestion. (A toy illustration of a small input perturbation appears below this table.) |
Low | GrooveSquid.com (original content) | Low Difficulty Summary: This paper is about making computer-aided design (CAD) tools more reliable. Right now, these tools use machine learning (ML) to predict things like how congested a circuit will be. But what if someone intentionally changes a tiny part of the input? Current ML methods can’t handle this and fail. The authors are trying to fix that by making their method more robust, so it still works even when small changes are made to the input. |
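To make the idea of a "small input perturbation" concrete, here is a minimal, hypothetical sketch (not the authors' construction of valid, imperceptible placement perturbations): a gradient-sign (FGSM-style) nudge applied to a toy input grid fed to a stand-in congestion predictor in PyTorch. The predictor architecture, grid size, and step size `epsilon` are all illustrative assumptions, not details from the paper.

```python
# A minimal sketch (illustrative only): gradient-based perturbation of a
# placement-density grid fed to a hypothetical CNN congestion predictor.
import torch
import torch.nn as nn

# Hypothetical predictor: maps a 1-channel density grid to per-tile congestion.
predictor = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

density = torch.rand(1, 1, 64, 64, requires_grad=True)  # toy input grid
target = torch.zeros(1, 1, 64, 64)                      # toy "ground truth"

# Compute gradients of the loss with respect to the input grid.
loss = nn.functional.mse_loss(predictor(density), target)
loss.backward()

# FGSM-style step: a small sign-of-gradient perturbation that is hard to spot
# in the input but can shift the model's prediction.
epsilon = 1e-2
perturbed = (density + epsilon * density.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    delta = (predictor(perturbed) - predictor(density)).abs().max()
print(f"max change in predicted congestion: {delta.item():.4f}")
```

Running the sketch prints how much the toy predictor's output changes under the perturbation; the paper studies this kind of sensitivity for real congestion predictors and how to make them robust to it.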
Keywords
* Artificial intelligence
* Deep learning
* Machine learning