Summary of On Building Myopic MPC Policies using Supervised Learning, by Christopher A. Orrico et al.
On Building Myopic MPC Policies using Supervised Learning
by Christopher A. Orrico, Bokan Yang, Dinesh Krishnamoorthy
First submitted to arXiv on: 23 Jan 2024
Categories
- Main: Machine Learning (cs.LG)
- Secondary: Systems and Control (eess.SY); Optimization and Control (math.OC)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | In this paper, the researchers explore an alternative approach to approximate explicit model predictive control (MPC): supervised learning techniques are used to learn the optimal value function offline. A pre-trained neural network then serves as the cost-to-go function in a myopic MPC with a short prediction horizon, reducing the online computational burden without sacrificing controller performance. The approach differs from existing work in that it trains on state-value pairs collected offline rather than on closed-loop performance data, and a sensitivity-based data augmentation scheme is proposed to reduce the cost of generating these state-value pairs. (A hedged code sketch of the idea follows the table.) |
Low | GrooveSquid.com (original content) | This paper takes a different route in model predictive control (MPC). Instead of learning the MPC policy itself, the researchers learn the optimal value function offline using supervised learning. This value function is then used as the cost-to-go function in a myopic MPC with a short prediction horizon, which reduces the online computational burden without hurting controller performance. Unlike previous work that used closed-loop data, the training relies on state-value pairs collected offline. |
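The paper itself does not include code. The sketch below is one plausible reading of the pipeline, not the authors' implementation: the toy double-integrator dynamics, the rollout-based value targets, the network size, and the one-step horizon are all assumptions made for illustration.

```python
# Hedged sketch: fit a value function V(x) from offline state-value pairs,
# then use it as the cost-to-go (terminal cost) in a one-step "myopic" MPC.
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize

# Assumed toy dynamics x+ = A x + B u with a quadratic stage cost.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q, R = np.eye(2), 0.1 * np.eye(1)

def stage_cost(x, u):
    return float(x @ Q @ x + u @ R @ u)

# --- Offline phase: collect state-value pairs ---------------------------
# Here the "true" value is approximated by a long rollout under a fixed
# stabilizing gain; the paper instead solves long-horizon optimal control
# problems to obtain these targets.
K = np.array([[0.5, 1.0]])  # assumed stabilizing feedback gain

def rollout_value(x0, horizon=200):
    x, v = x0.copy(), 0.0
    for _ in range(horizon):
        u = -K @ x
        v += stage_cost(x, u)
        x = A @ x + B @ u
    return v

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))
V = np.array([rollout_value(x) for x in X])

# --- Supervised learning of the cost-to-go ------------------------------
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
Xt = torch.tensor(X, dtype=torch.float32)
Vt = torch.tensor(V, dtype=torch.float32).unsqueeze(1)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(Xt), Vt)
    loss.backward()
    opt.step()

# --- Online phase: myopic MPC using the learned value as cost-to-go -----
def myopic_mpc(x):
    def obj(u):
        x_next = A @ x + B @ u
        with torch.no_grad():
            v_next = net(torch.tensor(x_next, dtype=torch.float32)).item()
        return stage_cost(x, u) + v_next
    return minimize(obj, x0=np.zeros(1)).x  # one-step prediction horizon

x = np.array([1.5, -0.5])
for _ in range(5):
    x = A @ x + B @ myopic_mpc(x)
print("state after 5 closed-loop steps:", x)
```

The sensitivity-based data augmentation can be sketched in the same hedged spirit. Our reading of the idea: the gradient of the optimal value with respect to the initial state (available from parametric sensitivity of the solved optimal control problem) turns each expensive state-value pair into many cheap first-order Taylor samples. The quadratic value function below is a stand-in assumption used only to make the example self-contained.

```python
import numpy as np

def augment(x, v, grad_v, n_aug=10, radius=0.05, rng=None):
    """Extra (state, value) pairs near x via V(x + d) ~ V(x) + grad_v . d."""
    rng = rng or np.random.default_rng()
    deltas = rng.uniform(-radius, radius, size=(n_aug, x.size))
    return x + deltas, v + deltas @ grad_v

# Example with a known quadratic value V(x) = x' P x, so grad V(x) = 2 P x.
P = np.array([[2.0, 0.3], [0.3, 1.0]])
x0 = np.array([1.0, -0.5])
X_aug, V_aug = augment(x0, x0 @ P @ x0, 2 * P @ x0)
print(X_aug.shape, V_aug.shape)  # (10, 2) (10,)
```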
Keywords
* Artificial intelligence
* Data augmentation
* Supervised learning