Deep Multi-Objective Reinforcement Learning for Utility-Based Infrastructural Maintenance Optimization
by Jesse van Remmerden, Maurice Kenter, Diederik M. Roijers, Charalampos Andriotis, Yingqian Zhang, Zaharah Bukhsh
First submitted to arXiv on: 10 Jun 2024
Categories
- Main: Artificial Intelligence (cs.AI)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | A novel multi-objective reinforcement learning method, called Multi-Objective Deep Centralized Multi-Agent Actor-Critic (MO-DCMAC), is proposed for optimizing infrastructural maintenance. The approach directly optimizes a policy for multiple objectives, such as probability of collapse and cost, even when the utility function combining them is non-linear. The method was evaluated with two utility functions, one threshold-based and one based on FMECA analysis, across several maintenance environments, including a case study of Amsterdam's historical quay walls. MO-DCMAC outperformed traditional rule-based policies across the different scenarios and utility functions.
Low | GrooveSquid.com (original content) | MO-DCMAC is a new way to optimize how we take care of important structures like bridges and buildings. Right now, people use fixed rules to decide when to repair things, but this can lead to problems if a structure might collapse or cost too much money. The new method can handle multiple goals at once, like keeping the structure safe and saving money. It was tested in different situations, including a real example of how it could help maintain Amsterdam's historic quay walls. The results show that MO-DCMAC beats the old rules in many cases.
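To make the "non-linear utility" idea concrete, here is a minimal illustrative sketch of a threshold-style utility over the two objectives mentioned in the summary (probability of collapse and cost). The function name, threshold value, and penalty weight are assumptions for illustration only, not taken from the paper:

```python
# Hypothetical sketch of a threshold-based, non-linear utility over two
# objectives (collapse probability and cost). All numeric values are
# illustrative assumptions, not figures from the paper.

def threshold_utility(p_collapse: float, cost: float,
                      p_threshold: float = 0.01,
                      penalty: float = 1e6) -> float:
    """Scalarize two objectives into one utility value.

    Cost is always penalized; exceeding the collapse-probability
    threshold adds a large extra penalty. The kink at the threshold
    makes the utility non-linear, so it cannot be written as a fixed
    weighted sum of the objectives.
    """
    utility = -cost
    if p_collapse > p_threshold:
        utility -= penalty * (p_collapse - p_threshold)
    return utility

# A maintenance plan that keeps collapse risk under the threshold is
# preferred even when it costs more:
safe = threshold_utility(p_collapse=0.005, cost=200.0)
risky = threshold_utility(p_collapse=0.05, cost=100.0)
```

Under such a utility, a method like MO-DCMAC can optimize the policy for the scalarized objective directly rather than searching over fixed linear weightings.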
Keywords
» Artificial intelligence » Probability » Reinforcement learning