


Verification-Aided Learning of Neural Network Barrier Functions with Termination Guarantees

by Shaoru Chen, Lekan Molu, Mahyar Fazlyab

First submitted to arXiv on: 12 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Systems and Control (eess.SY)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The paper's original abstract serves as the high difficulty summary and can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
This paper addresses the challenge of automating the synthesis of barrier functions, which certify safety of dynamical systems. It builds on recent approaches that use self-supervised learning to train such functions, but adds a holistic framework with finite-step termination guarantees. The framework first learns an empirically well-behaved neural network (NN) basis function and then applies a fine-tuning algorithm that exploits convexity and the counterexamples returned by failed verification attempts to find a valid barrier function. Across a range of neural network verifiers, this approach significantly improves on previous methods.
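To make the two-stage idea concrete, here is a minimal, self-contained sketch of a verification-aided loop in that spirit. Everything in it is an illustrative assumption rather than the paper's implementation: the toy linear dynamics, the polynomial feature map standing in for a pretrained NN basis, the grid-based check standing in for a real neural network verifier, and all function names (features, fit, verify_on_grid) are hypothetical. What it does show is the structure described above: fit only the output weights of a fixed basis (a convex problem), ask a verifier for counterexamples, and fold those counterexamples back into the training data until verification succeeds or an iteration budget runs out.

```python
"""Illustrative sketch only (not the paper's code): a counterexample-guided
barrier search where a fixed polynomial basis stands in for a pretrained NN
basis and a dense grid check stands in for a real neural network verifier."""
import numpy as np

rng = np.random.default_rng(0)

# Toy stable linear dynamics x_{k+1} = A x_k (an assumption for illustration).
theta = 0.3
A = 0.9 * np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])

MARGIN = 0.1  # required slack on the initial-set and unsafe-set conditions

def features(x):
    """Fixed basis phi(x): simple polynomials standing in for an NN basis."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.stack([x1**2, x2**2, x1 * x2, x1, x2, np.ones_like(x1)], axis=-1)

def violations(w, x):
    """Hinge violations of three barrier-style conditions:
    B <= -MARGIN on the initial set (||x|| <= 0.5),
    B >=  MARGIN on the unsafe set (||x|| >= 2),
    B(Ax) <= B(x) on the whole domain."""
    r = np.linalg.norm(x, axis=-1)
    b, b_next = features(x) @ w, features(x @ A.T) @ w
    v = np.maximum(0.0, b_next - b)                    # decrease condition
    init, unsafe = r <= 0.5, r >= 2.0
    v[init] += np.maximum(0.0, b[init] + MARGIN)       # initial-set condition
    v[unsafe] += np.maximum(0.0, MARGIN - b[unsafe])   # unsafe-set condition
    return v

def fit(w, x, steps=3000, lr=0.05):
    """Subgradient descent on the summed hinge loss. Only the output weights
    w of the fixed basis are optimized, so each fitting problem is convex."""
    r = np.linalg.norm(x, axis=-1)
    phi, phi_next = features(x), features(x @ A.T)
    init, unsafe = r <= 0.5, r >= 2.0
    for _ in range(steps):
        b, b_next = phi @ w, phi_next @ w
        g = (phi_next - phi)[b_next > b].sum(axis=0)    # decrease term
        g += phi[init & (b > -MARGIN)].sum(axis=0)      # initial-set term
        g -= phi[unsafe & (b < MARGIN)].sum(axis=0)     # unsafe-set term
        w = w - lr * g / len(x)
    return w

def verify_on_grid(w, n=60):
    """Stand-in for a real NN verifier: check the conditions on a dense grid
    and return the violating states as counterexamples."""
    g = np.linspace(-3.0, 3.0, n)
    pts = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
    bad = pts[violations(w, pts) > 1e-6]
    return len(bad) == 0, bad

# Stage 1: fit a candidate barrier on random samples of the domain.
data = rng.uniform(-3.0, 3.0, size=(400, 2))
w = 0.1 * rng.normal(size=6)
w = fit(w, data)

# Stage 2: counterexample-guided fine-tuning with a finite iteration budget.
for it in range(10):
    ok, cex = verify_on_grid(w)
    print(f"iteration {it}: verified={ok}, counterexamples={len(cex)}")
    if ok:
        break
    data = np.vstack([data, cex])   # reuse verifier failures as training data
    w = fit(w, data)
```

In this sketch the basis is frozen after stage 1, which is what makes each refit convex; the paper's actual fine-tuning procedure and termination argument are more involved than this toy loop.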
Low Difficulty Summary (GrooveSquid.com original content)
The paper proposes a new way to learn safety guarantees for systems. It’s like trying to solve a puzzle, but instead of pieces, you’re using special functions called “barrier functions”. These functions help keep systems safe by preventing bad things from happening. The trouble is that barrier functions are hard to find, so the paper uses a technique called “self-supervised learning” to learn them. This works well in practice, but sometimes the learned function fails a safety check and has to be fixed. To handle this, the paper suggests a two-step process: first, learn a good starting point for the barrier function, and then use feedback from the failed checks to make it better. This approach is more reliable and efficient than previous methods.
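To connect “preventing bad things from happening” to the math, below is one standard way barrier conditions are written for a discrete-time system x_{k+1} = f(x_k) with state domain X, initial set X_0, and unsafe set X_u. This is a common textbook-style formulation offered only as a sketch; the paper’s exact conditions may differ.

```latex
% One common discrete-time barrier certificate formulation
% (illustrative sketch; the paper's exact conditions may differ).
\begin{align*}
  &B(x) \le 0        && \forall x \in \mathcal{X}_0
      && \text{(initial set lies in the zero-sublevel set)} \\
  &B(x) > 0          && \forall x \in \mathcal{X}_u
      && \text{(unsafe set lies outside it)} \\
  &B(f(x)) \le B(x)  && \forall x \in \mathcal{X},\ B(x) \le 0
      && \text{(the sublevel set is forward invariant)}
\end{align*}
```

Under these conditions, any trajectory starting in X_0 stays in the set where B(x) ≤ 0 and therefore never reaches X_u.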

Keywords

  • Artificial intelligence
  • Fine-tuning
  • Neural network
  • Self-supervised learning