


Learning Run-time Safety Monitors for Machine Learning Components

by Ozan Vardal, Richard Hawkins, Colin Paterson, Chiara Picardi, Daniel Omeiza, Lars Kunze, Ibrahim Habli

First submitted to arXiv on: 23 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same AI paper but is written at a different level of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
Machine learning models used in autonomous systems must maintain their performance guarantees even when faced with post-deployment changes, such as environmental shifts or system updates. To ensure this, it is essential to detect when model performance at runtime poses a safety risk, which is particularly challenging when ground truth data is unavailable. Our paper presents a process for creating safety monitors for ML components using degraded datasets and machine learning. We deploy these monitors alongside the ML component to predict the safety risk associated with its output. Our initial experiments using publicly available speed sign datasets demonstrate the feasibility of our approach, which has implications for the reliability and trustworthiness of autonomous systems. A code sketch illustrating this kind of monitor follows the summaries below.

Low Difficulty Summary (original content by GrooveSquid.com)
Autonomous systems are getting smarter, but they still need to be super careful not to make mistakes that can hurt people or things. To do this, they need to know when a machine learning model is about to go wrong. This is hard because sometimes we don’t have the correct answers available to check whether the model is working. Our research introduces a new way to create safety checks for these models using imperfect, degraded data sets and machine learning. We show that this approach works by testing it on real-world data from speed limit signs.

Keywords

» Artificial intelligence  » Machine learning