Summary of Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing, by Yizhak Elboher et al.
Robustness Assessment of a Runway Object Classifier for Safe Aircraft Taxiing
by Yizhak Elboher, Raya Elsaleh, Omri Isac, Mélanie Ducoffe, Audrey Galametz, Guillaume Povéda, Ryma Boumazouza, Noémie Cohen, Guy Katz
First submitted to arXiv on: 8 Jan 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Machine Learning (cs.LG); Logic in Computer Science (cs.LO)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | This case-study paper applies formal verification to a safety-critical use of deep neural networks (DNNs) in aviation. Specifically, the authors assess the robustness of an Airbus-developed image-classifier DNN for aircraft taxiing against three common image perturbation types: noise, brightness, and contrast, as well as their combinations. To reduce the computational cost of formal verification, they propose a method that leverages monotonicity and past verification results, cutting the number of verification queries by nearly 60% (see the sketch after this table). The study indicates that the classifier is more vulnerable to noise than to brightness or contrast perturbations. |
Low | GrooveSquid.com (original content) | The paper shows how deep learning can be used in aviation to make flying safer. It uses a technique called formal verification to check whether an image-classifier neural network still works correctly when its input images are changed in certain ways. The authors test the network with different types of image changes, like adding noise or altering brightness and contrast. They find that the network is more likely to make mistakes when noise is added to the images. This study helps ensure that DNNs used in aviation can be trusted. |
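The monotonicity idea behind the query reduction lends itself to a short illustration. Below is a minimal Python sketch of that idea, not the authors' implementation: `verify_robust` is a hypothetical oracle standing in for a call into a DNN verifier (e.g., a tool such as Marabou) that answers whether the classifier keeps its prediction under all perturbations up to magnitude `eps`. Because robustness at one magnitude implies robustness at every smaller magnitude, a binary search over the sorted perturbation levels settles all of them with O(log n) oracle calls instead of n.

```python
from typing import Callable, Dict, List

def robustness_per_level(
    eps_levels: List[float],
    verify_robust: Callable[[float], bool],
) -> Dict[float, bool]:
    """Decide robust/violated for every perturbation level with as few
    verifier queries as possible.

    Monotonicity assumption: robustness at magnitude eps implies
    robustness at every smaller magnitude, and a counterexample at eps
    implies one at every larger magnitude. Binary search over the sorted
    levels therefore locates the robust/violated boundary in O(log n)
    queries instead of n.
    """
    levels = sorted(eps_levels)
    lo, hi = 0, len(levels) - 1
    first_violated = len(levels)  # index of the smallest violated level
    while lo <= hi:
        mid = (lo + hi) // 2
        if verify_robust(levels[mid]):
            lo = mid + 1          # robust here => robust at all smaller levels
        else:
            first_violated = mid  # violated here => violated at all larger levels
            hi = mid - 1
    return {eps: i < first_violated for i, eps in enumerate(levels)}

if __name__ == "__main__":
    # Toy oracle standing in for a real verification query; pretend the
    # classifier's true robustness threshold is eps = 0.03.
    toy_oracle = lambda eps: eps <= 0.03
    print(robustness_per_level([0.01, 0.02, 0.03, 0.04, 0.05], toy_oracle))
    # -> {0.01: True, 0.02: True, 0.03: True, 0.04: False, 0.05: False}
```

In this toy run, two verifier calls settle all five levels. The paper's actual scheme also reuses results across related queries (e.g., across perturbation combinations); this sketch captures only the single-axis monotonicity case.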
Keywords
- Artificial intelligence
- Deep learning
- Neural network