


From Silos to Systems: Process-Oriented Hazard Analysis for AI Systems

by Shalaleh Rismani, Roel Dobbe, AJung Moon

First submitted to arXiv on: 29 Oct 2024

Categories

  • Main: Artificial Intelligence (cs.AI)
  • Secondary: Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (paper authors)
The high difficulty version is the paper's original abstract, which can be read on arXiv.

Medium Difficulty Summary (GrooveSquid.com original content)
As AI systems become increasingly prevalent, it’s crucial to identify and mitigate their potential harms. Current approaches analyze individual components in isolation, overlooking system-level hazards. This paper draws from the established field of system safety, which treats safety as an emergent property of the entire system. The authors translate System-Theoretic Process Analysis (STPA) to the operation and development processes of AI systems, focusing on systems that rely on machine learning algorithms. They conduct STPA on three case studies involving linear regression, reinforcement learning, and transformer-based generative models. The analysis explores how control and system-theoretic perspectives apply to AI systems and whether unique AI traits require modifications to the framework. The authors find that STPA’s key concepts and steps readily apply with adaptations for AI, and they introduce Process-oriented Hazard Analysis for AI Systems (PHASE) as a guideline. PHASE enables analysts to detect hazards at the systems level, acknowledge social factors contributing to algorithmic harms, create traceable accountability chains, and monitor ongoing hazards.
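For readers unfamiliar with STPA, the framework follows a small set of well-documented steps: define losses and system-level hazards, model the control structure, identify unsafe control actions, and derive loss scenarios. The sketch below illustrates those steps as plain Python data for a hypothetical ML deployment; all names and example entries are invented for illustration and are not taken from the paper's case studies.

```python
from dataclasses import dataclass

# STPA's core steps sketched as plain data. Every example entry below
# is hypothetical, not drawn from the paper's three case studies.

@dataclass
class ControlAction:
    controller: str   # who issues the action
    process: str      # what the action acts on
    action: str

@dataclass
class UnsafeControlAction:
    action: ControlAction
    flaw: str         # e.g. "not provided", "provided unsafely", "wrong timing"
    hazard_id: str    # the system-level hazard it can lead to

# Step 1: define losses and system-level hazards.
losses = ["L1: Users receive harmful model outputs"]
hazards = {"H1": "Deployed model operates outside its validated input distribution"}

# Step 2: model (a tiny fragment of) the control structure.
retrain = ControlAction("ML ops team", "training pipeline", "trigger retraining")

# Step 3: identify unsafe control actions against the hazards.
ucas = [
    # Input data drifts, but retraining is never triggered.
    UnsafeControlAction(retrain, "not provided", "H1"),
]

# Step 4 (loss scenarios) would then ask *why* each unsafe control action
# could occur, e.g. the drift-monitoring feedback loop is missing or delayed.
for uca in ucas:
    print(f"{uca.action.controller}: '{uca.action.action}' "
          f"{uca.flaw} -> {uca.hazard_id}")
```

The value of framing the analysis this way is that each identified hazard stays traceable back to a specific controller and feedback path, which is what enables the accountability chains the paper describes.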
Low Difficulty Summary (GrooveSquid.com original content)
AI systems can cause harm if not designed or used correctly. This paper looks at how to make AI safer by using a framework called System-Theoretic Process Analysis (STPA). STPA is usually used for physical systems, but the authors adapted it for AI. They tested STPA on three different types of AI models: linear regression, reinforcement learning, and transformer-based generative models. The results showed that STPA can be useful for identifying potential problems with AI systems. The paper also introduces a new approach called PHASE (Process-oriented Hazard Analysis for AI Systems) that helps analysts identify and fix safety issues in AI.

Keywords

» Artificial intelligence  » Linear regression  » Machine learning  » Reinforcement learning  » Transformer