
From Displacements to Distributions: A Machine-Learning Enabled Framework for Quantifying Uncertainties in Parameters of Computational Models

by Taylor Roper, Harri Hakula, Troy Butler

First submitted to arxiv on: 4 Mar 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. The summaries below all cover the same AI paper and are written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com original content)
This paper presents novel extensions for combining two frameworks to quantify both aleatoric (irreducible) and epistemic (reducible) sources of uncertainty in engineered systems. The data-consistent (DC) framework poses and solves an inverse problem to quantify aleatoric uncertainties, while the Learning Uncertain Quantities (LUQ) framework defines a machine-learning enabled process for transforming noisy datasets into samples of a learned Quantity of Interest (QoI) map. LUQ incorporates a robust filtering step that learns the most useful information from spatio-temporal datasets and iterates as new data is obtained over time. The paper also develops a DC-based inversion scheme with quantitative diagnostics for evaluating the quality and impact of the inversion at each iteration; a minimal illustrative sketch of this kind of update follows these summaries.

Low Difficulty Summary (GrooveSquid.com original content)
This paper helps us understand how to better predict and manage uncertainty in complex systems like machines or buildings. It combines two approaches to account for both random (aleatoric) and learnable (epistemic) types of uncertainty. One approach, called DC, solves an inverse problem to quantify the random uncertainty. The other approach, LUQ, uses machine learning to transform noisy data into useful information. The paper also shows how to use these approaches together to improve predictions over time as more data is collected.

Keywords

  • Artificial intelligence
  • Machine learning