Multi-Criteria Comparison as a Method of Advancing Knowledge-Guided Machine Learning

by Jason L. Harman, Jaelle Scheuerman

First submitted to arXiv on: 18 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This paper proposes a novel method for evaluating AI/ML models against multiple criteria, including scientific principles and practical outcomes. The approach grew out of prediction competitions in Psychology and Decision Science, where it was used to evaluate multiple candidate models of varying types and structures. The evaluation process ordinally ranks models' scores on each criterion and aggregates the rankings using voting rules from computational social choice, enabling comparison across diverse models and measures. The paper also discusses further advantages and applications of this holistic evaluation method.

Low Difficulty Summary (written by GrooveSquid.com; original content)
This paper helps us better understand how to test and compare artificial intelligence (AI) and machine learning (ML) models. The authors developed a new way to evaluate models against different criteria, such as scientific principles and practical outcomes. They tested this approach in prediction competitions with experts from psychology and decision science. The method can be used to compare many different kinds of models and help us understand what works well and what doesn't.

Keywords

  • Artificial intelligence
  • Machine learning