
Between Randomness and Arbitrariness: Some Lessons for Reliable Machine Learning at Scale

by A. Feder Cooper

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Computers and Society (cs.CY); Machine Learning (stat.ML)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each of the summaries below covers the same paper but is written at a different level of difficulty. The medium- and low-difficulty versions are original summaries written by GrooveSquid.com, while the high-difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com; original content)
This dissertation tackles the challenge of developing reliable measurements for machine learning (ML) models and their applications. To that end, it proposes criteria for designing meaningful metrics, along with methodologies for measuring those metrics efficiently at scale. The research vision it outlines brings ML, law, and policy together into a new field of scholarship. Specifically, the dissertation explores three themes: quantifying arbitrariness in ML, taming randomness in uncertainty estimation and optimization algorithms, and evaluating generative-AI systems (a small illustrative sketch of the first theme appears after the summaries below). Through its contributions to these areas, the dissertation shows that reliable measurement for ML is deeply connected to research in law and policy.
Low Difficulty Summary (written by GrooveSquid.com; original content)
This study looks at how we can measure machine learning models accurately. It’s like trying to get a good picture of what’s going on inside a complex system. The researchers want to make sure their measurements are reliable, not just one time, but also when the model is used many times or with lots of data. They’re working together with experts in law and policy to figure out how to measure ML models in a way that aligns with important values like fairness and transparency.
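To make the first theme more concrete: one common way to quantify how much a model's behavior depends on arbitrary randomness is to retrain the same model several times with different random seeds and measure how often the retrained models disagree on individual test examples. The sketch below is a minimal illustration of that idea, not the dissertation's own method; the dataset, model, and seed count are all illustrative assumptions.

```python
# Minimal sketch (illustrative only): retrain the same model under several
# random seeds and measure how often predictions flip across seeds.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data stands in for whatever task is being measured.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Retrain the same model configuration under several random seeds.
predictions = []
for seed in range(10):
    model = RandomForestClassifier(n_estimators=50, random_state=seed)
    model.fit(X_train, y_train)
    predictions.append(model.predict(X_test))
predictions = np.array(predictions)  # shape: (n_seeds, n_test_points)

# A test point is "arbitrary" here if different seeds assign it different labels.
disagreement = np.mean(predictions.min(axis=0) != predictions.max(axis=0))
print(f"Fraction of test points where seeds disagree: {disagreement:.3f}")
```

If the disagreement fraction is large, individual predictions depend heavily on an arbitrary choice of seed rather than on the data, which is exactly the kind of effect a reliable measurement pipeline needs to surface.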

Keywords

  • Artificial intelligence
  • Machine learning
  • Optimization