
Summary of Dependable Distributed Training of Compressed Machine Learning Models, by Francesco Malandrino, Giuseppe Di Giacomo, Marco Levorato, and Carla Fabiana Chiasserini


Dependable Distributed Training of Compressed Machine Learning Models

by Francesco Malandrino, Giuseppe Di Giacomo, Marco Levorato, Carla Fabiana Chiasserini

First submitted to arXiv on: 22 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper but is written at a different level of difficulty. The medium and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper’s original abstract, written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
The existing research on distributed training of machine learning models overlooks the distribution of the achieved learning quality, focusing solely on its average value. This leads to unreliable models whose performance may be worse than expected. To address this gap, we propose DepL, a framework for dependable learning orchestration that makes decisions on data selection, model choice, and resource allocation. Our approach considers the various available models, including full DNNs and their compressed versions. Unlike previous studies, DepL guarantees that a target learning quality is reached with a target probability, while minimizing training costs. We prove that DepL has a constant competitive ratio and polynomial complexity, and show that it outperforms the state of the art by over 27% while closely matching the optimum.
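
To make the idea in the medium summary more concrete, here is a minimal, hypothetical Python sketch of one ingredient of such an orchestrator: picking the cheapest model variant (full or compressed) whose estimated probability of reaching the target learning quality meets a target probability. All names (ModelOption, pick_cheapest_dependable) and all numbers are illustrative assumptions; this is not the paper's actual DepL algorithm, which also handles data selection and resource allocation and comes with a constant competitive ratio.

```python
# Hypothetical illustration only: a toy "cheapest model that meets a
# reliability target" rule, not the DepL algorithm from the paper.
from dataclasses import dataclass
from typing import Optional, Sequence


@dataclass
class ModelOption:
    name: str             # e.g., a full DNN or one of its compressed versions
    quality_prob: float   # estimated probability of reaching the target quality
    training_cost: float  # estimated cost of training this option


def pick_cheapest_dependable(options: Sequence[ModelOption],
                             target_prob: float) -> Optional[ModelOption]:
    """Return the cheapest option whose estimated probability of reaching the
    target learning quality is at least target_prob, or None if none qualifies."""
    feasible = [o for o in options if o.quality_prob >= target_prob]
    return min(feasible, key=lambda o: o.training_cost) if feasible else None


if __name__ == "__main__":
    candidates = [
        ModelOption("full DNN", quality_prob=0.97, training_cost=10.0),
        ModelOption("pruned DNN", quality_prob=0.92, training_cost=4.0),
        ModelOption("quantized DNN", quality_prob=0.85, training_cost=2.5),
    ]
    choice = pick_cheapest_dependable(candidates, target_prob=0.90)
    print(choice.name if choice else "no option meets the reliability target")
```

In this toy setting the pruned DNN is selected: it is the cheapest candidate whose estimated success probability still meets the 0.90 target, which mirrors the paper's theme of trading training cost against a probabilistic quality guarantee.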
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about making machine learning models more reliable. Right now, researchers only look at how well a model performs on average, but not how it performs in different situations. This can lead to models that don’t work well when you need them to. The authors propose a new way of training models called DepL, which helps choose the right data and models for the job. They tested their approach and found it works better than current methods by over 27%. This is important because reliable machine learning models can be used in many areas, such as healthcare or finance.

Keywords

  • Artificial intelligence
  • Machine learning
  • Probability