
Summary of NeRCC: Nested-Regression Coded Computing for Resilient Distributed Prediction Serving Systems, by Parsa Moradi et al.


NeRCC: Nested-Regression Coded Computing for Resilient Distributed Prediction Serving Systems

by Parsa Moradi, Mohammad Ali Maddah-Ali

First submitted to arXiv on: 6 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Distributed, Parallel, and Cluster Computing (cs.DC); Information Theory (cs.IT)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors): the paper's original abstract.

Medium Difficulty Summary (written by GrooveSquid.com, original content):
The proposed framework, NeRCC, aims to enhance resilience against stragglers in prediction serving systems. It takes a three-layered approach: an encoding layer based on regression and sampling, a computing layer that runs inference on the coded data points, and a decoding layer based on regression and sampling that approximately recovers the original predictions. The framework relies on two interconnected regression models, which are jointly optimized using regularization terms. NeRCC demonstrates superior performance over state-of-the-art methods across various machine learning models, including LeNet5, RepVGG, and Vision Transformer (ViT), with improvements of up to 23%.
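The encode/compute/decode pipeline described above can be illustrated with a minimal sketch. Note the assumptions: this uses plain polynomial fits (`numpy.polyfit`) in place of NeRCC's jointly optimized, regularized nested-regression models, and a toy scalar function `f(x) = x**2` in place of a real ML model; the evaluation points `alphas`/`betas` are hypothetical choices, not the paper's.

```python
# Illustrative sketch of regression-based coded computing for straggler
# resilience. NOT the NeRCC algorithm itself: NeRCC uses two jointly
# optimized, regularized regression models, while this toy uses polyfit.
import numpy as np

def encode(data, alphas, betas):
    """Fit a regression through (alpha_i, x_i) and sample it at the
    beta_j points to produce one coded input per worker."""
    coeffs = np.polyfit(alphas, data, deg=len(data) - 1)
    return np.polyval(coeffs, betas)

def decode(betas_returned, y_returned, alphas, deg):
    """Fit a regression through the outputs of the workers that did
    respond, then evaluate it at the original encoding points to
    approximately recover the uncoded predictions."""
    coeffs = np.polyfit(betas_returned, y_returned, deg=deg)
    return np.polyval(coeffs, alphas)

# Toy setup: K = 3 original inputs, N = 8 workers, f(x) = x**2.
f = lambda x: x ** 2
data = np.array([0.1, 0.5, 0.9])
alphas = np.linspace(-1.0, 1.0, len(data))   # encoding points
betas = np.linspace(-1.0, 1.0, 8)            # one point per worker

coded = encode(data, alphas, betas)
outputs = f(coded)                           # each worker evaluates f

# Suppose workers 2 and 5 straggle: decode from the remaining six.
ok = [0, 1, 3, 4, 6, 7]
approx = decode(betas[ok], outputs[ok], alphas, deg=4)
exact = f(data)
```

Here `f` composed with the degree-2 encoding polynomial is itself a degree-4 polynomial, so six responsive workers suffice to recover the predictions almost exactly; with a general (non-polynomial) model the recovery is approximate, which is where the paper's regularized regression design matters.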
Low Difficulty Summary (written by GrooveSquid.com, original content):
Imagine you have a special kind of computer that can make predictions based on the data it gets. Sometimes these computers get slow or stuck, which is bad because they’re not making accurate predictions anymore. A team of researchers came up with a new way to make these computers more reliable by creating a system called NeRCC. This system has three parts: first, it takes the original data and makes it look like random noise, then it lets multiple computers work on this noisy data, and finally, it figures out how to turn the results back into accurate predictions. The researchers tested this system with different kinds of data and computer models, and it did much better than other systems that do something similar.

Keywords

* Artificial intelligence  * Inference  * Machine learning  * Regression  * Regularization  * Vision transformer (ViT)