Robust deep learning from weakly dependent data

by William Kengne and Modou Wade

First submitted to arXiv on: 8 May 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Statistics Theory (math.ST)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty summary is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper studies the theoretical properties of deep neural network estimators trained on weakly dependent observations, allowing unbounded loss functions and unbounded input/output variables. Building on previous work, it establishes non-asymptotic bounds on the expected excess risk of these estimators under strong mixing and ψ-weak dependence assumptions, and shows how the bounds depend on the order r of the moments of the output variable. When r is infinite, the convergence rate recovers known results. When the target predictor belongs to the class of Hölder smooth functions with sufficiently large smoothness, the rate of the expected excess risk for exponentially strongly mixing data matches the rate obtained for i.i.d. samples. These results are applied to robust nonparametric regression and autoregression, where the absolute loss and Huber loss functions are shown to outperform least squares.
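
To make the medium difficulty summary concrete, below is a minimal sketch (not the authors' code) of the kind of experiment it describes: a small feedforward network fit to weakly dependent, unbounded data by minimizing the Huber loss rather than the squared loss. The AR(1) data-generating process, the Student-t innovations, the network architecture, and the Huber parameter delta are all illustrative assumptions.

import torch

torch.manual_seed(0)

# Simulate a weakly dependent series X_t = 0.5 * X_{t-1} + e_t with
# heavy-tailed Student-t innovations, so the observations are dependent
# and the output variable is unbounded.
n = 2000
noise = torch.distributions.StudentT(df=2.5).sample((n,))
x = torch.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + noise[t]

# One-step-ahead autoregression: predict X_t from X_{t-1}.
inputs, targets = x[:-1].unsqueeze(1), x[1:].unsqueeze(1)

def fit(loss_fn):
    # Train a small ReLU network under the given loss function.
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = loss_fn(net(inputs), targets)
        loss.backward()
        opt.step()
    return net

net_huber = fit(torch.nn.HuberLoss(delta=1.0))  # robust loss
net_ls = fit(torch.nn.MSELoss())                # least squares

# Compare median absolute prediction errors; a robust metric, since the
# mean squared error would itself be dominated by the heavy tails.
with torch.no_grad():
    for name, net in [("Huber", net_huber), ("least squares", net_ls)]:
        err = (net(inputs) - targets).abs().median().item()
        print(f"{name}: median |error| = {err:.3f}")

On data like this, the Huber fit is typically less distorted by extreme innovations than the least squares fit, which is the qualitative behavior the paper establishes guarantees for.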
Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper looks at how deep learning works when the data points are connected to each other instead of being independent. It shows how well models can still perform in this situation, even when the data is unbounded and can take extreme values. The study also explains how the smoothness of the function being learned affects the results, and how those results compare to the ideal case of perfectly independent data. Finally, it uses these ideas to build better models for nonparametric regression and autoregression.

Keywords

» Artificial intelligence  » Deep learning  » Neural network  » Regression