

Efficient Sign-Based Optimization: Accelerating Convergence via Variance Reduction

by Wei Jiang, Sifan Yang, Wenhao Yang, Lijun Zhang

First submitted to arXiv on: 1 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Optimization and Control (math.OC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on arXiv.
Medium Difficulty Summary (written by GrooveSquid.com, original content)
Sign stochastic gradient descent (signSGD) is a communication-efficient method that transmits only the sign of stochastic gradients. This paper introduces the Sign-based Stochastic Variance Reduction (SSVR) method, which employs variance reduction estimators to track gradients and uses their signs to update the parameters, improving the existing O(d^{1/2}T^{-1/4}) convergence rate to O(d^{1/2}T^{-1/3}), where d is the dimension and T is the number of iterations. The authors also investigate heterogeneous majority vote in distributed settings and introduce two novel algorithms that outperform previous results, with convergence rates of O(m^{1/4}d^{1/2}T^{-1/2}), O(d^{1/2}T^{-1/2} + dn^{-1/2}), and O(d^{1/4}T^{-1/4}). Numerical experiments validate the effectiveness of the proposed methods.
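
The summaries do not include pseudocode, so here is a rough sketch of the SSVR idea in plain Python: keep a variance-reduced estimate of the gradient and move the parameters along its sign. The STORM-style momentum estimator, the toy quadratic objective, and every function name and hyperparameter below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def grad_with_sample(x, xi):
    """Toy stochastic gradient of f(x) = 0.5 * ||x||^2 with additive noise xi."""
    return x + xi

def ssvr_sketch(dim=10, T=1000, lr=0.01, beta=0.1, noise=0.1, seed=0):
    """Sign-based update driven by a variance-reduced gradient estimator.

    The estimator here is a STORM-style recursive momentum term evaluated twice
    on the same sample; the actual SSVR method may use a different estimator
    and step-size schedule (illustrative assumption only).
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(dim)
    v = grad_with_sample(x, noise * rng.standard_normal(dim))  # initial estimate
    for _ in range(T):
        x_prev = x
        x = x - lr * np.sign(v)                  # update with the estimator's sign
        xi = noise * rng.standard_normal(dim)    # one fresh sample per iteration
        # variance-reduced tracking: reuse the same sample at x and x_prev
        v = grad_with_sample(x, xi) + (1 - beta) * (v - grad_with_sample(x_prev, xi))
    return x

print(np.linalg.norm(ssvr_sketch()))  # distance to the optimum shrinks on this toy problem
```

Note that only sign(v) ever enters the update, which is what makes sign-based methods cheap to communicate.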
Low Difficulty Summary (written by GrooveSquid.com, original content)
SignSGD is a communication-efficient method that sends only the sign of each gradient, which keeps communication costs low. This paper improves on it with SSVR, which uses variance reduction to track gradients and updates the parameters with the signs of those estimates, reaching a better convergence rate than before: O(d^{1/2}T^{-1/3}) instead of O(d^{1/2}T^{-1/4}). The authors also study distributed settings, where many machines combine their gradient signs by majority vote. Two new algorithms do even better there, with rates of O(m^{1/4}d^{1/2}T^{-1/2}), O(d^{1/2}T^{-1/2} + dn^{-1/2}), and O(d^{1/4}T^{-1/4}). Numerical experiments show that the methods work in practice.
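
The heterogeneous majority vote mentioned in both summaries can likewise be illustrated with a minimal sketch of the communication pattern: each node sends only the signs of its gradient, and the server steps along the element-wise majority of those signs. The toy objective, node noise levels, and function names are assumptions; the summaries do not specify how the paper's two new algorithms refine this basic scheme.

```python
import numpy as np

def majority_vote_step(x, node_grads, lr=0.01):
    """One server update for sign-based distributed training.

    Each node transmits only the sign of its (possibly heterogeneous) stochastic
    gradient; the server takes the element-wise majority vote, i.e. the sign of
    the summed signs, and moves the parameters in that direction.
    """
    signs = np.stack([np.sign(g) for g in node_grads])  # 1 bit per coordinate per node
    vote = np.sign(signs.sum(axis=0))                   # element-wise majority vote
    return x - lr * vote

# Toy usage: five nodes with different noise levels on a shared quadratic objective.
rng = np.random.default_rng(0)
x = rng.standard_normal(10)
for _ in range(500):
    node_grads = [x + (0.1 + 0.05 * i) * rng.standard_normal(10) for i in range(5)]
    x = majority_vote_step(x, node_grads)
print(np.linalg.norm(x))  # should shrink toward 0 on this toy problem
```

Because each node transmits one sign per coordinate per round, the communication cost does not depend on gradient magnitudes, which is the appeal of majority-vote signSGD in distributed settings.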

Keywords

  • Artificial intelligence
  • Stochastic gradient descent