FINDER: Stochastic Mirroring of Noisy Quasi-Newton Search and Deep Network Training

by Uttam Suman, Mariya Mamajiwala, Mukul Saxena, Ankit Tyagi, Debasish Roy

First submitted to arxiv on: 18 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed stochastic optimizer, FINDER (Filtering Informed Newton-like and Derivative-free Evolutionary Recursion), targets non-convex and possibly non-smooth objective functions over large-dimensional design spaces. It bridges noise-assisted global search with the faster local convergence of quasi-Newton methods. The algorithm exploits nonlinear stochastic filtering equations to derive a derivative-free update reminiscent of Newton's method with the inverse Hessian, and simplifications and enhancements of this update allow the cost to scale linearly with problem dimension. FINDER is applied to IEEE benchmark functions, deep networks, and physics-informed deep networks, demonstrating its promise for large-dimensional optimization problems.
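The derivative-free, filtering-style idea in the summary can be illustrated with a minimal ensemble sketch. This is not the paper's FINDER update (the exact recursion is not given here); it is a generic construction, with all names and parameters hypothetical, in which the cross-covariance between sampled candidates and their objective values acts as a gradient surrogate, so derivatives of the objective are never computed.

```python
import numpy as np

def filtering_step(f, mean, sigma, n_particles=64, rng=None):
    """One derivative-free, filtering-flavoured update (illustrative
    sketch only, not the paper's FINDER recursion).

    A cloud of noisy candidates is sampled around the current mean, and
    the cross-covariance between the candidates and their objective
    values serves as a gradient surrogate, so `f` is only ever evaluated,
    never differentiated.
    """
    rng = np.random.default_rng(rng)
    X = mean + sigma * rng.standard_normal((n_particles, mean.size))
    y = np.array([f(x) for x in X])       # noisy objective evaluations
    dX = X - X.mean(axis=0)               # centred particles
    dy = y - y.mean()                     # centred objective values
    g = dX.T @ dy / (n_particles - 1)     # Cov(x, f(x)): an ascent direction
    return mean - g / (dy.var() + 1e-12)  # move the search mean downhill
```

On a quadratic bowl, repeated application of this step drives the mean toward the minimiser; the normalisation by the objective spread makes the step size self-scaling, loosely echoing the inverse-Hessian flavour the summary mentions.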
Low Difficulty Summary (written by GrooveSquid.com, original content)
FINDER is a new way to optimize things when we can’t compute exact derivatives of what we’re optimizing. It’s like trying to find the best solution by searching around randomly and then getting closer with small adjustments. The algorithm uses some clever math tricks to make it efficient and scalable for big problems. In this paper, FINDER is tested on several examples, including deep network training and physics problems, and shows great promise.

Keywords

  • Artificial intelligence
  • Optimization