Training Implicit Networks for Image Deblurring using Jacobian-Free Backpropagation

by Linghai Liu, Shuaicheng Tong, Lisa Zhao

First submitted to arXiv on: 3 Feb 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
This paper presents recent advances in using implicit networks to solve inverse problems in imaging, where they match or outperform feedforward networks. Implicit networks are attractive because they require only constant memory during backpropagation, but training them is computationally expensive: computing exact gradients requires solving large linear systems involving the network's Jacobian. The paper investigates Jacobian-free Backpropagation (JFB), a method that circumvents these linear solves and reduces the cost of training; a minimal sketch of the idea follows these summaries. Experimental results on image deblurring show that JFB compares favorably with fine-tuned optimization schemes, state-of-the-art feedforward networks, and existing implicit networks.

Low Difficulty Summary (written by GrooveSquid.com, original content)
This paper explores a new way to solve image deblurring problems using a special type of computer model called an implicit network. Implicit networks are good at these problems because they don't use much memory, but they can be tricky to train. To make training easier, the authors study a method called Jacobian-free Backpropagation (JFB). They test JFB and find that it works just as well as other methods while being faster to train.

Keywords

* Artificial intelligence
* Backpropagation
* Optimization