FALCON: FLOP-Aware Combinatorial Optimization for Neural Network Pruning

by Xiang Meng, Wenyu Chen, Riade Benbaki, Rahul Mazumder

First submitted to arXiv on: 11 Mar 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)

Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)

FALCON is a novel combinatorial-optimization framework for pruning neural networks so they can be deployed on resource-constrained devices. It jointly accounts for model accuracy, floating-point operations (FLOPs), and sparsity constraints, reducing computational cost while maintaining performance. The core of the approach is an integer linear program (ILP) that handles FLOP and sparsity constraints simultaneously, together with a novel algorithm for approximately solving this ILP; a first-order method is also proposed as part of the overall optimization framework. Within a fixed FLOP budget, FALCON is shown to achieve superior accuracy compared to other pruning approaches, with improvements of up to 48% relative to state-of-the-art methods.
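
To make the role of the ILP concrete, here is a minimal, generic formulation of pruning under joint FLOP and sparsity budgets. The symbols are illustrative assumptions rather than the paper's exact notation: $s_i$ is an importance score for weight $i$, $f_i$ its FLOP cost, $F$ the FLOP budget, and $k$ the number of weights allowed to remain nonzero.

$$
\max_{z \in \{0,1\}^n} \; \sum_{i=1}^{n} s_i z_i
\quad \text{subject to} \quad
\sum_{i=1}^{n} f_i z_i \le F,
\qquad
\sum_{i=1}^{n} z_i \le k,
$$

where $z_i = 1$ means weight $i$ is kept. The paper's actual objective and constraints may differ; this sketch only shows how accuracy, FLOPs, and sparsity can be coupled in a single ILP.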
Low Difficulty Summary (original content by GrooveSquid.com)

FALCON is a new way to make neural networks smaller and faster so they can run on devices without much computing power. It balances three things: how well the network performs, how many calculations it needs (measured in FLOPs), and how many of its connections can be removed (how sparse it can be) without losing performance. It does this by solving a special kind of math problem called an integer linear program (ILP), which finds the best way to shrink the network while keeping its performance good. The new approach is shown to work better than other methods at making neural networks faster and more efficient.
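
For a hands-on flavor of the same idea, below is a tiny self-contained Python sketch that treats pruning as choosing which weights to keep under joint FLOP and sparsity budgets. Everything here (the random scores and costs, the budgets, and the greedy importance-per-FLOP rule) is an illustrative assumption; FALCON itself formulates this as an ILP and solves it with a dedicated algorithm rather than a greedy pass.

```python
# Toy sketch of FLOP-aware pruning as a budgeted selection problem.
# NOT the FALCON algorithm: the scores, costs, and greedy heuristic
# below are illustrative assumptions. FALCON instead (approximately)
# solves an integer linear program over the same kind of budgets.

import numpy as np

rng = np.random.default_rng(0)

n = 20                                # number of prunable weights (toy size)
score = rng.random(n)                 # assumed per-weight importance scores
flops = rng.integers(1, 10, size=n)   # assumed per-weight FLOP costs

flop_budget = 40                      # total FLOPs we may spend
keep_budget = 8                       # max weights to keep (sparsity budget)

# Greedy heuristic: keep weights with the best importance-per-FLOP ratio
# while both budgets allow. An ILP solver searches this space jointly
# instead of committing to a single ordering.
order = np.argsort(-(score / flops))
kept, used_flops = [], 0
for i in order:
    if len(kept) < keep_budget and used_flops + flops[i] <= flop_budget:
        kept.append(i)
        used_flops += flops[i]

print(f"kept weights: {sorted(kept)}")
print(f"total importance: {score[kept].sum():.3f}")
print(f"FLOPs used: {used_flops}/{flop_budget}, "
      f"weights kept: {len(kept)}/{keep_budget}")
```

A greedy ratio rule like this is a classic heuristic for budgeted selection; the advantage of the ILP view is that both budgets are handled at once, which is the coupling the paper's framework is built around.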

Keywords

  • Artificial intelligence
  • Optimization
  • Pruning