Multi-Convformer: Extending Conformer with Multiple Convolution Kernels

by Darshan Prabhu, Yifan Peng, Preethi Jyothi, Shinji Watanabe

First submitted to arXiv on: 4 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG); Sound (cs.SD); Audio and Speech Processing (eess.AS)


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper introduces Multi-Convformer, a novel architecture that uses multiple convolution kernels within the convolution module of the Conformer, in conjunction with gating, to better model local dependencies at varying granularities (a rough code sketch of this idea appears after the summaries). This approach rivals existing Conformer variants such as CgMLP and E-Branchformer in performance while being more parameter-efficient. Multi-Convformer is compared against Conformer and its variants across four different datasets and three different modeling paradigms, showing up to 8% relative word error rate (WER) improvements.
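For context on the headline number: a relative WER improvement is measured against the baseline's own error rate, not in absolute percentage points. A minimal sketch with hypothetical numbers (not taken from the paper):

```python
# Hypothetical WER values in percent, for illustration only (not from the paper).
baseline_wer = 10.0
improved_wer = 9.2

# Relative improvement: the reduction expressed as a fraction of the baseline WER.
relative = (baseline_wer - improved_wer) / baseline_wer * 100
print(f"{relative:.1f}% relative WER improvement")  # -> 8.0% relative WER improvement
```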
Low Difficulty Summary (original content by GrooveSquid.com)
The paper improves automatic speech recognition by using multiple convolution kernels within the Conformer model. This helps recognize words more accurately. It compares well with other models like CgMLP and E-Branchformer. The results are based on four different datasets and three ways of modeling.
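
To make the core idea concrete: the convolution module runs several kernels of different sizes in parallel and gates between them, letting the model choose the granularity of local context at each position. Below is a minimal PyTorch sketch of that idea; the class name MultiKernelConvModule, the kernel sizes, the depthwise convolutions, and the softmax gate are all illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class MultiKernelConvModule(nn.Module):
    """Parallel depthwise convolutions with different kernel sizes,
    combined by a learned softmax gate. Illustrative sketch only;
    names, kernel sizes, and gating design are assumptions, not the
    paper's exact module."""

    def __init__(self, dim: int, kernel_sizes=(3, 7, 15)):
        super().__init__()
        # One depthwise conv per kernel size; padding keeps sequence length.
        self.convs = nn.ModuleList(
            nn.Conv1d(dim, dim, k, padding=k // 2, groups=dim)
            for k in kernel_sizes
        )
        # Gate maps each frame's features to one weight per kernel branch.
        self.gate = nn.Linear(dim, len(kernel_sizes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        weights = torch.softmax(self.gate(x), dim=-1)          # (B, T, K)
        x = x.transpose(1, 2)                                  # (B, dim, T)
        branches = torch.stack(
            [conv(x) for conv in self.convs], dim=-1
        ).transpose(1, 2)                                      # (B, T, dim, K)
        # Gated combination: weighted sum over the K kernel branches.
        return (branches * weights.unsqueeze(2)).sum(dim=-1)   # (B, T, dim)


# Quick shape check with random input.
module = MultiKernelConvModule(dim=256)
out = module(torch.randn(4, 100, 256))  # (batch=4, time=100, features=256)
print(out.shape)                        # torch.Size([4, 100, 256])
```

A softmax gate makes the branch weights sum to one, so the module interpolates between receptive fields of different sizes rather than simply summing them.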

Keywords

  • Artificial intelligence
  • Parameter efficient