


On Parameter Estimation in Deviated Gaussian Mixture of Experts

by Huy Nguyen, Khai Nguyen, Nhat Ho

First submitted to arXiv on: 7 Feb 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper’s original abstract; see the link to the paper above.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The authors investigate parameter estimation in a deviated Gaussian mixture of experts, where data are assumed to be generated from a combination of a known density function and a Gaussian mixture of experts with unknown parameters. This setting arises in hypothesis testing, where one asks whether data follow a known distribution or deviate toward a more complex mixture model. To tackle this challenge, the authors design novel Voronoi-based loss functions that capture the convergence rates of maximum likelihood estimation (MLE) for these models. Compared to the commonly used generalized Wasserstein loss function, these new loss functions characterize the local convergence rates more accurately.
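To make the setting concrete, a deviated mixture density of this kind can be sketched in the following schematic form. The notation below is illustrative only, not the paper's exact formulation:

```latex
% Illustrative sketch: a known baseline density h_0 mixed with a
% K-component Gaussian mixture of experts, with deviation proportion \lambda^*.
p(Y \mid X) \;=\; (1 - \lambda^{*})\, h_{0}(Y \mid X)
\;+\; \lambda^{*} \sum_{k=1}^{K} \pi_{k}(X)\,
\mathcal{N}\!\big(Y \mid \mu_{k}(X),\, \sigma_{k}^{2}\big)
```

Here \(h_{0}\) is the known function, \(\lambda^{*} \in [0, 1]\) controls how far the data deviate from it, and \(\pi_{k}(X)\) are gating weights. Testing \(\lambda^{*} = 0\) against \(\lambda^{*} > 0\) corresponds to asking whether the data follow the known distribution or the more complex mixture.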
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores how to estimate parameters in a special type of mixture model. This model combines a known function with several unknown Gaussian components. The authors want to know whether the data come from the known function alone or from the full combination. They create new “Voronoi-based” loss functions that help us understand how fast we can get good estimates of these parameters. These new loss functions work better than another popular tool called the generalized Wasserstein loss function.
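For intuition, here is a small simulation sketch of the kind of data this model describes: a known baseline (taken to be a standard normal for illustration) mixed with a two-component Gaussian mixture. All names and parameter values are hypothetical and not taken from the paper:

```python
import numpy as np

def sample_deviated_mixture(n, lam, mus, sigmas, weights, rng):
    """Draw n samples from (1 - lam) * h0 + lam * sum_k weights[k] * N(mus[k], sigmas[k]^2),
    where h0 is assumed (for illustration) to be the standard normal N(0, 1)."""
    from_mixture = rng.random(n) < lam                    # True -> draw from the unknown mixture
    k = rng.choice(len(weights), size=n, p=weights)       # mixture component index per draw
    baseline = rng.normal(0.0, 1.0, size=n)               # draws from the known baseline h0
    mixture = rng.normal(np.take(mus, k), np.take(sigmas, k))  # draws from the Gaussian mixture
    return np.where(from_mixture, mixture, baseline)

rng = np.random.default_rng(0)
x = sample_deviated_mixture(10_000, lam=0.3, mus=[-2.0, 3.0],
                            sigmas=[0.5, 1.0], weights=[0.4, 0.6], rng=rng)
print(x.shape)  # (10000,)
```

Estimating `lam`, `mus`, `sigmas`, and `weights` from samples like `x` via maximum likelihood is the problem the paper studies; the Voronoi-based losses quantify how fast those estimates converge to the true values.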

Keywords

* Artificial intelligence  * Likelihood  * Loss function  * Mixture model  * Mixture of experts