Scalable Multi-Output Gaussian Processes with Stochastic Variational Inference

by Xiaoyu Jiang, Sokratia Georgaka, Magnus Rattray, Mauricio A. Alvarez

First submitted to arXiv on: 2 Jul 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Machine Learning (stat.ML)

GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (written by GrooveSquid.com, original content)
The Multi-Output Gaussian Process (MOGP) is a powerful tool for modeling data from multiple sources. The Linear Model of Coregionalization (LMC) is a common way to build a covariance function for MOGPs, but it has limitations. The Latent Variable MOGP (LV-MOGP) generalizes this idea by modeling the covariance between outputs with kernels applied to latent variables, which allows efficient generalization to new outputs that have only a few data points. However, the computational complexity of the LV-MOGP grows linearly with the number of outputs, making it impractical for problems with very many outputs. To address this, the authors propose a stochastic variational inference approach for LV-MOGPs that allows mini-batches over both inputs and outputs, so the computational cost per training iteration is independent of the number of outputs.
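To make the double mini-batching concrete, here is a minimal PyTorch sketch of a latent-variable multi-output GP trained by subsampling both inputs and outputs at every step. This is an illustration under simplifying assumptions, not the authors' implementation: it uses an exact Gaussian marginal likelihood on each mini-batch as a stand-in for the paper's variational bound, and it omits inducing points and the variational distribution over the latent variables. All names, sizes, and hyperparameters are hypothetical.

```python
import torch

torch.manual_seed(0)

# Toy multi-output data: D outputs observed at N shared inputs (hypothetical sizes).
D, N, Q = 50, 100, 2                                # outputs, inputs, latent dim
X = torch.linspace(0, 1, N).unsqueeze(-1)           # (N, 1) shared inputs
Y = torch.sin(6 * X) + 0.1 * torch.randn(N, D)      # (N, D) toy observations

# One latent vector h_d per output; the output-space kernel acts on these.
# (In the paper these carry a variational distribution; here they are point estimates.)
H = torch.randn(D, Q, requires_grad=True)

# Kernel hyperparameters on the log scale, so they stay positive.
log_ls_x = torch.zeros(1, requires_grad=True)       # input lengthscale
log_ls_h = torch.zeros(1, requires_grad=True)       # latent-space lengthscale
log_noise = torch.tensor(-2.0, requires_grad=True)  # observation noise

def rbf(A, B, log_ls):
    """Squared-exponential kernel between row sets A and B."""
    d2 = torch.cdist(A, B).pow(2)
    return torch.exp(-0.5 * d2 / torch.exp(2.0 * log_ls))

opt = torch.optim.Adam([H, log_ls_x, log_ls_h, log_noise], lr=0.01)
n_batch, d_batch = 20, 10                           # mini-batch sizes

for step in range(500):
    # Subsample BOTH inputs and outputs, so the per-step cost depends only
    # on the mini-batch sizes, not on N or D.
    i = torch.randperm(N)[:n_batch]
    j = torch.randperm(D)[:d_batch]
    Xb, Hb, Yb = X[i], H[j], Y[i][:, j]

    # Product covariance K[(x,d),(x',d')] = k_x(x,x') * k_h(h_d, h_d'),
    # realised as a Kronecker product on the mini-batch.
    Kx = rbf(Xb, Xb, log_ls_x) + 1e-5 * torch.eye(n_batch)
    Kh = rbf(Hb, Hb, log_ls_h) + 1e-5 * torch.eye(d_batch)
    K = torch.kron(Kx, Kh) + torch.exp(log_noise) * torch.eye(n_batch * d_batch)

    # Negative Gaussian log-likelihood of the mini-batch (up to a constant),
    # standing in for the ELBO of the actual method.
    y = Yb.reshape(-1, 1)                           # input-major flattening matches kron
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y, L)
    nll = 0.5 * (y.T @ alpha).squeeze() + torch.log(torch.diagonal(L)).sum()

    opt.zero_grad()
    nll.backward()
    opt.step()
    if step % 100 == 0:
        print(f"step {step:4d}  nll per point {nll.item() / (n_batch * d_batch):.3f}")
```

Because each step only ever touches an n_batch × d_batch block of the data, the per-iteration cost is independent of the total numbers of inputs and outputs, which is the property the paper's stochastic variational scheme achieves for the full LV-MOGP bound.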
Low Difficulty Summary (written by GrooveSquid.com, original content)
The Multi-Output Gaussian Process is a special kind of tool for understanding data that comes from multiple sources; it is a bit like figuring out how different people are related. The paper talks about two ways to do this. One, called the LMC, works well for some problems but not all. The other, called the LV-MOGP, is more flexible and can share what it learns across many outputs (like understanding many people at once), but training it gets slow when there are many outputs. To fix this problem, the researchers came up with a new way to train the LV-MOGP that makes it much faster.

Keywords

  • Artificial intelligence
  • Generalization
  • Inference