


A Bayesian Mixture Model of Temporal Point Processes with Determinantal Point Process Prior

by Yiwei Dong, Shaoxin Ye, Yuwen Cao, Qiyu Han, Hongteng Xu, Hanfang Yang

First submitted to arXiv on: 7 Nov 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary
Written by the paper authors. This version is the paper's original abstract.

Medium Difficulty Summary
Written by GrooveSquid.com (original content).
This paper proposes a novel approach to asynchronous event sequence clustering using Bayesian mixture models of Temporal Point Processes with Determinantal Point Process priors (TP2DP2). The proposed method aims to group similar event sequences in an unsupervised manner, while avoiding overfitting and excessive cluster generation. The authors introduce an efficient posterior inference algorithm based on conditional Gibbs sampling, enabling automatic identification of the potential number of clusters and accurate grouping of sequences with similar features. The TP2DP2 framework is applicable to a wide range of parametric temporal point processes, including neural network-based models. Experimental results on both synthetic and real-world data demonstrate the effectiveness of the proposed method in producing moderately fewer yet more diverse mixture components, achieving outstanding results across multiple evaluation metrics.

Low Difficulty Summary
Written by GrooveSquid.com (original content).
This paper helps us group similar event sequences without needing to tell it how many groups to make or what each group looks like. It uses a special kind of math called Bayesian mixture models to find patterns in the data and put similar things together. The method is flexible and can work with different types of data, including those that involve artificial intelligence. When tested on real-world data, the approach did well at finding meaningful groups and avoiding too many or too few groups. This could be useful in many areas where event sequences are important, such as analyzing human behavior or financial transactions. 

Keywords

» Artificial intelligence  » Clustering  » Inference  » Neural network  » Overfitting  » Unsupervised