Summary of Deep Linear Hawkes Processes, by Yuxin Chang et al.
Deep Linear Hawkes Processes
by Yuxin Chang, Alex Boyd, Cao Xiao, Taha Kass-Hout, Parminder Bhatia, Padhraic Smyth, Andrew Warrington
First submitted to arxiv on: 27 Dec 2024
Categories
- Main: Machine Learning (stat.ML)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary
---|---|---
High | Paper authors | Read the original abstract here
Medium | GrooveSquid.com (original content) | This paper addresses shortcomings in existing models for marked temporal point processes (MTPPs), which are used to analyze sequences of events with irregular arrival times. By combining deep state-space models (SSMs) and linear Hawkes processes (LHPs), the authors propose the deep linear Hawkes process (DLHP) model. This novel approach modifies the linear differential equations in deep SSMs to be stochastic jump differential equations, as in LHPs. The DLHP model can be implemented efficiently using a parallel scan, enabling parallelism across sequence length and linear scaling with sequence length. In contrast, attention-based MTPPs scale quadratically, and RNN-based MTPPs do not parallelize across sequence length. Empirical results show that DLHP models match or outperform existing models on eight real-world datasets.
Low | GrooveSquid.com (original content) | This paper is about a new way to model events that happen at different times and in different ways. The old methods didn't work well, so the authors combined two types of models to make something better. This new model is called the deep linear Hawkes process (DLHP). It's like a special recipe that mixes together ideas from two other models. This recipe lets us analyze events more accurately and quickly than before. The authors tested their new model on many real-world examples and found that it works well.
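The parallel scan mentioned in the medium summary works because composing two affine updates of a linear recurrence is associative. Below is a minimal Python sketch of this idea (not the paper's actual implementation, and using a sequential loop in place of a true parallel scan): a recurrence h_k = a_k * h_{k-1} + b_k, where b_k can encode event-driven "jumps", is computed by scanning over (a, b) pairs. All names here are illustrative.

```python
import numpy as np

def combine(p, q):
    # Compose two affine maps x -> a*x + b. This operation is
    # associative, which is what makes a parallel scan possible.
    a1, b1 = p
    a2, b2 = q
    return (a2 * a1, a2 * b1 + b2)

def prefix_scan(elems):
    # Sequential reference scan; a parallel (Blelloch-style) scan
    # computes the same prefixes in O(log n) depth with linear work.
    out = [elems[0]]
    for e in elems[1:]:
        out.append(combine(out[-1], e))
    return out

# Linear recurrence with jumps: h_k = a_k * h_{k-1} + b_k
a = np.array([0.9, 0.8, 0.95, 0.7])   # decay factors (illustrative)
b = np.array([0.1, 0.5, 0.0, 0.2])    # jump/input terms (illustrative)

prefix = prefix_scan(list(zip(a, b)))
h0 = 1.0
h = [pa * h0 + pb for pa, pb in prefix]  # hidden state after each step
```

Because each prefix is an affine map applied to the initial state, all hidden states can be recovered from the scan output in one pass, matching the direct sequential recurrence.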
Keywords
» Artificial intelligence » Attention » RNN