
Summary of Point Processes with Event Time Uncertainty, by Xiuyuan Cheng et al.


Point processes with event time uncertainty

by Xiuyuan Cheng, Tingnan Gong, Yao Xie

First submitted to arXiv on: 5 Nov 2024

Categories

  • Main: Machine Learning (stat.ML)
  • Secondary: Machine Learning (cs.LG); Statistics Theory (math.ST)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
This paper introduces a framework for modeling point processes whose event times are observed with uncertainty, possibly on a network. A continuous-time model, with assumptions driven by the application scenario, is converted into a discrete-time model that facilitates inference and computation via first-order optimization methods such as gradient descent or variational inequality (VI) methods with batch-based stochastic gradient descent (SGD). A parameter-recovery guarantee is proved for VI inference at an O(1/k) convergence rate after k SGD steps. The framework handles non-stationary processes by modeling the influence kernel as a matrix (or a tensor on a network), and it covers the classical Hawkes process as a special case. Experiments on simulated and real data show that the approach outperforms generalized linear model (GLM) baselines and reveals meaningful causal relations; a minimal illustrative code sketch of a discrete-time kernel model of this kind follows these summaries.

Low Difficulty Summary (original content by GrooveSquid.com)
A new way to analyze events happening over time is developed. This method can handle situations where we don't know exactly when each event occurred. It uses mathematical models to figure out the patterns in these events. The approach works by converting a continuous-time model into a discrete-time one that is easier to work with, which makes it possible to use optimization techniques such as gradient descent or variational inequality methods to find the best parameters for the model. The method is shown to be effective on both simulated and real-world data, revealing important relationships between different events.

Keywords

» Artificial intelligence  » Gradient descent  » Inference  » Optimization  » Stochastic gradient descent