

A Semantic Mention Graph Augmented Model for Document-Level Event Argument Extraction

by Jian Zhang, Changlin Yang, Haiping Zhu, Qika Lin, Fangzhi Xu, Jun Liu

First submitted to arXiv on: 12 Mar 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (the paper's original abstract)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
The proposed Graph Augmented Model (GAM) tackles two unresolved problems in document-level event argument extraction (DEAE): entity mentions are modeled independently of one another, and the document and the prompt are treated as isolated inputs. GAM constructs a semantic mention graph that captures relations within and between documents and prompts, using three relation types: co-existence, co-reference, and co-type. An ensembled graph transformer module models the mentions and their semantic relations, and a graph-augmented encoder-decoder module feeds the relation-specific graph into pre-trained language models (PLMs), optimizing the encoder with topology information to improve comprehension. Experiments on the RAMS and WikiEvents datasets show that GAM outperforms baseline methods and sets a new state of the art.
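The three relation types can be sketched as a toy graph-construction step. This is only an illustration of the idea, not the authors' implementation: the mentions, sentence indices, coreference clusters, and entity types below are invented for the example.

```python
# Toy construction of a semantic mention graph with three edge types:
# co-existence (same sentence), co-reference (same coreference cluster),
# and co-type (same entity type). All data here is illustrative.
from itertools import combinations

# Each mention: (mention_id, sentence_index, coref_cluster, entity_type)
mentions = [
    ("m0", 0, "cluster_A", "PER"),   # e.g. "the CEO"
    ("m1", 0, "cluster_B", "ORG"),   # e.g. "Acme Corp"
    ("m2", 1, "cluster_A", "PER"),   # e.g. "she"
    ("m3", 2, "cluster_C", "PER"),   # e.g. "an engineer"
]

def build_semantic_mention_graph(mentions):
    """Return a set of typed edges (id_a, id_b, relation) over mention pairs."""
    edges = set()
    for (a, sa, ca, ta), (b, sb, cb, tb) in combinations(mentions, 2):
        if sa == sb:          # co-existence: mentions share a sentence
            edges.add((a, b, "co-existence"))
        if ca == cb:          # co-reference: mentions share a coref cluster
            edges.add((a, b, "co-reference"))
        if ta == tb:          # co-type: mentions share an entity type
            edges.add((a, b, "co-type"))
    return edges

edges = build_semantic_mention_graph(mentions)
```

A pair of mentions can carry more than one relation at once (here, "m0" and "m2" are linked by both co-reference and co-type), which is why the edges are typed rather than a single adjacency.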
Low Difficulty Summary (original content by GrooveSquid.com)
GAM is a new way to extract event arguments from unstructured documents. It addresses two problems that previous approaches left open: mentions of the same entity (like people or places) were treated separately, and the document was kept apart from the text prompt that guides a pre-trained language model (PLM). GAM builds a graph that shows how these mentions relate to each other and to the prompt, which helps the PLM understand the context of each mention. Tested on two datasets, the model performs much better than previous methods.

Keywords

» Artificial intelligence  » Encoder  » Encoder decoder  » Prompt  » Transformer