Summary of MOGAM: A Multimodal Object-oriented Graph Attention Model for Depression Detection, by Junyeop Cha et al.
MOGAM: A Multimodal Object-oriented Graph Attention Model for Depression Detection
by Junyeop Cha, Seoyun Kim, Dongjae Kim, Eunil Park
First submitted to arXiv on: 21 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | This paper presents a novel approach to detecting depression on social media platforms. The authors aim to develop a scalable and versatile model that can handle different types of data, including text, images, and videos. They introduce the Multimodal Object-Oriented Graph Attention Model (MOGAM), which uses a cross-attention mechanism to aggregate multimodal features from vlogs (see the illustrative sketch after this table). MOGAM achieved an accuracy of 0.871 and an F1-score of 0.888 when tested on vlogs from users with a clinical diagnosis of depression. The authors also evaluated the model on a benchmark dataset, where it achieved results comparable to prior studies (0.61 F1-score). This work has potential benefits for the early detection and treatment of depression. |
| Low | GrooveSquid.com (original content) | This paper is about using social media to detect depression. Most current methods rely on specific features, which makes them hard to apply to different types of data. The authors created a new model called MOGAM that can handle different types of data and performs better than other methods. They tested it on vlogs from people who have been diagnosed with depression and got good results (87% accuracy). They also tried it on a benchmark dataset and did as well as other similar studies. |
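To make the cross-attention fusion mentioned in the medium-difficulty summary concrete, the snippet below is a minimal, hypothetical PyTorch sketch of how features from two modalities of a vlog (e.g., text and visual embeddings) could be aggregated with cross-attention. The class name, dimensions, pooling choice, and modality roles are illustrative assumptions, not the authors' actual MOGAM implementation or its object-oriented graph construction.

```python
# Illustrative sketch only: NOT the authors' MOGAM implementation.
# Dimensions, modality names, and the pooling strategy are assumptions.
import torch
import torch.nn as nn


class CrossModalFusion(nn.Module):
    """Fuse text features with visual features via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # Queries come from one modality; keys/values from the other.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats: torch.Tensor, visual_feats: torch.Tensor) -> torch.Tensor:
        # text_feats:   (batch, num_text_tokens, dim)
        # visual_feats: (batch, num_visual_tokens, dim)
        attended, _ = self.cross_attn(query=text_feats, key=visual_feats, value=visual_feats)
        # Residual connection + layer norm, then mean-pool over tokens
        fused = self.norm(text_feats + attended)
        return fused.mean(dim=1)  # (batch, dim) vlog-level representation


if __name__ == "__main__":
    model = CrossModalFusion()
    text = torch.randn(2, 10, 256)    # e.g., caption/transcript token embeddings
    visual = torch.randn(2, 32, 256)  # e.g., per-frame visual embeddings
    print(model(text, visual).shape)  # torch.Size([2, 256])
```

In this sketch the text tokens act as queries and the visual tokens as keys and values; the paper itself describes how MOGAM builds and attends over its object-oriented graph.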
Keywords
* Artificial intelligence
* Attention
* Cross attention
* F1 score