Summary of JMI at SemEval 2024 Task 3: Two-step Approach for Multimodal ECAC Using In-context Learning with GPT and Instruction-tuned Llama Models, by Arefa et al.
JMI at SemEval 2024 Task 3: Two-step approach for multimodal ECAC using in-context learning with GPT and instruction-tuned Llama models
by Arefa, Mohammed Abbas Ansari, Chandni Saxena, Tanvir Ahmad
First submitted to arXiv on: 5 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper presents a two-step framework for building an efficient multimodal emotion cause analysis in conversations (ECAC) system that captures emotions in human conversations by integrating text, audio, and video modalities. The approach either instruction-tunes Llama 2 models for emotion and cause prediction, or uses GPT-4V for conversation-level video description combined with in-context learning over annotated conversations using GPT-3.5 (rough sketches of both approaches follow the table). The proposed solutions achieve significant performance gains, and the authors’ approach ranked 4th in SemEval-2024 Task 3: “The Competition of Multimodal Emotion Cause Analysis in Conversations”. |
Low | GrooveSquid.com (original content) | This paper is about a new way to understand emotions in conversations by combining text, audio, and video. Building such a system is hard because each modality brings its own challenges. The authors propose two approaches: instruction-tuned Llama 2 models that predict emotions and their causes, or GPT-4V descriptions of conversation videos paired with in-context learning using GPT-3.5. Their approaches prove effective, ranking 4th in the competition. |
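The paper itself is summarized above without code, but a rough sketch may help make the two steps concrete. Below is a minimal, hypothetical illustration of the first approach: instruction-tuning a Llama 2 model with LoRA adapters via Hugging Face transformers and peft. The model name, prompt template, toy example, and hyperparameters are all illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch: instruction-tuning Llama 2 for emotion/cause prediction
# with LoRA (PEFT). Model name, prompt template, and hyperparameters are
# illustrative assumptions, not the authors' actual configuration.
import torch
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model (gated on the Hub)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Wrap the base model with low-rank adapters so only a small set of weights is trained.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy instruction example: predict the emotion of a target utterance and its cause.
examples = [{
    "text": (
        "### Instruction: Identify the emotion of the target utterance and the "
        "utterance that causes it.\n"
        "### Conversation:\n1. A: I lost my keys again.\n2. B: Oh no, that's awful!\n"
        "### Target: utterance 2\n"
        "### Answer: emotion=sadness; cause=utterance 1" + tokenizer.eos_token
    )
}]

def tokenize(example):
    return tokenizer(example["text"], truncation=True, max_length=512)

dataset = Dataset.from_list(examples).map(tokenize, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ecac-llama",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=dataset,
    # mlm=False makes the collator copy input_ids into labels for causal LM loss.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```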
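And here is a minimal sketch of the second approach: few-shot in-context learning with GPT-3.5 through the OpenAI chat API, where annotated conversations serve as demonstrations and a GPT-4V-style video description accompanies the query. The prompt wording and examples are assumptions for illustration; the authors' actual prompts and annotations are not reproduced here.

```python
# Hypothetical sketch: few-shot in-context learning for emotion-cause prediction
# with GPT-3.5. Prompt wording and the demonstration are assumptions; the
# authors' actual prompts and annotated examples are not reproduced here.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One annotated demonstration; the paper's setup would include several.
demonstration = (
    "Conversation:\n"
    "1. A: I finally got the job!\n"
    "2. B: That's fantastic news!\n"
    "Video description: B smiles broadly and claps.\n"  # would come from GPT-4V
    "Answer: utterance=2; emotion=joy; cause=utterance 1\n"
)

query = (
    "Conversation:\n"
    "1. A: They cancelled the whole project.\n"
    "2. B: After all those months of work?\n"
    "Video description: B frowns and shakes their head.\n"
    "Answer:"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "For each target utterance, output its emotion and the "
                    "utterance that causes it, following the demonstrated format."},
        {"role": "user", "content": demonstration + "\n" + query},
    ],
    temperature=0,
)
print(response.choices[0].message.content)
```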
Keywords
* Artificial intelligence * GPT * Instruction tuning * Llama