
Summary of The First Place Solution of WSDM Cup 2024: Leveraging Large Language Models for Conversational Multi-Doc QA, by Yiming Li and Zhao Zhang


The First Place Solution of WSDM Cup 2024: Leveraging Large Language Models for Conversational Multi-Doc QA

by Yiming Li, Zhao Zhang

First submitted to arXiv on: 28 Feb 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract; read it on the paper's arXiv page.

Medium Difficulty Summary (GrooveSquid.com original content)
The paper presents a conversational multi-document question answering approach that leverages Large Language Models (LLMs). The approach won first place in the "Conversational Multi-Doc QA" challenge at WSDM Cup 2024. To achieve this, the authors adapted LLMs to the task and developed a hybrid training strategy that makes use of in-domain unlabeled data. In addition, an advanced text embedding model filters out irrelevant documents (a sketch of such filtering appears after these summaries), and several ensemble approaches were designed and compared.

Low Difficulty Summary (GrooveSquid.com original content)
A new way of answering questions using computers has been developed. This method can look at many different documents and conversations to find the answer. It’s like having a super smart librarian who can help you find what you’re looking for! The researchers used something called Large Language Models (LLMs) to make their approach work. They also made the approach work even better by using extra data and special techniques to get rid of irrelevant information. This new method did really well in a competition, beating other approaches. Now, anyone can see the code they used to make it work.

Keywords

* Artificial intelligence  * Embedding  * Question answering