
Summary of Cross-Lingual Multi-Hop Knowledge Editing, by Aditi Khandelwal et al.


Cross-Lingual Multi-Hop Knowledge Editing

by Aditi Khandelwal, Harman Singh, Hengrui Gu, Tianlong Chen, Kaixiong Zhou

First submitted to arxiv on: 14 Jul 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary — written by the paper authors
The high difficulty version is the paper's original abstract. Read the original abstract here.

Medium Difficulty Summary — written by GrooveSquid.com (original content)
The paper proposes a cross-lingual multi-hop knowledge editing paradigm for measuring how well existing knowledge editing techniques perform in a cross-lingual setup, along with a parallel benchmark, CROLIN-MQUAKE, for evaluating them. The analysis reveals significant performance gaps between cross-lingual and English-centric settings. To close these gaps, the authors propose an improved system, CLEVER-CKE, built on a retrieve, verify, and generate framework for knowledge editing. The framework incorporates language-aware and hard-negative-based contrastive objectives to improve fact retrieval and verification. Evaluated across three LLMs, eight languages, and two datasets, the method achieves gains of up to 30% over prior approaches.

Low Difficulty Summary — written by GrooveSquid.com (original content)
Large language models need to adapt quickly to new information from anywhere in the world. Most research focuses on updating English-only models, but this paper examines how well different editing techniques work across many languages. The authors build a special benchmark to test these techniques and find that they perform noticeably worse in languages other than English. To fix this, they propose a new way of editing knowledge, CLEVER-CKE, which uses a retrieval, verification, and generation process to keep the model's facts up to date.
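The paper itself gives no implementation details here, but the hard-negative contrastive objective mentioned in the summary can be illustrated with an InfoNCE-style loss sketch. This is only a minimal toy example under our own assumptions — the function name, embedding sizes, and the choice of cosine similarity are illustrative, not the authors' actual method:

```python
import numpy as np

def contrastive_loss(query, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss sketch (illustrative, not CLEVER-CKE's
    actual code): pull the query embedding toward the positive (e.g. the
    edited fact) and push it away from hard negatives (e.g. lexically
    similar but factually wrong statements, possibly in other languages)."""
    def cos(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # similarity of the query to the positive and to each negative
    logits = np.array([cos(query, positive)] + [cos(query, n) for n in negatives])
    logits /= temperature
    # negative log of the softmax probability assigned to the positive
    exp = np.exp(logits - logits.max())
    return -np.log(exp[0] / exp.sum())

# toy 4-dimensional embeddings
rng = np.random.default_rng(0)
q = rng.normal(size=4)
pos = q + 0.1 * rng.normal(size=4)             # close to the query
negs = [rng.normal(size=4) for _ in range(3)]  # unrelated "hard" negatives
loss = contrastive_loss(q, pos, negs)          # a small positive number
```

Minimizing a loss of this shape drives the retriever and verifier to rank the correct (edited) fact above confusable alternatives, which is the role the contrastive objectives play in the summary above.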

Keywords

» Artificial intelligence