Summary of VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark, by Han Huang et al.
VLKEB: A Large Vision-Language Model Knowledge Editing Benchmark
by Han Huang, Haitian Zhong, Tao Yu, Qiang Liu, Shu Wu, Liang Wang, Tieniu Tan
First submitted to arXiv on: 12 Mar 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI); Computer Vision and Pattern Recognition (cs.CV)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The paper proposes a new benchmark for editing Large Vision-Language Models (LVLMs) that addresses the limitations of existing benchmarks: the limited quality of their synthesized evaluation images and the lack of any check on whether models apply edited knowledge in relevant content. To address these challenges, the authors construct a new benchmark, VLKEB, with a more comprehensive Portability metric. They also leverage a multi-modal knowledge graph to bind image data to knowledge entities, so that entity-related knowledge can be extracted and used as editing data (an illustrative sketch of such an edit record follows the table). The authors conduct experiments with different editing methods on five LVLMs and analyze the results to reveal their strengths and deficiencies. This research aims to provide insights for future studies in this area. |
Low | GrooveSquid.com (original content) | The paper is about improving how we edit Large Vision-Language Models (LVLMs). These models are special because they can understand both words and pictures. Right now, there is no good way to test how well these models work after someone edits them, and the authors argue that existing tests do not give accurate results. They created a new test called VLKEB that includes more metrics to evaluate how well a model has been edited, and they made a special tool that connects images with knowledge, which can be used to build editing data. The authors tried different editing methods on five models and found out what works best. |
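To make the editing data and the Portability evaluation described in the medium summary more concrete, here is a minimal Python sketch of what a knowledge-graph-backed edit record and a multi-hop Portability check could look like. The field names and the `model.generate` interface are hypothetical assumptions chosen for illustration, not the paper's actual code or data format.

```python
# Hypothetical sketch of a VLKEB-style edit record and Portability check.
# All names and fields here are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EditRecord:
    image_path: str    # image bound to a knowledge-graph entity
    edit_prompt: str   # question whose answer the edit should change
    edit_target: str   # new answer expected after editing
    # Multi-hop questions built by chaining linked triples in the
    # multi-modal knowledge graph; each hop: {"prompt": ..., "answer": ...}.
    portability_hops: List[Dict[str, str]] = field(default_factory=list)


def portability_score(model, record: EditRecord) -> float:
    """Fraction of linked multi-hop questions the edited model answers correctly,
    i.e. whether it can apply the edited knowledge rather than just repeat it."""
    if not record.portability_hops:
        return 0.0
    correct = 0
    for hop in record.portability_hops:
        # `model.generate` is assumed to take an image plus a text prompt
        # and return a text answer, as typical LVLM interfaces do.
        prediction = model.generate(image=record.image_path, prompt=hop["prompt"])
        correct += int(hop["answer"].lower() in prediction.lower())
    return correct / len(record.portability_hops)
```

In this sketch, the key point of Portability is that correctness is measured on knowledge linked to, but distinct from, the edited fact itself, rather than on a simple repetition of the `edit_prompt`/`edit_target` pair.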
Keywords
» Artificial intelligence » Knowledge graph » Multi-modal