Summary of AskChart: Universal Chart Understanding Through Textual Enhancement, by Xudong Yang et al.
AskChart: Universal Chart Understanding through Textual Enhancement
by Xudong Yang, Yifan Wu, Yizhang Zhu, Nan Tang, Yuyu Luo
First submitted to arXiv on: 26 Dec 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | The proposed AskChart model integrates both textual and visual cues from charts, using a Mixture of Experts (MoE) architecture to learn enhanced visual-textual representations. This approach handles multiple chart understanding tasks effectively while keeping the model smaller than existing models. The paper introduces ChartBank, a large-scale dataset with about 7.5M samples, to capture the synergy between the visual and textual modalities. A three-stage training strategy aligns the two modalities, learns robust visual-textual representations, and optimizes the MoE layer. Experiments across five datasets show significant gains on four chart understanding tasks, outperforming state-of-the-art models by 68.3% on Open-ended ChartQA and 49.2% on Chart-to-Text. |
Low | GrooveSquid.com (original content) | AskChart is a new way to understand charts better. It uses both the text and the pictures in charts to get information, which helps with tasks like answering questions about a chart or turning chart data into structured formats. Older methods only looked at the pictures and ignored the important text inside the charts, so they missed information that people rely on when reading charts. The AskChart model is smaller and faster than other models that do similar things. It also learns from a big dataset with over 7 million examples, which helps it handle many different types of chart questions. |
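The Mixture of Experts (MoE) layer mentioned in the summaries routes each input through a small subset of specialist sub-networks, which is how AskChart keeps its model size down. Below is a minimal illustrative sketch of top-k gated expert routing; it is not the paper's implementation, and the expert count, gating scheme, and linear experts are assumptions made for demonstration only:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Minimal top-k gated Mixture of Experts layer (illustrative sketch)."""

    def __init__(self, dim, num_experts=4, top_k=2, seed=0):
        rng = np.random.default_rng(seed)
        # Each "expert" is a single linear map here, for simplicity.
        self.experts = [rng.standard_normal((dim, dim)) / np.sqrt(dim)
                        for _ in range(num_experts)]
        # The gate (router) scores how relevant each expert is to an input.
        self.gate = rng.standard_normal((dim, num_experts)) / np.sqrt(dim)
        self.top_k = top_k

    def forward(self, x):
        # x: (batch, dim). Router scores pick experts per input row.
        scores = softmax(x @ self.gate)          # (batch, num_experts)
        out = np.zeros_like(x)
        for i, row in enumerate(x):
            top = np.argsort(scores[i])[-self.top_k:]  # top-k expert indices
            w = scores[i][top] / scores[i][top].sum()  # renormalized weights
            for weight, e in zip(w, top):
                # Only the selected experts run, keeping compute sparse.
                out[i] += weight * (row @ self.experts[e])
        return out

moe = MoELayer(dim=8)
y = moe.forward(np.ones((2, 8)))
print(y.shape)  # (2, 8)
```

Because only `top_k` of the experts run per input, total parameters can grow with the number of experts while per-input compute stays roughly constant.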
Keywords
- Artificial intelligence
- Mixture of experts