
Summary of Vector-ICL: In-context Learning with Continuous Vector Representations, by Yufan Zhuang et al.


Vector-ICL: In-context Learning with Continuous Vector Representations

by Yufan Zhuang, Chandan Singh, Liyuan Liu, Jingbo Shang, Jianfeng Gao

First submitted to arXiv on: 8 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The paper explores whether large language models (LLMs) can extend their in-context learning capabilities from textual data to continuous vectors from diverse domains, obtained from black-box pretrained encoders. By aligning input data with an LLM's embedding space through lightweight projectors, the authors find that LLMs can effectively process and learn from these projected vectors, a setting they term Vector-ICL. Pretraining the projectors with a general language-modeling objective is enough to enable Vector-ICL, while task-specific finetuning further improves performance. Across a range of tasks and modalities, including text reconstruction, numerical function regression, text classification, summarization, molecule captioning, time-series classification, graph classification, and fMRI decoding, Vector-ICL often surpasses both few-shot ICL and domain-specific model tuning.
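
As a concrete illustration of the projector idea described above, the sketch below shows one minimal way a continuous vector from a black-box encoder could be mapped into an LLM's embedding space and spliced into a prompt. The class and function names, the single linear layer, the tensor shapes, the placement of the vector at the end of the prompt, and the use of an inputs_embeds-style input are illustrative assumptions for this sketch, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class VectorProjector(nn.Module):
    """Lightweight projector from a frozen encoder's space to the LLM embedding space."""
    def __init__(self, enc_dim: int, llm_dim: int):
        super().__init__()
        # A single linear layer stands in for the "lightweight projector";
        # the summary does not fix a particular architecture.
        self.proj = nn.Linear(enc_dim, llm_dim)

    def forward(self, vec: torch.Tensor) -> torch.Tensor:
        return self.proj(vec)

def build_inputs_embeds(llm_embed: nn.Embedding,
                        projector: VectorProjector,
                        prompt_ids: torch.Tensor,
                        data_vec: torch.Tensor) -> torch.Tensor:
    """Splice one projected data vector after the prompt's token embeddings.

    prompt_ids: (seq_len,) token ids of the surrounding text prompt
    data_vec:   (enc_dim,) continuous vector from a black-box encoder
    Returns a (1, seq_len + 1, llm_dim) tensor to feed the LLM directly
    as embeddings instead of token ids.
    """
    tok_emb = llm_embed(prompt_ids)              # (seq_len, llm_dim)
    vec_emb = projector(data_vec).unsqueeze(0)   # (1, llm_dim)
    spliced = torch.cat([tok_emb, vec_emb], dim=0)
    return spliced.unsqueeze(0)
```

In this sketch only the projector's parameters would be trained, first with a general language-modeling objective and then optionally finetuned on the downstream task, while the encoder and the LLM stay frozen; the resulting tensor would be passed to a decoder LLM through an inputs_embeds-style argument rather than token ids.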

Low Difficulty Summary (written by GrooveSquid.com, original content)
Large language models can learn new things without needing a lot of data. This paper shows that these models can also understand numbers and other kinds of information from different areas, like science and medicine. The authors found that by changing the way the models look at this information, the models could learn even better. They tested this on many different tasks and showed that it worked well.

Keywords

» Artificial intelligence  » Classification  » Embedding space  » Few shot  » Pretraining  » Regression  » Summarization  » Text classification  » Time series