


Parameter-Efficient Instruction Tuning of Large Language Models For Extreme Financial Numeral Labelling

by Subhendu Khatuya, Rajdeep Mukherjee, Akash Ghosh, Manjunath Hegde, Koustuv Dasgupta, Niloy Ganguly, Saptarshi Ghosh, Pawan Goyal

First submitted to arXiv on: 3 May 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Computational Engineering, Finance, and Science (cs.CE); Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
This is the paper's original abstract; read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper proposes a novel approach to annotating financial documents by leveraging Large Language Models (LLMs) together with metric metadata. The goal is to automatically label numerals occurring in these documents with their corresponding XBRL tags, an extreme classification problem. The authors investigate the feasibility of solving this problem by instruction-tuning LLMs, using LoRA as a parameter-efficient fine-tuning technique (a rough illustration of this setup is sketched after the summaries). Experimental results on two recently released datasets show that the proposed model, FLAN-FinXC, achieves state-of-the-art performance, outperforming several strong baselines.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper explores how to automatically label financial documents using large language models. The goal is to help computers understand what the numbers in these documents mean. To do this, the authors use information about the numbers and their meanings to train a specialized AI model, which correctly identifies the labels for most of the numbers in the documents.
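To make the parameter-efficient setup more concrete, below is a minimal, hypothetical sketch of LoRA-based instruction tuning with the Hugging Face transformers and peft libraries. The FLAN-T5 checkpoint, prompt template, example XBRL tag, and LoRA hyperparameters are illustrative assumptions, not the exact FLAN-FinXC configuration described in the paper.

```python
# Hypothetical sketch of LoRA instruction tuning for XBRL numeral tagging.
# Model choice, prompt format, and hyperparameters are assumptions for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "google/flan-t5-base"  # assumed backbone; any FLAN-T5 size would work similarly
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model_name)

# LoRA: freeze the base model and train small low-rank adapter matrices instead.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # rank of the low-rank updates (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections in T5 blocks
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# Illustrative instruction prompt: the sentence with the target numeral marked,
# asking the model to generate the corresponding XBRL tag.
prompt = (
    "Instruction: Assign the correct XBRL tag to the highlighted numeral.\n"
    "Sentence: Revenue for the quarter was $ [NUM] 4.2 [/NUM] billion.\n"
    "Answer:"
)
target = "us-gaap:Revenues"  # hypothetical gold label

inputs = tokenizer(prompt, return_tensors="pt")
labels = tokenizer(target, return_tensors="pt").input_ids

# One illustrative training step; a real run would loop over a labelled dataset.
outputs = model(**inputs, labels=labels)
outputs.loss.backward()
```

Because only the low-rank adapter matrices receive gradients while the base model stays frozen, the number of trainable parameters is a small fraction of the full model, which is what makes this style of instruction tuning parameter-efficient.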

Keywords

» Artificial intelligence  » Classification  » Instruction tuning  » LoRA  » Parameter-efficient