Summary of A Korean Legal Judgment Prediction Dataset for Insurance Disputes, by Alice Saebom Kwak et al.
A Korean Legal Judgment Prediction Dataset for Insurance Disputes
by Alice Saebom Kwak, Cheonkam Jeong, Ji Weon Lim, Byeongcheol Min
First submitted to arXiv on: 26 Jan 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Machine Learning (cs.LG)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract here |
| Medium | GrooveSquid.com (original content) | The paper introduces the Korean Legal Judgment Prediction (LJP) dataset, a dataset for predicting the outcomes of insurance disputes. Successful LJP models could save insurance companies and their customers time and money by predicting how the dispute mediation process will end. To cope with the limited data available for a low-resource language like Korean, the study explores alternatives to conventional fine-tuning of Sentence Transformers. Results show that the proposed SetFit approach achieves performance comparable to benchmark models despite using significantly less training data (a minimal sketch of the approach follows the table). |
| Low | GrooveSquid.com (original content) | This paper helps predict how insurance disputes will end if they go to mediation. It creates a special dataset for this task in Korean, which is helpful because not much Korean legal data is available. The study finds that a new way of fine-tuning sentence transformers works well even with less data than usual, so good predictions can be made without large training sets. |
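As context for the SetFit approach mentioned in the medium summary: SetFit fine-tunes a Sentence Transformer contrastively on pairs built from a few labeled examples and then trains a lightweight classification head, which is why it needs so little training data. The sketch below is an illustration only, not the paper's actual pipeline: the insurance-dispute sentences, the label scheme, and the multilingual checkpoint are assumptions for demonstration, and it uses the `setfit` library's original `SetFitTrainer` interface (setfit < 1.0; newer releases expose an equivalent `Trainer`/`TrainingArguments` pair).

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot examples; the paper's insurance-dispute data is not reproduced here.
# Assumed label scheme: 1 = decision favors the policyholder, 0 = decision favors the insurer.
train_ds = Dataset.from_dict({
    "text": [
        "The insurer must compensate the policyholder for the hospitalization costs.",
        "The claim falls under a policy exclusion and is denied.",
    ],
    "label": [1, 0],
})

# Any Korean-capable Sentence Transformer could serve as the backbone; this multilingual
# checkpoint is an assumption, not necessarily the model used in the paper.
model = SetFitModel.from_pretrained(
    "sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2"
)

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    num_iterations=20,  # number of contrastive text pairs generated per labeled example
    num_epochs=1,
)
trainer.train()

# Predict labels for new dispute descriptions.
print(model.predict(["The policyholder is entitled to the insurance payout."]))
```

Because the contrastive step multiplies each labeled example into many positive and negative pairs, SetFit can reach competitive accuracy from only a handful of examples per class, which is what makes it attractive in the low-resource setting the paper describes.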
Keywords
* Artificial intelligence
* Fine-tuning