

LABOR-LLM: Language-Based Occupational Representations with Large Language Models

by Susan Athey, Herman Brunborg, Tianyu Du, Ayush Kanodia, Keyon Vafa

First submitted to arXiv on: 25 Jun 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Computation and Language (cs.CL); Econometrics (econ.EM)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
The high difficulty version is the paper's original abstract, available on arXiv.
Medium Difficulty Summary (GrooveSquid.com, original content)
The paper builds on CAREER, a transformer-based econometric model that predicts a worker's next job from their career history. The original CAREER model was pretrained on a large but unrepresentative resume dataset and then fine-tuned on data from a representative survey, which yielded better predictive performance than standard benchmarks. In this study, the authors explore an alternative approach in which the resume-based foundation model is replaced by a large language model (LLM): they convert the tabular survey data into text files resembling resumes and fine-tune the LLM to predict the next token (word). The fine-tuned LLM then serves as input to an occupation model, achieving better predictive performance than prior models. The authors also demonstrate the value of fine-tuning, showing that with additional career data from a different population, fine-tuned smaller LLMs can surpass fine-tuned larger ones.
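As a rough illustration of the convert-to-text-and-fine-tune idea described above, the sketch below (not the authors' code) turns hypothetical tabular career records into resume-like documents and fine-tunes a small causal language model with the Hugging Face transformers library. The field names, the prompt template, and the choice of "gpt2" as the base model are all assumptions made for illustration; the occupation-model step is simplified to plain next-token generation.

```python
# A minimal sketch, assuming hypothetical survey fields and a small base model ("gpt2").
# It mirrors the described pipeline: tabular career histories -> resume-like text ->
# next-token fine-tuning -> prompting for the next occupation.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Hypothetical tabular records: one row per worker-year.
records = [
    {"worker_id": 1, "year": 2018, "occupation": "Retail Salesperson"},
    {"worker_id": 1, "year": 2019, "occupation": "Customer Service Representative"},
    {"worker_id": 1, "year": 2020, "occupation": "Office Clerk"},
    {"worker_id": 2, "year": 2019, "occupation": "Line Cook"},
    {"worker_id": 2, "year": 2020, "occupation": "Restaurant Manager"},
]

def to_resume_text(rows):
    """Render one worker's job history as a short resume-like document."""
    lines = ["Work history:"]
    for r in sorted(rows, key=lambda r: r["year"]):
        lines.append(f"{r['year']}: {r['occupation']}")
    return "\n".join(lines)

# Group rows by worker and render each career as text.
by_worker = {}
for r in records:
    by_worker.setdefault(r["worker_id"], []).append(r)
texts = [to_resume_text(rows) for rows in by_worker.values()]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Tokenize the resume-like documents for next-token-prediction fine-tuning.
encodings = tokenizer(texts, truncation=True, max_length=128)

class ResumeDataset(torch.utils.data.Dataset):
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

# The collator pads each batch and sets up labels for causal LM training.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="labor-llm-sketch", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=ResumeDataset(encodings),
    data_collator=collator,
)
trainer.train()

# After fine-tuning, prompt with a partial history to generate a candidate next occupation.
prompt = "Work history:\n2019: Line Cook\n2020:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In the paper's setting the fine-tuned model is evaluated on representative survey data rather than toy records, but the sketch shows why converting tabular histories into text lets an off-the-shelf LLM be reused for occupation prediction.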
Low Difficulty Summary (GrooveSquid.com, original content)
The paper is about using artificial intelligence to predict what job someone will have next based on their work history. The authors started from an existing model called CAREER, which already worked better than other methods. They then tried a new approach that uses a large language model (LLM), a kind of computer program trained on lots of text, to make the prediction. This new method also did well, especially when it was combined with more information about different kinds of careers.

Keywords

* Artificial intelligence
* Fine tuning
* Large language model
* Token
* Transformer