Summary of Zero-Shot Spam Email Classification Using Pre-trained Large Language Models, by Sergio Rojas-Galeano
Zero-Shot Spam Email Classification Using Pre-trained Large Language Models
by Sergio Rojas-Galeano
First submitted to arXiv on: 24 May 2024
Categories
- Main: Computation and Language (cs.CL)
- Secondary: Artificial Intelligence (cs.AI)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
| Summary difficulty | Written by | Summary |
|---|---|---|
| High | Paper authors | Read the original abstract on arXiv. |
| Medium | GrooveSquid.com (original content) | The paper explores the use of pre-trained large language models (LLMs) for spam email classification via zero-shot prompting, with no task-specific training. It evaluates both open-source and proprietary LLMs on the SpamAssassin dataset using two input strategies: the truncated raw email content and summaries generated by ChatGPT (a rough code sketch of this prompting setup follows the table). Results are promising, with Flan-T5 achieving a 90% F1-score and GPT-4 reaching 95%. However, further validation on more diverse datasets is needed to confirm these findings, and the usage costs of proprietary models, along with the inference costs of LLMs in general, may hinder real-world deployment for spam filtering. |
| Low | GrooveSquid.com (original content) | The paper looks at how big language models can classify spam emails without any special training. It tests open-source and private models, such as Flan-T5 and ChatGPT, on a dataset called SpamAssassin. The results show that these models are quite good at recognizing spam. However, more testing on different datasets is needed to make sure the approach works broadly. One drawback is that the private models are expensive to use and that running such large models takes a lot of computing power, which might keep them out of real-life email filters. |
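
The zero-shot prompting setup described in the medium-difficulty summary can be sketched in a few lines of Python using the open-source Flan-T5 model from Hugging Face Transformers. The checkpoint name, prompt wording, and truncation length below are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of zero-shot spam/ham classification with Flan-T5.
# Assumptions: the "google/flan-t5-large" checkpoint and the prompt text
# are illustrative choices, not the paper's reported setup.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "google/flan-t5-large"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def classify_email(email_text: str, max_chars: int = 2000) -> str:
    """Ask the model directly whether a (truncated) email is spam or ham,
    with no task-specific fine-tuning."""
    prompt = (
        "Classify the following email as 'spam' or 'ham' (legitimate).\n\n"
        f"Email:\n{email_text[:max_chars]}\n\nAnswer:"
    )
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    outputs = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(outputs[0], skip_special_tokens=True).strip().lower()

# Toy example (not taken from the SpamAssassin dataset):
print(classify_email("Congratulations! You have won a $1,000 gift card. Click here to claim."))
```

In the paper's second approach, the truncated raw content would instead be replaced by a ChatGPT-generated summary of the email before prompting the classifier.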
Keywords
» Artificial intelligence » Classification » F1 score » GPT » Inference » Prompting » T5 » Zero-shot