


To Ensemble or Not: Assessing Majority Voting Strategies for Phishing Detection with Large Language Models

by Fouad Trad, Ali Chehab

First submitted to arxiv on: 29 Nov 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (original content by GrooveSquid.com)
The paper examines how different prompts affect Large Language Model (LLM) outputs and evaluates three majority voting strategies for text classification, specifically phishing URL detection: a prompt-based ensemble, which aggregates a single LLM's responses to multiple prompts; a model-based ensemble, which aggregates multiple LLMs' responses to a single prompt; and a hybrid ensemble that combines both approaches. The results show that ensemble strategies perform best when their individual components have similar performance levels, but may fail to help when one component is markedly stronger than the rest.

Low Difficulty Summary (original content by GrooveSquid.com)
The paper looks at whether Large Language Models (LLMs) work better together than alone. It tries three ways of combining them: asking the same LLM the question phrased in several different ways, asking several different LLMs the same question, and doing both at once, then taking the majority answer. The results show that when the individual models are similarly good at the job, working together can make them even better. But if one model is much better than the others, working together might not help.

Keywords

» Artificial intelligence  » Prompt  » Text classification