Summary of Challenging Gradient Boosted Decision Trees with Tabular Transformers for Fraud Detection at Booking.com, by Sergei Krutikov et al.


Challenging Gradient Boosted Decision Trees with Tabular Transformers for Fraud Detection at Booking.com

by Sergei Krutikov, Bulat Khaertdinov, Rodion Kiriukhin, Shubham Agrawal, Kees Jan De Vries

First submitted to arXiv on: 22 May 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: None


GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content by GrooveSquid.com)
This paper challenges Gradient Boosted Decision Trees (GBDT) with tabular Transformers in fraud detection, a typical task in e-commerce. The authors aim to address selection bias, where production systems affect which data becomes labeled. By leveraging Self-Supervised Learning (SSL), the study trains tabular Transformers on vast amounts of data and fine-tunes them on smaller target datasets. The proposed approach outperforms heavily tuned GBDTs by a considerable margin in Average Precision (AP) score. Pre-trained models show more consistent performance when fine-tuning data is limited, requiring less labeled data to achieve comparable performance to their GBDT competitor.
Low Difficulty Summary (original content by GrooveSquid.com)
This research paper compares two methods for detecting fraud: Gradient Boosted Decision Trees and tabular Transformers. The authors want to solve a problem where the system that detects fraud is biased toward certain types of data, because its own decisions influence which examples get labeled. They train the Transformers on a large amount of data and then fine-tune them for specific tasks. They found that the pre-trained Transformers performed better than GBDTs when there wasn't enough labeled data.
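The summaries above note that the models are compared by Average Precision (AP), a metric suited to heavily imbalanced fraud labels. As a minimal sketch of how such a comparison might be set up, here is a GBDT baseline evaluated with AP on synthetic imbalanced data using scikit-learn; the dataset, model settings, and split are illustrative assumptions, not details from the paper:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced "fraud" data: roughly 2% positives, loosely
# mimicking how rare labeled fraud cases are (illustrative, not the
# paper's dataset).
X, y = make_classification(
    n_samples=5000, n_features=20, weights=[0.98], random_state=0
)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, random_state=0
)

# GBDT baseline, standing in for the paper's heavily tuned GBDT competitor.
gbdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = gbdt.predict_proba(X_te)[:, 1]

# Average Precision summarizes the precision-recall curve; unlike accuracy,
# it is sensitive to ranking quality on the rare positive class.
ap = average_precision_score(y_te, scores)
print(f"AP = {ap:.3f}")
```

A Transformer-based model would be scored the same way on the same held-out set, so the two approaches can be compared on a single AP number.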

Keywords

» Artificial intelligence  » Fine tuning  » Precision  » Self supervised