
Summary of Scaled and Inter-token Relation Enhanced Transformer for Sample-restricted Residential NILM, by Minhajur Rahman et al.


Scaled and Inter-token Relation Enhanced Transformer for Sample-restricted Residential NILM

by Minhajur Rahman, Yasir Arafat

First submitted to arXiv on: 12 Oct 2024

Categories

  • Main: Computation and Language (cs.CL)
  • Secondary: Artificial Intelligence (cs.AI)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here

Medium Difficulty Summary (original content by GrooveSquid.com)
Transformers have shown impressive performance across multiple domains due to their self-attention mechanism, which captures complex relationships in data. However, training on smaller datasets poses challenges, as standard attention mechanisms can over-smooth attention scores and overly prioritize intra-token relationships, reducing the capture of meaningful inter-token dependencies critical for tasks like Non-Intrusive Load Monitoring (NILM). To address this, we propose a novel transformer architecture with two key innovations: inter-token relation enhancement and dynamic temperature tuning. The inter-token relation enhancement mechanism removes diagonal entries in the similarity matrix to improve attention focus on inter-token relations. The dynamic temperature tuning mechanism, a learnable parameter, adapts attention sharpness during training, preventing over-smoothing and enhancing sensitivity to token relationships. We validate our method on the REDD dataset and show that it outperforms the original transformer and state-of-the-art models by 10-15% in F1 score across various appliance types, demonstrating its efficacy for training on smaller datasets.
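
The sketch below is a minimal PyTorch illustration of the two mechanisms described in the medium summary: masking out the diagonal of the similarity matrix so attention concentrates on inter-token relations, and a learnable temperature that adapts attention sharpness during training. It is not the authors' implementation; the class name, the single-head setup, the temperature initialization, and the way the temperature enters the softmax are assumptions made for this example.

```python
# Minimal sketch (assumed details, not the paper's code) of diagonal masking
# plus a learnable softmax temperature in self-attention.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiagonalMaskedAttention(nn.Module):
    """Single-head self-attention with (1) diagonal masking of the similarity
    matrix to emphasize inter-token relations and (2) a learnable temperature
    that controls attention sharpness."""

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        # Learnable temperature, initialized to 1.0 (assumed initialization).
        self.log_temperature = nn.Parameter(torch.zeros(1))
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)

        # Scaled dot-product similarity matrix: (batch, seq_len, seq_len)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale

        # Inter-token relation enhancement: mask the diagonal (intra-token)
        # entries so each token attends only to the other tokens.
        seq_len = x.size(1)
        eye = torch.eye(seq_len, dtype=torch.bool, device=x.device)
        scores = scores.masked_fill(eye, float("-inf"))

        # Dynamic temperature tuning: divide by a learnable, positive
        # temperature before the softmax to adjust attention sharpness.
        temperature = self.log_temperature.exp()
        attn = F.softmax(scores / temperature, dim=-1)

        return torch.matmul(attn, v)


if __name__ == "__main__":
    x = torch.randn(2, 16, 64)            # (batch, window length, features)
    out = DiagonalMaskedAttention(64)(x)
    print(out.shape)                       # torch.Size([2, 16, 64])
```

In this sketch the temperature is stored in log space so it stays positive during training; the paper may parameterize it differently.
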
Low Difficulty Summary (original content by GrooveSquid.com)
This paper is about finding a way to make transformers work better when they’re trained on small amounts of data. Transformers are good at understanding relationships between things, but when there’s not much data, they can get confused and miss important connections. The researchers came up with two new ideas to help: one helps the transformer focus more on the relationships between different parts, and another makes sure the transformer doesn’t get too confused and miss important details. They tested their new approach on a dataset called REDD and found that it worked better than other methods.

Keywords

» Artificial intelligence  » Attention  » F1 score  » Self attention  » Temperature  » Token  » Transformer