Summary of Self-Explainable Graph Transformer for Link Sign Prediction, by Lu Li et al.
Self-Explainable Graph Transformer for Link Sign Prediction, by Lu Li, Jiale Liu, Xingyu Ji, Maojun Wang,…