Summary of LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference, by Qichen Fu et al.
LazyLLM: Dynamic Token Pruning for Efficient Long Context LLM Inference by Qichen Fu, Minsik Cho, Thomas…
MSCT: Addressing Time-Varying Confounding with Marginal Structural Causal Transformer for Counterfactual Post-Crash Traffic Prediction by Shuang…
X-Former: Unifying Contrastive and Reconstruction Learning for MLLMs by Sirnam Swetha, Jinyu Yang, Tal Neiman, Mamshad…
Mechanistically Interpreting a Transformer-based 2-SAT Solver: An Axiomatic Approach by Nils Palumbo, Ravi Mangal, Zifan Wang,…
PASTA: Controllable Part-Aware Shape Generation with Autoregressive Transformers by Songlin Li, Despoina Paschalidou, Leonidas Guibas. First submitted…
A light-weight and efficient punctuation and word casing prediction model for on-device streaming ASR by Jian…
HHGT: Hierarchical Heterogeneous Graph Transformer for Heterogeneous Graph Representation Learning by Qiuyu Zhu, Liang Zhang, Qianxiong…
SpaDiT: Diffusion Transformer for Spatial Gene Expression Prediction using scRNA-seq by Xiaoyu Li, Fangfang Zhu, Wenwen…
Evaluating Large Language Models for Anxiety and Depression Classification using Counseling and Psychotherapy Transcripts by Junwei…
Transformers with Stochastic Competition for Tabular Data Modelling by Andreas Voskou, Charalambos Christoforou, Sotirios Chatzis. First submitted…