Summary of "Towards Continual Knowledge Graph Embedding via Incremental Distillation", by Jiajun Liu et al.
Towards Continual Knowledge Graph Embedding via Incremental Distillation, by Jiajun Liu, Wenjun Ke, Peng Wang, Ziyu…