Summary of CARTE: Pretraining and Transfer for Tabular Learning, by Myung Jun Kim et al.
CARTE: Pretraining and Transfer for Tabular Learning
by Myung Jun Kim, Léo Grinsztajn, Gaël Varoquaux
First submitted…