Summary of SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots, by Weixing Wang et al.
SeCoKD: Aligning Large Language Models for In-Context Learning with Fewer Shots, by Weixing Wang, Haojin Yang,…