Summary of PLPP: Prompt Learning with Perplexity Is Self-Distillation for Vision-Language Models, by Biao Liu et al.
PLPP: Prompt Learning with Perplexity Is Self-Distillation for Vision-Language Models, by Biao Liu, Wenyi Fang, Xiaoyu…