Summary of VeLoRA: Memory Efficient Training Using Rank-1 Sub-Token Projections, by Roy Miles et al.
VeLoRA: Memory Efficient Training using Rank-1 Sub-Token Projections, by Roy Miles, Pradyumna Reddy, Ismail Elezi, Jiankang…
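The title names the core technique: rank-1 sub-token projections for memory-efficient training. Since the listing gives no further detail, here is a minimal NumPy sketch of the general idea the title describes — compressing each sub-token's activation to a single scalar along a fixed direction during the forward pass, then reconstructing a coarse rank-1 approximation for the backward pass. All names, shapes, and the choice of projection vector here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                          # sub-token feature dimension (assumed)
v = np.ones(d) / np.sqrt(d)    # fixed unit projection vector (an assumption;
                               # the paper derives its own projection direction)

def compress(x, v):
    """Project each sub-token activation onto v, keeping one scalar per sub-token."""
    return x @ v               # shape (num_subtokens,)

def reconstruct(c, v):
    """Coarse rank-1 reconstruction of the activations for the backward pass."""
    return np.outer(c, v)      # shape (num_subtokens, d)

x = rng.standard_normal((4, d))   # 4 sub-token activations
c = compress(x, v)                # stores 4 scalars instead of 4*d values
x_hat = reconstruct(c, v)         # approximate activations, rank 1
```

The memory saving comes from caching only `c` (one scalar per sub-token) instead of the full activation matrix `x`, at the cost of an approximate gradient computed from `x_hat`.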