Summary of MISS: A Generative Pretraining and Finetuning Approach for Med-VQA, by Jiawei Chen et al.
MISS: A Generative Pretraining and Finetuning Approach for Med-VQA, by Jiawei Chen, Dingkang Yang, Yue Jiang, …
Know Your Needs Better: Towards Structured Understanding of Marketer Demands with Analogical Reasoning Augmented LLMs, by …
CoT-Driven Framework for Short Text Classification: Enhancing and Transferring Capabilities from Large to Smaller Model, by …
The intrinsic motivation of reinforcement and imitation learning for sequential tasks, by Sao Mai Nguyen. First submitted …
On the Compositional Generalization of Multimodal LLMs for Medical Imaging, by Zhenyang Cai, Junying Chen, Rongsheng …
Toward Adaptive Reasoning in Large Language Models with Thought Rollback, by Sijia Chen, Baochun Li. First submitted …
Why Train Everything? Tint a Single Layer for Multi-task Model Merging, by Aecheon Jung, Seunghwan Lee, …
MTCAE-DFER: Multi-Task Cascaded Autoencoder for Dynamic Facial Expression Recognition, by Peihao Xiang, Kaida Wu, Chaohao Lin, …
Optimizing Large Language Models with an Enhanced LoRA Fine-Tuning Algorithm for Efficiency and Robustness in…
Collaborative Optimization in Financial Data Mining Through Deep Learning and ResNeXt, by Pengbin Feng, Yankaiqi Li, …