Summary of MoE-CT: A Novel Approach for Large Language Models Training with Resistance to Catastrophic Forgetting, by Tianhao Li et al.
MoE-CT: A Novel Approach for Large Language Models Training with Resistance to Catastrophic Forgetting, by Tianhao…
Large Language Models Are Involuntary Truth-Tellers: Exploiting Fallacy Failure for Jailbreak Attacks, by Yue Zhou, Henry…
MathCAMPS: Fine-grained Synthesis of Mathematical Problems from Human Curricula, by Shubhra Mishra, Gabriel Poesia, Belinda Mo,…
Visual Reasoning and Multi-Agent Approach in Multimodal Large Language Models (MLLMs): Solving TSP and mTSP…
Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs, by Sukmin Yun, Haokun Lin,…
Analyzing Quality, Bias, and Performance in Text-to-Image Generative Models, by Nila Masrourisaadat, Nazanin Sedaghatkish, Fatemeh Sarshartehrani,…
The Qiyas Benchmark: Measuring ChatGPT Mathematical and Language Understanding in Arabic, by Shahad Al-Khalifa, Hend Al-Khalifa. First…
Can GPT-4 Help Detect Quit Vaping Intentions? An Exploration of Automatic Data Annotation Approach, by Sai…
Evaluating Human Alignment and Model Faithfulness of LLM Rationale, by Mohsen Fayyaz, Fan Yin, Jiao Sun,…
SemUV: Deep Learning based semantic manipulation over UV texture map of virtual human heads, by Anirban…