Summary of FrameQuant: Flexible Low-Bit Quantization for Transformers, by Harshavardhan Adepu et al.
FrameQuant: Flexible Low-Bit Quantization for Transformers, by Harshavardhan Adepu, Zhanpeng Zeng, Li Zhang, Vikas Singh. First submitted…