Summary of Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition, by Xiao Wang et al.
Spatio-Temporal Side Tuning Pre-trained Foundation Models for Video-based Pedestrian Attribute Recognition
by Xiao Wang, Qian Zhu, Jiandong Jin, Jun Zhu, Futian Wang, Bo Jiang, Yaowei Wang, Yonghong Tian
First submitted to arXiv on: 27 Apr 2024
Categories
- Main: Computer Vision and Pattern Recognition (cs.CV)
- Secondary: Artificial Intelligence (cs.AI); Computation and Language (cs.CL)
GrooveSquid.com Paper Summaries
GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!
Summary difficulty | Written by | Summary |
---|---|---|
High | Paper authors | Read the original abstract here |
Medium | GrooveSquid.com (original content) | A novel video-based pedestrian attribute recognition (PAR) framework is proposed to overcome the limitations of existing PAR algorithms, which are mainly developed on static images and struggle with challenging scenarios like heavy occlusion and motion blur. The framework efficiently fine-tunes a pre-trained multi-modal foundation model, leveraging temporal information from video frames. Specifically, it formulates video-based PAR as a vision-language fusion problem, extracts visual features using a pre-trained CLIP model, and proposes a novel spatiotemporal side-tuning strategy for parameter-efficient optimization. The framework also uses text encoding to process attribute descriptions and fuses them with visual tokens through a Transformer. Extensive experiments on two large-scale datasets validate the effectiveness of the proposed framework. |
Low | GrooveSquid.com (original content) | A new way to recognize people’s attributes, like height or clothes, is developed by using videos instead of just looking at one picture. This helps when there are lots of distractions in the video, like things moving in the background. The method uses a special kind of AI model that can understand both pictures and words. It takes the words that describe what we want to recognize (like “tall” or “wearing sunglasses”) and combines them with the picture information. This helps the AI make more accurate predictions. The team tested this new way on two big collections of videos and it worked really well. |
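To make the side-tuning idea concrete, here is a minimal NumPy sketch of the general pattern the summary describes: a frozen backbone (standing in for CLIP's vision encoder) paired with a small trainable side pathway that taps each backbone layer's output and pools over video frames. All class and function names here are illustrative assumptions, not the paper's actual code, and the blocks are drastically simplified (no attention, no text branch).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class FrozenBlock:
    """Stand-in for one frozen backbone layer (weights never updated)."""
    def __init__(self, dim):
        self.w = rng.standard_normal((dim, dim)) / np.sqrt(dim)

    def __call__(self, x):
        return x + relu(x @ self.w)  # simplified residual MLP

class SideBlock:
    """Lightweight bottleneck block: the only parameters that would be tuned."""
    def __init__(self, dim, side_dim):
        self.down = rng.standard_normal((dim, side_dim)) / np.sqrt(dim)
        self.up = rng.standard_normal((side_dim, dim)) / np.sqrt(side_dim)

    def __call__(self, side_state, backbone_feat):
        # fuse the frozen backbone's feature into the trainable side pathway
        return side_state + relu((side_state + backbone_feat) @ self.down) @ self.up

def side_tuned_forward(frames, blocks, side_blocks):
    """frames: (T, N, D) = (video frames, tokens per frame, feature dim)."""
    x = frames
    side = np.zeros_like(frames)
    for blk, sblk in zip(blocks, side_blocks):
        x = blk(x)            # frozen spatial pathway
        side = sblk(side, x)  # trainable side pathway
    # temporal + spatial mean-pool into one video-level representation
    return side.mean(axis=(0, 1))

dim, side_dim, depth = 16, 4, 3
blocks = [FrozenBlock(dim) for _ in range(depth)]
side_blocks = [SideBlock(dim, side_dim) for _ in range(depth)]
video = rng.standard_normal((8, 5, dim))  # 8 frames, 5 tokens each
feat = side_tuned_forward(video, blocks, side_blocks)
print(feat.shape)  # (16,)
```

The point of the design is parameter efficiency: only the small `SideBlock` weights would receive gradients, so the side pathway holds far fewer parameters than the frozen backbone while still injecting video-level (spatio-temporal) information.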
Keywords
» Artificial intelligence » Multi modal » Optimization » Parameter efficient » Spatiotemporal » Transformer