
Summary of LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living, by Dominick Reilly et al.


LLAVIDAL: A Large LAnguage VIsion Model for Daily Activities of Living

by Dominick Reilly, Rajatsubhra Chakraborty, Arkaprava Sinha, Manish Kumar Govind, Pu Wang, Francois Bremond, Le Xue, Srijan Das

First submitted to arXiv on: 13 Jun 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here
Medium Difficulty Summary (original content written by GrooveSquid.com)
This paper addresses the limitations of current Large Language Vision Models (LLVMs) in understanding Activities of Daily Living (ADL). Although they perform well on general video understanding, LLVMs struggle with fine-grained details, complex human-object interactions (HOI), and view-invariant representation learning. The authors propose a semi-automated framework for curating ADL datasets and introduce ADL-X, a multiview, multimodal RGBS instruction-tuning dataset. They also introduce LLAVIDAL, an LLVM that integrates videos, 3D skeletons, and HOIs to model the complex spatiotemporal relationships of ADL. To train it, they develop a Multimodal Progressive (MMPro) strategy that incorporates the modalities in stages following a curriculum. They also establish ADL MCQ and video-description benchmarks to evaluate LLVM performance. Trained on ADL-X, LLAVIDAL achieves state-of-the-art results across these ADL benchmarks.
Low Difficulty Summary (original content written by GrooveSquid.com)
This paper helps computers better understand daily activities like cooking or doing laundry. Right now, computers are good at understanding general videos, but they struggle with small details, complex actions between people and objects, and remembering what’s happening in a video regardless of the camera angle. To improve this, the authors create a special dataset called ADL-X that includes many different views and modalities to help computers learn. They also develop a new computer model called LLAVIDAL that combines videos with 3D skeletons and actions between people and objects. The authors test their model on different benchmarks and show that it performs better than other models.
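The Multimodal Progressive (MMPro) idea mentioned in the summaries above, where modalities are introduced in stages following a curriculum, can be sketched in rough terms as follows. This is a minimal illustration only: the stage order, epoch counts, and function names are assumptions for the sketch, not the paper's actual implementation.

```python
# Hypothetical sketch of a staged-modality training curriculum.
# The paper's LLAVIDAL integrates video, 3D skeletons, and human-object
# interactions (HOI); this sketch only shows the scheduling idea of
# enabling those modalities one stage at a time.

CURRICULUM = [
    ["video"],                      # stage 1: video features only
    ["video", "skeleton"],          # stage 2: add 3D skeleton cues
    ["video", "skeleton", "hoi"],   # stage 3: add HOI cues
]

def active_modalities(epoch, epochs_per_stage=10):
    """Return the list of modalities enabled at a given training epoch."""
    stage = min(epoch // epochs_per_stage, len(CURRICULUM) - 1)
    return CURRICULUM[stage]

def training_schedule(total_epochs, epochs_per_stage=10):
    """List (epoch, modalities) pairs for a whole training run."""
    return [(e, active_modalities(e, epochs_per_stage)) for e in range(total_epochs)]
```

For example, with the default 10 epochs per stage, epoch 0 trains on video alone, epoch 10 adds skeletons, and epoch 20 onward uses all three modalities.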

Keywords

» Artificial intelligence  » Instruction tuning  » Representation learning  » Spatiotemporal