

Scaling Wearable Foundation Models

by Girish Narayanswamy, Xin Liu, Kumar Ayush, Yuzhe Yang, Xuhai Xu, Shun Liao, Jake Garrison, Shyam Tailor, Jake Sunshine, Yun Liu, Tim Althoff, Shrikanth Narayanan, Pushmeet Kohli, Jiening Zhan, Mark Malhotra, Shwetak Patel, Samy Abdel-Ghaffar, Daniel McDuff

First submitted to arxiv on: 17 Oct 2024

Categories

  • Main: Machine Learning (cs.LG)
  • Secondary: Artificial Intelligence (cs.AI); Human-Computer Interaction (cs.HC)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (GrooveSquid.com, original content)
The paper explores the potential of generative modeling in processing large volumes of data from wearable sensors. It presents a multimodal foundation model called LSM, built on a massive dataset of over 40 million hours of health-related sensor readings from more than 165,000 people. The model is designed to scale across compute, data, and model size, allowing it to perform tasks such as imputation, interpolation, and extrapolation with high accuracy. Additionally, the paper demonstrates how LSM enables efficient learning for downstream applications like exercise and activity recognition.
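The summary above names imputation, interpolation, and extrapolation as the tasks LSM performs on sensor streams. These can all be viewed as reconstructing masked portions of a time series, differing only in which timesteps are hidden. The toy sketch below illustrates that framing on a synthetic one-channel signal; the signal shape, masking ratios, and the naive last-value baseline are illustrative assumptions, not the paper's actual data, model, or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy signal standing in for one wearable channel
# (e.g. heart rate), sampled once per minute for an hour.
T = 60
signal = 70 + 5 * np.sin(np.linspace(0, 4 * np.pi, T))

def random_mask(T, frac, rng):
    """Imputation: hide randomly scattered timesteps."""
    mask = np.zeros(T, dtype=bool)
    mask[rng.choice(T, size=int(frac * T), replace=False)] = True
    return mask

def span_mask(T, start, length):
    """Interpolation: hide one contiguous interior span."""
    mask = np.zeros(T, dtype=bool)
    mask[start:start + length] = True
    return mask

def future_mask(T, horizon):
    """Extrapolation: hide the final `horizon` timesteps."""
    mask = np.zeros(T, dtype=bool)
    mask[T - horizon:] = True
    return mask

def last_value_fill(signal, mask):
    """Naive baseline: carry the last visible value forward.
    A learned model would instead predict the masked entries."""
    filled = signal.copy()
    first_visible = signal[~mask][0]
    for t in range(len(signal)):
        if mask[t]:
            filled[t] = filled[t - 1] if t > 0 else first_visible
    return filled

for name, mask in [("imputation", random_mask(T, 0.3, rng)),
                   ("interpolation", span_mask(T, 20, 10)),
                   ("extrapolation", future_mask(T, 10))]:
    recon = last_value_fill(signal, mask)
    mae = np.abs(recon[mask] - signal[mask]).mean()
    print(f"{name}: {mask.sum()} masked steps, baseline MAE = {mae:.2f}")
```

A model trained under these masking patterns learns to reconstruct hidden readings, which is what lets it fill gaps in recorded data and forecast future signal values.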
Low Difficulty Summary (GrooveSquid.com, original content)
Wearable devices collect lots of health data, but making sense of this information is tricky. Inspired by AI models that learn from huge amounts of text or images, researchers created a new model called LSM to handle this kind of data. They trained it on an enormous dataset with readings from more than 165,000 people. The results show that LSM can fill in missing data and predict future readings, which could help recognize when someone is exercising or doing other physical activities.

Keywords

  • Artificial intelligence
  • Activity recognition