
Summary of A Visual-inertial Localization Algorithm using Opportunistic Visual Beacons and Dead-Reckoning for GNSS-Denied Large-scale Applications, by Liqiang Zhang et al.


A Visual-inertial Localization Algorithm using Opportunistic Visual Beacons and Dead-Reckoning for GNSS-Denied Large-scale Applications

by Liqiang Zhang, Ye Tian, Dongyan Wei

First submitted to arXiv on: 29 Nov 2024

Categories

  • Main: Computer Vision and Pattern Recognition (cs.CV)
  • Secondary: Machine Learning (cs.LG); Signal Processing (eess.SP)



GrooveSquid.com Paper Summaries

GrooveSquid.com’s goal is to make artificial intelligence research accessible by summarizing AI papers in simpler terms. Each summary below covers the same AI paper, written at different levels of difficulty. The medium difficulty and low difficulty versions are original summaries written by GrooveSquid.com, while the high difficulty version is the paper’s original abstract. Feel free to learn from the version that suits you best!

High Difficulty Summary (written by the paper authors)
Read the original abstract here.

Medium Difficulty Summary (written by GrooveSquid.com, original content)
The proposed low-cost visual-inertial positioning solution for smart cities combines a lightweight multi-scale group convolution (MSGC)-based visual place recognition (VPR) neural network, a pedestrian dead reckoning (PDR) algorithm, and a visual/inertial fusion approach based on a Kalman filter with gross error suppression. The VPR results serve as conditional observations that correct the errors accumulated by PDR, ensuring reliable long-term positioning in GNSS-denied areas. Experimental results demonstrate stable positioning during large-scale movements: the MSGC-based VPR network outperforms a MobileNetV3-based VPR baseline while using 63.37% fewer parameters, and the combined VPR-PDR algorithm improves localization accuracy by more than 40% compared to PDR alone. A minimal sketch of this fusion loop appears after the summaries below.
Low Difficulty Summary (written by GrooveSquid.com, original content)
The paper proposes a new way to help people navigate in cities using their phones and cameras. It’s called visual-inertial positioning, which means using both what you see (visual) and how you move (inertial) to figure out where you are. This is useful because GPS signals can be blocked by tall buildings or trees in cities. The new method uses a special type of neural network that looks at patterns in the things people see, and then combines this with how much they have moved. It works well even when there’s no GPS signal.
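
The medium difficulty summary above describes a three-part pipeline: PDR propagates the position step by step, a VPR network occasionally recognizes a known place (an opportunistic visual beacon), and a Kalman filter fuses the two while suppressing gross errors. The sketch below is a minimal illustration of that loop under simplifying assumptions, not the authors' implementation: it uses a 2D position state, an identity observation model, and a chi-square gate standing in for gross error suppression; the class name, noise values, and threshold are hypothetical.

# Hypothetical sketch (not the authors' code): PDR prediction corrected by
# occasional VPR position fixes through a Kalman filter, with a chi-square
# gate standing in for gross error suppression.
import numpy as np

class PdrVprFuser:
    def __init__(self, x0, step_noise=0.05, vpr_noise=2.0, gate=9.21):
        self.x = np.asarray(x0, dtype=float)   # state: planar position [x, y] in metres
        self.P = np.eye(2) * 0.1               # state covariance
        self.Q = np.eye(2) * step_noise ** 2   # process noise added per PDR step
        self.R = np.eye(2) * vpr_noise ** 2    # VPR fix measurement noise
        self.gate = gate                       # chi-square threshold (2 dof, ~99%)

    def predict(self, step_length, heading):
        # PDR prediction: advance the position by one detected step.
        self.x = self.x + step_length * np.array([np.cos(heading), np.sin(heading)])
        self.P = self.P + self.Q               # uncertainty grows with every step

    def update(self, vpr_position):
        # Conditional VPR observation: apply the fix only if it passes the gate.
        y = np.asarray(vpr_position, dtype=float) - self.x   # innovation (H = I)
        S = self.P + self.R                                   # innovation covariance
        if y @ np.linalg.solve(S, y) > self.gate:             # gross error: reject fix
            return False
        K = self.P @ np.linalg.inv(S)                         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K) @ self.P
        return True

# Example: ten steps heading roughly east, then a VPR fix near (7.0, 0.5).
fuser = PdrVprFuser(x0=[0.0, 0.0])
for _ in range(10):
    fuser.predict(step_length=0.7, heading=0.05)
print(fuser.update([7.0, 0.5]), fuser.x.round(2))

The gate is what makes the observation "conditional": a VPR match that disagrees wildly with the dead-reckoned track is treated as a recognition error and discarded rather than being allowed to corrupt the filter.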

Keywords

» Artificial intelligence  » Neural network