iVR: Integrated Vision and Radio Localization with Zero Human Effort (PowerPoint Presentation)



SLIDE 1

iVR

Integrated Vision and Radio Localization with Zero Human Effort

Jingao Xu*, Hengjie Chen*, Kun Qian†, Erqun Dong*, Min Sun*, Chenshu Wu‡, Zheng Yang*

*School of Software and BNRist, Tsinghua University

† University of California, San Diego ‡ University of Maryland, College Park

September 12, London, UK

SLIDE 2

Motivation

  • Various location-based ubiquitous applications.
  • Locating or tracking with Wi-Fi & IMU

– Ubiquitous: infrastructure is installed almost everywhere.
– Low-cost: off-the-shelf Wi-Fi devices and Inertial Measurement Units (IMUs).
– Non-invasive: users are not required to wear or carry any special devices.
– However, these methods suffer from both large location errors and considerable deployment costs.


Applications: indoor location, navigation, PoI discovery

SLIDE 3

Motivation

  • New opportunity: fusing Vision and Radio

– Surveillance cameras are pervasively deployed in public areas.
– High-accuracy localization and tracking.
– Low start-up effort.


Related work: PHADE (UbiComp'18), TAR (MobiSys'18), EV-Loc (TMC'15)

SLIDE 4

Motivation

  • Simply fusing vision and radio does not guarantee high accuracy and zero human effort.

– Absence of absolute location
– Incorrespondence of identification
– Looseness of sensor fusion


SLIDE 5

Motivation

  • Simply fusing vision and radio does not guarantee high accuracy and zero human effort.


Three obstacles: absence of absolute location, incorrespondence of identification, and looseness of sensor fusion.

iVR = automatic map construction + tightly coupled sensor fusion method
SLIDE 6

[Figure: system overview. Initialization phase: simultaneous images at timestamps t1 and t2 drive pedestrian detection and automatic map construction, yielding the indoor map and the image-map projection matrix. Localization phase: video frames, wireless signals (RSS samples), and IMU readings (accelerometer and gyroscope data) feed pedestrian detection with image-map projection, wireless indoor localization, and pedestrian dead reckoning; an augmented particle filter fuses them to localize and track pedestrians, outputting their trajectories.]

System Overview

SLIDE 7

Automatic Map Construction

  • Overview

– Input: images captured from a couple of ambient surveillance cameras
– Output: indoor map (floorplan) and projection matrix
– Key algorithm: binocular stereo vision + SfM calibration


[Figure: map-construction pipeline. SfM calibration: feature point extraction → feature point correspondence → relative pose calculation → equivalent image acquisition (turning images 1 and 2 from unparallel cameras into an equivalent pair). Binocular stereo vision: 2D indoor map construction and projection matrix calculation.]

SLIDE 8

Automatic Map Construction

  • SfM Calibration


Relative pose calculation by SfM; equivalent virtual image generation
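The relative-pose step can be sketched with the standard two-view geometry relation (the general textbook formulation; the exact variant used on this slide is not preserved in the transcript):

```latex
% Matched feature points x_1, x_2 (normalized image coordinates)
% in the two cameras satisfy the epipolar constraint
x_2^\top E \, x_1 = 0, \qquad E = [t]_\times R ,
% where R and t are the relative rotation and translation between
% the cameras. Decomposing E (e.g., via SVD) recovers (R, t), from
% which the equivalent (rectified) virtual images are generated.
```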

SLIDE 9

Automatic Map Construction

  • Binocular Stereo Vision

– Location: triangulated from the equivalent (rectified) stereo image pair
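The location computation can be sketched with the standard rectified binocular stereo equations (a textbook form; the slide's original formula is not preserved in this transcript):

```latex
% For a rectified (equivalent) image pair with baseline B and focal
% length f, a point seen at columns u_L and u_R has disparity
d = u_L - u_R ,
% and its 3D position follows from similar triangles,
% with (c_x, c_y) the principal point and v the image row:
Z = \frac{f B}{d}, \qquad X = \frac{Z\,(u_L - c_x)}{f}, \qquad Y = \frac{Z\,(v - c_y)}{f}
```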


SLIDE 10

Automatic Map Construction

  • Map Construction and Projection Matrix Calculation

– Projection matrix (T)
– Map construction

  • Outlining clusters of projections of feature points using the Indoor Geometric Reasoning (IGR) algorithm


Projection between locations in world coordinates and in image coordinates
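As an illustration of what the projection matrix does, here is a minimal sketch, assuming a planar ground-plane homography that maps image pixels to map coordinates; the matrix values and the function name `project` are hypothetical, not from the paper:

```python
# Hypothetical sketch: applying a 3x3 image-to-map projection matrix T
# (a planar homography) to a pedestrian's detected pixel position.

def project(T, u, v):
    """Map an image point (u, v) to map coordinates via homography T."""
    x = T[0][0] * u + T[0][1] * v + T[0][2]
    y = T[1][0] * u + T[1][1] * v + T[1][2]
    w = T[2][0] * u + T[2][1] * v + T[2][2]
    return x / w, y / w  # homogeneous normalization

# Illustrative matrix: scales pixels to metres and shifts the origin.
T = [[0.01, 0.0, -1.0],
     [0.0, 0.01, -2.0],
     [0.0, 0.0, 1.0]]

print(project(T, 300, 400))  # approximately (2.0, 2.0)
```

In the real system T is computed during the initialization phase from the SfM-calibrated camera poses rather than hand-written.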

SLIDE 11

Tightly-coupled Multimodal Fusion

  • Augmented Particle Filter

– Input

  • Detection with Vision
  • Localization with Wireless signal
  • Pedestrian dead-reckoning with IMU

– Output

  • Fine-grained localization and tracking results for each pedestrian.


Vision modal input · Wireless modal input · IMU modal input · System output
SLIDE 12

Tightly-coupled Multimodal Fusion

  • Augmented Particle Filter


– Particle movement indicated by IMU
– Particle weight assignment by wireless
– Particle weight assignment by vision
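The three steps on this slide can be sketched as one iteration of a particle filter; all names, noise levels, and positions below are illustrative assumptions, not values from the paper:

```python
# Hypothetical sketch of one augmented-particle-filter iteration:
# the IMU displacement moves the particles, then wireless and vision
# observations reweight them before resampling.
import math
import random

random.seed(0)  # deterministic for the example

def likelihood(p, obs, sigma):
    """Gaussian likelihood of particle p given an observation obs."""
    d2 = (p[0] - obs[0]) ** 2 + (p[1] - obs[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def step(particles, imu_dx, imu_dy, wifi_pos, cam_pos):
    # 1. Particle movement indicated by IMU (plus motion noise).
    moved = [(x + imu_dx + random.gauss(0, 0.1),
              y + imu_dy + random.gauss(0, 0.1)) for x, y in particles]
    # 2. Weight assignment by wireless (coarse, sigma = 2 m), combined with
    # 3. weight assignment by vision (accurate, sigma = 0.5 m).
    weights = [likelihood(p, wifi_pos, 2.0) * likelihood(p, cam_pos, 0.5)
               for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample according to the combined weights.
    resampled = random.choices(moved, weights=weights, k=len(moved))
    est_x = sum(x for x, _ in resampled) / len(resampled)
    est_y = sum(y for _, y in resampled) / len(resampled)
    return resampled, (est_x, est_y)

# Particles spread over a 10 m x 10 m area; the pedestrian is near (5, 5).
particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
particles, est = step(particles, 0.5, 0.5, wifi_pos=(4.0, 6.0), cam_pos=(5.0, 5.0))
print(est)  # close to (5, 5): vision dominates, wireless coarsely agrees
```

The "tightly coupled" aspect is that both radio and vision weights act on the same particle set in a single update, rather than fusing two separately computed location estimates.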

SLIDE 13

Experiment

  • Experimental Scenarios

– Dataset & ground truth: https://github.com/xujingao13/iVR


SLIDE 14

Experiment

  • Performance of Automatic Map Construction

– iVR also displays obstacles in the environment, which improves the rationality of localization results.


Automatically constructed map by iVR vs. floorplan provided by the administrator

SLIDE 15

Experiment

  • Overall performance


Localization accuracy compared with state-of-the-art methods:
  • Average: 0.65 m
  • 95th percentile: 1.23 m

Tracking accuracy compared with state-of-the-art methods:
  • Average: 0.75 m
  • 95th percentile: 1.86 m
SLIDE 16

Experiment

  • Performance under different conditions


– Different environments
– Different frame rates
– Different pedestrians
– Different device placements

SLIDE 17

Demo Video


SLIDE 18

Contribution

  • We design an automatic indoor semantic map construction method

based on merely a couple of ambient stationary cameras.

  • We propose a novel augmented particle filter algorithm that tightly

couples measurements from multiple orthogonal systems, including vision, radio, and IMU, and jointly estimates each target’s location with enhanced accuracy and an individual label.

  • We prototype iVR and conduct extensive experiments in 5 scenarios.

The result shows that iVR outperforms existing state-of-the-art systems by 70%.

– The dataset (contains > 60k video frames and labeled ground truth) can be found at https://github.com/xujingao13/iVR


SLIDE 19


Jingao Xu Tsinghua University

xujingao13@gmail.com

SLIDE 20

Motivation

  • Major Problems

– Labor-intensive site surveys for Wi-Fi localization methods
– Drift error in IMU-based tracking algorithms (PDR)
– Rationality of localization results


Site survey · Drift error · Location rationality

SLIDE 21

Conclusion

  • Automatic indoor map construction

– Only needs a couple of cameras
– Presents semantic information
– To the best of our knowledge, this is the first work that constructs a physical map using stationary surveillance cameras with unparalleled optical axes.

  • Tightly coupled sensor fusion method

– Multimodal localization and tracking
– Tightly coupled multimodal fusion
– Augmented particle filter
