Video Fields: Fusing Multiple Surveillance Videos into a Dynamic Virtual Environment

Ruofei Du, Sujal Bista, Amitabh Varshney
The Augmentarium | UMIACS | University of Maryland, College Park
{ruofei, sujal, varshney} @ cs.umd.edu


SLIDE 1
SLIDE 2

Video Fields: Fusing Multiple Surveillance Videos into a Dynamic Virtual Environment

Ruofei Du, Sujal Bista, Amitabh Varshney

The Augmentarium | UMIACS | University of Maryland, College Park
{ruofei, sujal, varshney} @ cs.umd.edu
www.VideoFields.com

SLIDE 3

image courtesy: University of Maryland, College Park

Introduction

Surveillance Videos - Monitoring

SLIDE 4

image courtesy: www.icsc.org

Introduction

Surveillance Videos – Shopping Centers

SLIDE 5

image courtesy: Wikipedia

Introduction

Surveillance Videos - Airports

SLIDE 6

image courtesy: Wikipedia

Introduction

Surveillance Videos – Train stations

SLIDE 7

image courtesy: University of Maryland, College Park

Introduction

Surveillance Videos - Campuses

SLIDE 8

image courtesy: University of Maryland, College Park

Introduction

Surveillance Videos - Conventional

SLIDE 9

image courtesy: theimaginativeconservative.org

Introduction

Surveillance Videos – Cognitive Burden

SLIDE 10

image courtesy: University of Maryland, College Park

Introduction

Surveillance Videos – Fusing & Interpreting

SLIDE 11

Related Work

Fusing Multiple Static Photographs

SLIDE 12

Related Work

Fusing Multiple Static Photographs

SLIDE 13

Related Work

Fusing Multiple Static Photographs

SLIDE 14

Related Work

Fusing Multiple Static Photographs

SLIDE 15

Related Work

Fusing Multiple Static Photographs

SLIDE 16
SLIDE 17

Related Work

Fusing Multiple Dynamic Videos

SLIDE 18

Related Work

Fusing Multiple Dynamic Videos

RGB

SLIDE 19

Related Work

Fusing Multiple Dynamic Videos

RGB RGBD

SLIDE 20

Related Work

Fusing Multiple Dynamic Videos

SLIDE 21

Related Work

Fusing Multiple Dynamic Videos

SLIDE 22

Related Work

Fusing Multiple Dynamic Videos

SLIDE 23

Related Work

Fusing Multiple Dynamic Videos

SLIDE 24

Related Work

Fusing Multiple Dynamic Videos

SLIDE 25

Related Work

Fusing Multiple Dynamic Videos

SLIDE 26

Related Work

Fusing Multiple Dynamic Videos

SLIDE 27

Related Work

Fusing Multiple Dynamic Videos

SLIDE 28

Related Work

Fusing Multiple Dynamic Videos

SIGGRAPH 2016

Wednesday, 3:30-4:00 PM

SLIDE 29

Related Work

Fusing Multiple Dynamic Videos

SLIDE 30

Related Work

Fusing Multiple Dynamic Videos

SLIDE 31

Related Work

Fusing Multiple Dynamic Videos

SLIDE 32

Our Approach?

SLIDE 33

Video Fields

SLIDE 34

Video Fields

SLIDE 35

Introduction

Video Field

SLIDE 36
SLIDE 37

Introduction

Video Field

SLIDE 38
SLIDE 39

Conception, architecture & implementation of Video Fields

A mixed reality system that fuses multiple surveillance videos into an immersive virtual environment.

SLIDE 40

Video Fields Rendering

  • Integrating automatic segmentation of moving entities
  • Real-time fragment-shader processing

SLIDE 41

Two algorithms to fuse multiple videos: early & deferred pruning

These methods use voxels and meshes, respectively, to render moving entities in the video fields.
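The contrast between the two strategies can be illustrated with a toy sketch; the data, names, and threshold below are purely illustrative, not the paper's. Early pruning discards background primitives before any shading work, while deferred pruning rasterizes everything and discards background fragments in the fragment shader:

```javascript
// Toy contrast of the two pruning strategies. Each candidate primitive
// carries a foreground score from segmentation (illustrative values).
const prims = [
  { id: 0, fg: 0.0 }, { id: 1, fg: 0.9 }, { id: 2, fg: 0.1 }, { id: 3, fg: 0.8 },
];
const THRESH = 0.5;

// Early pruning (voxel pipeline): discard background primitives up front,
// so the shading stage only ever touches moving entities.
function earlyPruning(prims) {
  const kept = prims.filter(p => p.fg > THRESH);
  return { shaded: kept.length, rendered: kept.map(p => p.id) };
}

// Deferred pruning (mesh pipeline): rasterize everything, then discard
// background fragments inside the fragment-shader stage.
function deferredPruning(prims) {
  let shaded = 0;
  const rendered = [];
  for (const p of prims) {
    shaded += 1;                           // every fragment reaches the shader
    if (p.fg > THRESH) rendered.push(p.id); // others are discarded there
  }
  return { shaded, rendered };
}

const early = earlyPruning(prims);
const deferred = deferredPruning(prims);
// Both draw the same moving entities; early pruning shades fewer fragments.
```

The trade-off this models: early pruning does less shading work, while deferred pruning keeps all primitives available until the per-fragment stage.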

SLIDE 42

Achieving cross-platform compatibility with WebGL + Three.js

Runs on smartphones, tablets, desktops, high-resolution large-area wide-field-of-view tiled display walls, and head-mounted displays.

SLIDE 43

System Overview

SLIDE 44

Architecture

Video Fields Flowchart

SLIDE 45

Architecture

Video Fields Flowchart

SLIDE 46

Architecture

Video Fields Flowchart

SLIDE 47
SLIDE 48

Architecture

Video Fields Flowchart

SLIDE 49

Background Modeling

Motivation

  • Provide a background texture for each camera
  • Identify moving entities in the rendering stage
  • Reduce the network bandwidth requirements
SLIDE 50

Background Modeling

Gaussian Mixture Models (GMM)
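The per-pixel mixture update can be sketched as follows — a simplified, single-channel illustration in the style of Stauffer and Grimson, where all parameter values and names are ours, not the system's exact code:

```javascript
// Simplified Stauffer–Grimson style GMM for one grayscale pixel.
// Parameter values (K, learning rate, thresholds) are illustrative.
const K = 3;              // Gaussians per pixel
const ALPHA = 0.05;       // learning rate
const MATCH_SIGMAS = 2.5; // match threshold in standard deviations
const BG_THRESHOLD = 0.7; // cumulative weight treated as background

function makePixelModel() {
  return Array.from({ length: K }, () => ({ w: 1 / K, mean: 128, varr: 900 }));
}

// Update the model with a new intensity sample; returns true if the
// sample is classified as background.
function updatePixel(model, x) {
  let matched = -1;
  for (let i = 0; i < model.length; i++) {
    const g = model[i];
    if (Math.abs(x - g.mean) < MATCH_SIGMAS * Math.sqrt(g.varr)) { matched = i; break; }
  }
  if (matched >= 0) {
    const g = model[matched];
    g.w += ALPHA * (1 - g.w);
    g.mean += ALPHA * (x - g.mean);
    g.varr += ALPHA * ((x - g.mean) ** 2 - g.varr);
  } else {
    // Replace the least-weighted Gaussian with a new one centered at x.
    model.sort((a, b) => b.w - a.w);
    model[model.length - 1] = { w: 0.05, mean: x, varr: 900 };
  }
  // Renormalize weights and sort most-probable first.
  const total = model.reduce((s, g) => s + g.w, 0);
  model.forEach(g => { g.w /= total; });
  model.sort((a, b) => b.w - a.w);
  // Background = top Gaussians whose weights sum to BG_THRESHOLD.
  let cum = 0;
  for (const g of model) {
    if (Math.abs(x - g.mean) < MATCH_SIGMAS * Math.sqrt(g.varr)) return true;
    cum += g.w;
    if (cum > BG_THRESHOLD) break;
  }
  return false;
}

// A pixel that keeps seeing ~128 stays background; a sudden 255 is foreground.
const model = makePixelModel();
for (let t = 0; t < 50; t++) updatePixel(model, 128 + (t % 3));
const isBackground = updatePixel(model, 129);
const isForeground = !updatePixel(model, 255);
```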

SLIDE 51

Background Modeling

Advantages [Stauffer and Grimson]

More adaptive to:

  • different lighting conditions,
  • repetitive motions of scene elements,
  • slowly moving entities
SLIDE 52

Architecture

Video Fields Flowchart

SLIDE 53

Segmentation

Moving Entities

SLIDE 54

Background Modeling

Gaussian Mixture Models (GMM)

SLIDE 55
SLIDE 56

Architecture

Video Fields Flowchart

SLIDE 57

Visibility Test

Plus Opacity Modulation
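One way to read this step (our simplified interpretation, not the paper's exact formulation): a camera contributes to a surface point only if the point passes a depth-based visibility test, and the contribution is faded near the image border to hide seams. A minimal sketch, with all names and thresholds ours:

```javascript
// Hypothetical visibility test with opacity modulation for one camera.
// depthMap[v][u] holds the distance to the nearest surface along that ray.
function sampleWeight(depthMap, u, v, pointDepth, eps = 0.01) {
  const w = depthMap[0].length, h = depthMap.length;
  if (u < 0 || u >= w || v < 0 || v >= h) return 0; // outside the image
  // Visibility: the point must not be farther than the recorded depth.
  if (pointDepth > depthMap[v][u] + eps) return 0;  // occluded
  // Opacity modulation: fade out smoothly near the image border.
  const margin = 2;
  const du = Math.min(u, w - 1 - u) / margin;
  const dv = Math.min(v, h - 1 - v) / margin;
  return Math.min(1, du, dv);
}

const depth = [
  [5, 5, 5, 5, 5, 5],
  [5, 5, 5, 5, 5, 5],
  [5, 5, 2, 2, 5, 5],
  [5, 5, 2, 2, 5, 5],
  [5, 5, 5, 5, 5, 5],
  [5, 5, 5, 5, 5, 5],
];
const visible = sampleWeight(depth, 2, 2, 2.0);  // on the near surface
const occluded = sampleWeight(depth, 2, 2, 4.0); // behind it
const faded = sampleWeight(depth, 1, 3, 2.0);    // near the image border
```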

SLIDE 58
SLIDE 59

Architecture

Video Fields Flowchart

SLIDE 60

Video Fields Mapping

Overview

SLIDE 61

Video Fields Mapping

Challenges

  • 1. Vertex in the 3D model -> pixel in the texture space
  • 2. Pixel in the texture space -> vertex on the ground

The second mapping is useful for projecting a 2D segmentation of a moving entity into the 3D world.
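The two mappings can be sketched for a simplified pinhole camera looking down -z; the intrinsics and ground-plane setup below are our illustrative assumptions, while the system itself uses full projection matrices:

```javascript
// Pinhole camera at the origin, looking down -z, focal length f (pixels).
// All values are illustrative, not the paper's calibration.
const f = 500, cx = 320, cy = 240;

// 1. Vertex in the 3D model -> pixel in the texture space.
function projectToPixel([x, y, z]) {
  return [cx + f * (x / -z), cy + f * (y / -z)];
}

// 2. Pixel in the texture space -> vertex on the ground plane y = yGround.
// Invert the projection by intersecting the pixel's ray with the plane
// (assumes the ray is not parallel to the plane).
function unprojectToGround([u, v], yGround) {
  const dir = [(u - cx) / f, (v - cy) / f, -1]; // ray direction
  const t = yGround / dir[1];                   // plane intersection
  return [dir[0] * t, yGround, dir[2] * t];
}

// Round trip: a ground point projects to a pixel and comes back.
const p = [1.0, -2.0, -10.0];
const px = projectToPixel(p);
const q = unprojectToGround(px, -2.0);
```

The second function is what lets a 2D segmentation mask be lifted onto the ground plane of the virtual environment.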
SLIDE 62

Video Fields Mapping

Projection Mapping

SLIDE 63

Video Fields Mapping

Perspective correction

SLIDE 64

Video Fields Mapping

Depth Map / Hashing Function

SLIDE 65

Early Pruning for Rendering Moving Entities

Voxels

SLIDE 66

Deferred Pruning for Rendering Moving Entities

Billboards

SLIDE 67

Visual Comparison

Early Pruning vs. Deferred Pruning

SLIDE 68

View-dependent Rendering
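A common way to realize view-dependent rendering, given here as our assumption rather than the paper's exact weighting: blend the cameras' contributions with weights that favor cameras aligned with the viewer's current direction.

```javascript
// View-dependent blending weights: favor cameras aligned with the viewer.
// The cosine-power heuristic and the exponent are our illustrative choice.
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function normalize(v) {
  const n = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / n, v[1] / n, v[2] / n];
}

function blendWeights(viewDir, cameraDirs, power = 8) {
  const vd = normalize(viewDir);
  const raw = cameraDirs.map(d => Math.pow(Math.max(0, dot(vd, normalize(d))), power));
  const total = raw.reduce((s, x) => s + x, 0) || 1;
  return raw.map(x => x / total);
}

// Viewer looking along +x: camera 0 (along +x) dominates camera 1 (+z);
// a diagonal viewer splits the weight evenly.
const w = blendWeights([1, 0, 0], [[1, 0, 0], [0, 0, 1]]);
const w2 = blendWeights([1, 0, 1], [[1, 0, 0], [0, 0, 1]]);
```

Raising the cosine to a power sharpens the transition, so the closest-aligned camera dominates as the viewer moves through the scene.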

SLIDE 69

View-dependent Rendering

SLIDE 70

View-dependent Rendering

SLIDE 71

View-dependent Rendering

SLIDE 72

Experimental Results

Early Pruning vs. Deferred Pruning

SLIDE 73
SLIDE 74

Experimental Results

Early Pruning vs. Deferred Pruning

SLIDE 75

Experimental Results

Early Pruning vs. Deferred Pruning

SLIDE 76

Visual Comparison

Early Pruning vs. Deferred Pruning

SLIDE 77
SLIDE 78

Future Work

Scale Up - Hundreds of cameras

SLIDE 79

Future Work

Bandwidth Problem

SLIDE 80

Future Work

Holoportation with RGB cameras

SLIDE 81

Acknowledgement

Augmentarium Lab | GVIL | UMIACS

SLIDE 82

Acknowledgement

NSF | Nvidia | MPower | UMIACS

SLIDE 83

Video Fields

www.Video-Fields.com

Thank you! Questions or comments?

Ruofei Du and Amitabh Varshney
Augmentarium Lab | GVIL | UMIACS
Web3D 2016