

SLIDE 1

I-see-3D!

An interactive and immersive system that dynamically adapts 2D projections to the location of a user’s eyes

S. Piérard, V. Pierlot, A. Lejeune and M. Van Droogenbroeck

INTELSIG, Montefiore Institute, University of Liège, Belgium

IC3D – December 5th, 2012

SLIDE 2

Outline

1. Introduction
2. Method
3. Results
4. Conclusion

SLIDE 3

Outline

1. Introduction
2. Method
3. Results
4. Conclusion

SLIDE 4

Trompe-l’œils give the illusion of 3D from a single viewpoint

This is the work of the artist Julian Beever.

SLIDE 5

Our goal

Our non-intrusive system projects a large trompe-l’œil on the floor with head-coupled perspective, giving the illusion of an immersive and interactive 3D environment using only 2D projectors.

◮ The user does not need to wear glasses, nor to watch a screen
◮ The user can move freely within the virtual environment
◮ Several range sensors are used (scanners, Kinects)
◮ Multiple projectors can be used to cover the whole area

SLIDE 6

Some cues from which we can infer 3D

3D = {scene structure, depth, thickness, occlusions, …}

Cues:

◮ perspective
◮ lighting (reflections, shadows, …)
◮ motion of the observer and objects
◮ knowledge (familiar objects: geometry, typical size)
◮ stereoscopy
◮ …

SLIDE 7

Some previous systems with head-coupled perspective

The rendered images depend on the user’s viewpoint.

Cruz-Neira et al., 1992; Lee, 2008; Francone et al., 2011

In those works, the surfaces (screens or walls) are rectangular. There is no deformation between the computed images and those on the surfaces. A surface is a “window” on the virtual world. The projection is perspective.

In our system, there is a deformation between the computed image and the one on the floor. We take into account the parameters of the projectors. The projection is not perspective.

SLIDE 8

Outline

1. Introduction
2. Method
3. Results
4. Conclusion

SLIDE 9

We use multiple sensors to estimate the head position

The selected non-intrusive sensors behave perfectly in darkness:

◮ low-cost range cameras (Kinects) are placed around the scene
◮ several laser range scanners observe a horizontal plane located 15 cm above the floor.

SLIDE 10

The non-intrusive head localization procedure

[Figure: head localization pipeline. The Kinects (via pose recovery) and the laser scanners each produce head location hypotheses (single or multiple per sensor); these pass through a validation gate (rejection of outliers), then data fusion and analysis, and finally a Kalman filter, yielding the head position estimate and its uncertainty.]

The filter has been optimized to minimize the variance of its output while keeping the bias (delay) within an acceptable range. We use the constant white noise acceleration (CWNA) model.
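
The slide only names the CWNA model. Below is a minimal sketch, in C++, of a CWNA Kalman filter for one coordinate of the head position; it is an illustration, not the authors' code, and the noise settings q and r, the 30 Hz rate, and the measurement values are hypothetical.

#include <cstdio>

// One-dimensional Kalman filter under the constant white noise
// acceleration (CWNA) model: state = (position, velocity), with the
// acceleration modelled as zero-mean white noise of intensity q.
struct Kalman1D {
    double x = 0.0, v = 0.0;                    // state estimate
    double P[2][2] = {{1.0, 0.0}, {0.0, 1.0}};  // estimate covariance
    double q, r;  // process noise intensity, measurement variance
    Kalman1D(double q_, double r_) : q(q_), r(r_) {}

    void predict(double dt) {
        x += v * dt;  // constant-velocity transition between samples
        // P <- F P F^T + Q with F = [[1, dt], [0, 1]] and the CWNA
        // process noise Q = q * [[dt^3/3, dt^2/2], [dt^2/2, dt]].
        double p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1]
                   + q * dt * dt * dt / 3.0;
        double p01 = P[0][1] + dt * P[1][1] + q * dt * dt / 2.0;
        double p11 = P[1][1] + q * dt;
        P[0][0] = p00; P[0][1] = p01; P[1][0] = p01; P[1][1] = p11;
    }

    void update(double z) {  // z: fused head-position measurement
        double S  = P[0][0] + r;                    // innovation covariance
        double k0 = P[0][0] / S, k1 = P[1][0] / S;  // Kalman gain
        double y  = z - x;                          // innovation
        x += k0 * y;
        v += k1 * y;
        double p00 = P[0][0], p01 = P[0][1];
        P[0][0] = (1.0 - k0) * p00;
        P[0][1] = (1.0 - k0) * p01;
        P[1][0] -= k1 * p00;
        P[1][1] -= k1 * p01;
    }
};

int main() {
    Kalman1D f(/*q=*/1.0, /*r=*/0.01);  // hypothetical tuning values
    const double meas[] = {0.00, 0.02, 0.05, 0.09, 0.14};  // fake data (m)
    for (double z : meas) {
        f.predict(0.033);  // assumed ~30 Hz measurement rate
        f.update(z);
        std::printf("pos = %.3f m, vel = %.3f m/s\n", f.x, f.v);
    }
    return 0;
}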

SLIDE 11

The head-coupled projection

[Figure: projection geometry — the projector with its image plane carrying coordinates (u, v); a virtual point (x, y, z) projected from the head position (x_h, y_h, z_h) onto the floor point (x_f, y_f, z_f = 0); parameters supplied by the projector calibration and the head localization system; upper and lower clipping planes bounding the viewing volume.]

$$
\begin{pmatrix} s s' u \\ s s' v \\ s s' w \\ s s' \end{pmatrix}
=
\begin{pmatrix}
m_{1,1} & m_{1,2} & 0 & m_{1,4} \\
m_{2,1} & m_{2,2} & 0 & m_{2,4} \\
0 & 0 & \min(s) & 0 \\
m_{3,1} & m_{3,2} & 0 & m_{3,4}
\end{pmatrix}
\begin{pmatrix}
z_h & 0 & -x_h & 0 \\
0 & z_h & -y_h & 0 \\
0 & 0 & \frac{z_h - \max(z)}{\max(z)} & 0 \\
0 & 0 & -1 & z_h
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
$$
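
To make the projection concrete, here is a minimal sketch that evaluates the same mapping point by point: the virtual point is projected from the head onto the floor plane z = 0, then mapped to projector pixels with the calibrated 3×4 matrix. The zero entries and the depth row follow the reconstruction above (inferred from the slide's layout), and the calibration and geometry values in main are hypothetical.

#include <cstdio>

struct Pixel { double u, v, w; };

// Head-coupled mapping: project the virtual point from the head position
// onto the floor plane z = 0, then map the floor point to projector pixel
// coordinates with the calibrated 3x4 matrix m (its third column is unused
// because z_f = 0). The depth value w mirrors the reconstructed third row.
Pixel headCoupledProject(double x, double y, double z,     // virtual point
                         double xh, double yh, double zh,  // head position
                         const double m[3][4],             // projector calibration
                         double zMax) {                    // upper clipping plane
    // The ray head -> point meets the floor at:
    // s * (xf, yf, 1) = (zh*x - xh*z, zh*y - yh*z, zh - z), with s = zh - z.
    double s  = zh - z;
    double xf = (zh * x - xh * z) / s;
    double yf = (zh * y - yh * z) / s;
    // Projector calibration: s' * (u, v, 1) = m * (xf, yf, 0, 1).
    double sp = m[2][0] * xf + m[2][1] * yf + m[2][3];
    double u  = (m[0][0] * xf + m[0][1] * yf + m[0][3]) / sp;
    double v  = (m[1][0] * xf + m[1][1] * yf + m[1][3]) / sp;
    // Depth: s*s'*w = min(s) * ((zh - zMax)/zMax) * z, where
    // min(s) = zh - zMax is reached at the upper clipping plane.
    double w = (zh - zMax) * ((zh - zMax) / zMax) * z / (s * sp);
    return {u, v, w};
}

int main() {
    // Hypothetical calibration: ~1000 px focal terms, principal point (640, 360).
    const double m[3][4] = {{1000, 0, 0, 640},
                            {0, 1000, 0, 360},
                            {0,    0, 0,   1}};
    Pixel p = headCoupledProject(0.5, 0.5, 0.3,   // virtual point (m)
                                 0.0, 0.0, 1.7,   // head position (m)
                                 m, 1.0);         // zMax = 1 m
    std::printf("u = %.1f px, v = %.1f px, w = %.3f\n", p.u, p.v, p.w);
    return 0;
}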

SLIDE 12

Implementation

◮ We use the OpenGL and OpenNI libraries (Kinect → OpenNI → 3D pose recovery).
◮ Our system can be implemented without any shader (see the sketch after this list).
◮ We take care of the clipping planes (limited viewing volume).
◮ The method is accurate to the pixel (the images are rendered directly in the projector’s image plane).
◮ The virtual lights are placed at the real lights’ locations.
◮ The shadows are rendered using Carmack’s reverse algorithm.
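
Since the deck says the system runs without any shader, the composed matrix presumably goes through the fixed-function pipeline. Below is a minimal sketch, assuming the column-major layout expected by glLoadMatrixd; all numeric values are placeholders rather than the authors' calibration.

#include <cstdio>

// out = a * b, repacked column-major as expected by glLoadMatrixd.
void composeForOpenGL(const double a[4][4], const double b[4][4],
                      double out[16]) {
    for (int r = 0; r < 4; ++r)
        for (int c = 0; c < 4; ++c) {
            double s = 0.0;
            for (int k = 0; k < 4; ++k) s += a[r][k] * b[k][c];
            out[c * 4 + r] = s;  // element (r, c) stored at index c*4 + r
        }
}

int main() {
    // Hypothetical projector calibration and head position.
    const double m[3][4] = {{1000, 0, 0, 640}, {0, 1000, 0, 360}, {0, 0, 0, 1}};
    double xh = 0.0, yh = 0.0, zh = 1.7, zMax = 1.0;
    double sMin = zh - zMax;  // min(s) over the viewing volume

    // The two 4x4 factors from slide 11 (zero entries inferred).
    double a[4][4] = {{m[0][0], m[0][1], 0.0,  m[0][3]},
                      {m[1][0], m[1][1], 0.0,  m[1][3]},
                      {0.0,     0.0,     sMin, 0.0    },
                      {m[2][0], m[2][1], 0.0,  m[2][3]}};
    double b[4][4] = {{zh,  0.0, -xh,                0.0},
                      {0.0, zh,  -yh,                0.0},
                      {0.0, 0.0, (zh - zMax) / zMax, 0.0},
                      {0.0, 0.0, -1.0,               zh }};

    double gl[16];
    composeForOpenGL(a, b, gl);
    // With a GL context, this matrix would be loaded once per frame:
    //   glMatrixMode(GL_PROJECTION);
    //   glLoadMatrixd(gl);
    for (int i = 0; i < 16; ++i)
        std::printf("%10.3f%c", gl[i], (i % 4 == 3) ? '\n' : ' ');
    return 0;
}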

SLIDE 13

Outline

1. Introduction
2. Method
3. Results
4. Conclusion

SLIDE 14

A first video taken from the user’s viewpoint

SLIDE 15

Another video taken from an external viewpoint

SLIDE 16

Outline

1

Introduction

2

Method

3

Results

4

Conclusion

SLIDE 17

Conclusions

Our system gives the illusion of 3D to a single user. A virtual scene is projected all around the user on the floor with head-coupled perspective. The user can walk freely in the virtual world and interact with it directly.

◮ Multiple sensors are used to recover the head position. The estimate is provided by a Kalman filter.
◮ The selected sensors behave perfectly in total darkness, and the user does not need to wear anything.
◮ The whole system (sensors and projectors) can be calibrated in less than 10 minutes.
◮ The projection is neither orthographic nor perspective.

The rendering method is accurate to the pixel: the images are rendered directly in the projector’s image plane.

SLIDE 18

How to cite this work

S. Piérard, V. Pierlot, A. Lejeune, and M. Van Droogenbroeck. I-see-3D! An interactive and immersive system that dynamically adapts 2D projections to the location of a user’s eyes. In International Conference on 3D Imaging (IC3D), Liège, Belgium, December 2012.
