Virtual Reality Modeling (PowerPoint PPT Presentation)


SLIDE 1

Virtual Reality Modeling

Electrical and Computer Engineering Dept. (images from http://www.okino.com/)

SLIDE 2

Modeling & VR Toolkits: system architecture

SLIDE 3

The VR object modeling cycle:

I/O mapping (drivers); geometric modeling; kinematics modeling; physical modeling; object behavior (intelligent agents); model management.

SLIDE 4

The VR modeling cycle

SLIDE 5

VR geometric modeling:

Object surface shape: polygonal meshes (the vast majority) or splines (for curved surfaces);

Object appearance: lighting (shading) and texture mapping.

SLIDE 6

The surface polygonal (triangle) mesh

Figure: a strip of triangles with vertices (X0,Y0,Z0) through (X5,Y5,Z5); interior vertices are shared between adjacent triangles, while boundary vertices are non-shared.

Triangle meshes are preferred because shared vertices make them memory- and computationally efficient.
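The saving from shared vertices can be sketched with an indexed triangle mesh, the usual in-memory layout (the coordinates below are made up for illustration; the slide only gives the vertex names X0..X5):

```python
# Illustrative sketch: an indexed triangle mesh stores each shared vertex once,
# and triangles reference vertices by index instead of repeating coordinates.

vertices = [                       # (Xi, Yi, Zi); values made up
    (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0),
    (1.5, 1.0, 0.0), (2.0, 0.0, 0.0), (2.5, 1.0, 0.0),
]
triangles = [(0, 1, 2), (1, 3, 2), (1, 4, 3), (4, 5, 3)]  # index triples

# Without sharing, each triangle stores 3 full vertices = 9 floats;
# with sharing, each coordinate is stored once, plus 3 small indices per triangle.
floats_unshared = 9 * len(triangles)   # 36 floats
floats_shared = 3 * len(vertices)      # 18 floats (plus 12 indices)
print(floats_unshared, floats_shared)
```

Halving the vertex storage also halves per-vertex transform and lighting work, which is the computational saving the slide refers to.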

SLIDE 7

Object spline-based shape:

Another way of representing virtual objects;

The functions are of higher degree than the linear functions describing a polygon; they use less storage and provide increased surface smoothness;

Parametric splines are curves whose points x(t), y(t), z(t) are polynomial functions of a parameter t = [0,1], with constant coefficients a, b, c.
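The slide leaves the polynomial form implicit; assuming a cubic per coordinate (a common choice), one coordinate of the curve can be evaluated as below. The coefficient values are made up for illustration; real splines derive them from control points.

```python
# Sketch: evaluate one coordinate of a parametric cubic spline
# x(t) = a*t^3 + b*t^2 + c*t + d, with t in [0, 1].

def eval_cubic(coeffs, t):
    """Evaluate a*t^3 + b*t^2 + c*t + d by Horner's rule."""
    a, b, c, d = coeffs
    return ((a * t + b) * t + c) * t + d

x_coeffs = (2.0, -3.0, 0.0, 1.0)   # illustrative; gives x(0)=1, x(1)=0
samples = [eval_cubic(x_coeffs, i / 10) for i in range(11)]
print(samples[0], samples[-1])
```

Sampling the three coordinate polynomials at a list of t values is exactly how such a curve is turned back into line segments (or a surface into triangles) for rendering.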

SLIDE 8

Object spline-based shape:

Parametric surfaces are an extension of parametric splines, with point coordinates given by x(s,t), y(s,t), z(s,t), where s = [0,1] and t = [0,1];

β-splines are controlled indirectly through four control points (more in the physical modeling section).

SLIDE 9

Object polygonal shape:

Can be programmed from scratch using OpenGL or another toolkit editor; this is tedious and requires skill;

Can be obtained from CAD files;

Can be created using a 3-D digitizer (stylus) or a 3-D scanner (tracker, cameras, and laser);

Can be purchased from existing online databases (e.g., the Viewpoint database). Such files have vertex location and connectivity information, but are static.

SLIDE 10

Geometric Modeling

CAD-file based models: done using AutoCAD; each moving part is a separate file; files need to be converted to formats compatible with VR toolkits; advantage: reuse of preexisting models in manufacturing applications.

SLIDE 11

Geometric Modeling

Venus de Milo created using the HyperSpace 3D digitizer: 4,200 textured polygons, rendered using the NuGraph toolkit.

SLIDE 12

Polhemus 3-D scanners:

Eliminate direct contact with the object;

Use two cameras, a laser, and one magnetic tracker (two if movable objects are scanned);

Scanning resolution is 0.5 mm at 200 mm range; scanning speed is 50 lines/sec; the scanner-object range is 75-680 mm.

SLIDE 13

Geometric Modeling

Polhemus FastScan 3D scanner (can scan objects up to 3 m long).

SLIDE 14

DeltaSphere 3000 3D scanner (www.3rdtech.com)

Large models need large-volume scanners. The 3rdTech scanner uses a time-of-flight modulated laser beam to determine position.

Features: scanning range up to 40 ft; resolution 0.01 in; accuracy 0.3 in; scan density up to 7,200 samples/360º; complete scene scanning in 10-30 minutes (the scene has to be static); optional digital color camera (2008x1504 resolution) to add color to models (requires a second scan and reduces elevation coverage to 77º).

Figure: 360º horizontal and 150º elevation coverage, driven by an electrical motor and CPU.

SLIDE 15

DeltaSphere 3000 3D scanner (www.3rdtech.com) vs. the Polhemus scanner:

Feature      | Polhemus scanner | DeltaSphere scanner
Range        | 0.56 m           | 14.6 m
Resolution   | 0.5 mm @ 0.2 m   | 0.25 mm
Control      | manual           | automatic
Speed        | 50 lines/sec     | 25,000 samples/sec

SLIDE 16

DeltaSphere 3000 image (www.3rdtech.com)

SLIDE 17

DeltaSphere 3000 software-compensated image (www.3rdtech.com)

SLIDE 18

Conversion of scanner data:

Scanners produce a dense "cloud" of vertices (x,y,z). Using packages such as Wrap (www.geomagic.com), the point data is transformed into surface data (including editing and decimation).

Figure: point cloud from the scanner; polygonal mesh after decimation.

SLIDE 19

Figure: polygonal surface; NURBS (non-uniform rational β-spline) surface patches.

SLIDE 20

Geometric Modeling: using online databases

Higher-resolution model: > 20,000 polygons. Low-resolution model: 600 polygons.

SLIDE 21

Geometric Modeling

SLIDE 22

Object Visual Appearance

Scene illumination (local or global);

Texture mapping; multi-textures;

Use of textures to do illumination in the rasterizing stage of the pipeline.

SLIDE 23

Scene illumination

Local methods (flat shading, Gouraud shading, Phong shading) treat objects in isolation. They are computationally faster than global illumination methods;

Global illumination treats the influence of one object on another object's appearance. It is computationally more demanding, but produces more realistic scenes.

SLIDE 24

Local illumination methods

Figure: flat shading model; Gouraud shading model; Phong shading model.

Gouraud shading interpolates vertex intensities along each scan line:

I_p = I_b - (I_b - I_a) · (x_b - x_p) / (x_b - x_a)

SLIDE 25

Figure: flat-shaded Utah Teapot; Phong-shaded Utah Teapot.

SLIDE 26

Global scene illumination

Accounts for the inter-reflections and shadows cast by objects on each other.
SLIDE 27

Radiosity illumination

Results in a more realistic-looking scene. Figure: without radiosity; with radiosity.

SLIDE 28

Radiosity illumination

... but until recently only for fly-throughs (fixed geometry). A second process was added so that the scene geometry can be altered.

SLIDE 29

Texture mapping

Done in the rasterizer phase of the graphics pipeline, by assigning texture-space coordinates to polygon vertices (or splines), then mapping these to pixel coordinates;

Textures increase scene realism;

Textures provide better 3-D spatial cues (they are perspective-transformed);

They reduce the number of polygons in the scene, which increases the frame rate (example: tree models).

SLIDE 30

Textured room image for increased realism (from http://www.okino.com/)

SLIDE 31

How to create textures:

Models are available online in texture "libraries" of cars, people, construction materials, etc.;

Custom textures can be made from scanned photographs, or

By using an interactive paint program to create bitmaps.

SLIDE 32

VR Geometric Modeling

Tree as a full geometric model: 45,992 polygons. Tree represented as a texture: 1 polygon, 1246x1280 pixels (www.imagecels.com).

SLIDE 33

Multi-texturing:

Several texels can be overlaid on one pixel;

A texture blending cascade is made up of a series of texture stages (from "Real-Time Rendering"): interpolated vertex values enter stage 0, each stage blends in one more texture value (texture value 0, 1, 2), and the result goes to the polygon/image buffer.

SLIDE 34

Multi-texturing allows more complex textures: bump maps, transparency textures, normal textures, background textures, reflectivity textures.

SLIDE 35

Multi-texturing for bump mapping:

Lighting effects caused by irregularities on the object surface are simulated through "bump mapping";

This encodes the surface irregularities as textures;

There is no change in model geometry and no added computation at the geometry stage;

Done as part of the per-pixel shading operations of the NSR.
SLIDE 36

Multi-texturing for lighting:

Several texels can be overlaid on one pixel;

One application is more realistic lighting. Polygonal lighting is real-time, but requires many polygons (triangles) for a realistic appearance;

Vertex lighting of a low-polygon-count surface: the lights look diffuse, tessellated. Vertex lighting of a high-polygon-count surface: the lights have a realistic appearance, at a high computational load.

(from NVIDIA technical brief)

SLIDE 37

Multi-texturing (texture blending):

Realistic-looking lighting can be done with 2-D textures called "light maps";

Not applicable to real-time moving objects (the maps need to be recomputed when an object moves);

Figure: standard lighting; light-map 2-D texture; light-map texture overlaid on top of the wall texture: realistic and low polygon count, but not real-time!

(from NVIDIA technical brief)

SLIDE 38

KINEMATICS MODELING:

Homogeneous transformation matrices; object position; transformation invariants; object hierarchies; viewing the 3-D world.

SLIDE 39

Object Hierarchies:

Allow models to be partitioned into a hierarchy and become dynamic;

Segments are either parents (higher-level objects) or children (lower-level objects);

The motion of a parent is replicated by its children, but not the other way around;

Examples: the virtual human and the virtual hand;

At the top of the hierarchy is the "world global transformation" that determines the view of the scene.

SLIDE 40

VR Kinematics Modeling

Model hierarchy: a) static model (Viewpoint Datalabs); b) segmented model.

SLIDE 41

Figure: world, camera, source, and receiver systems of coordinates (x, y, z axes), with the transforms T_W←palm(t), T_palm←1(t), T_1←2(t), T_2←3(t), T_3←fingertip along the hand hierarchy.

The fingertip position in global coordinates is the composition of the homogeneous transforms along the chain:

T_global←fingertip(t) = T_global←W(t) · T_W←source · T_source←palm(t) · T_palm←1(t) · T_1←2(t) · T_2←3(t) · T_3←fingertip
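Composing such a chain is just repeated 4x4 matrix multiplication. A minimal pure-Python sketch, using translation-only transforms with made-up integer offsets so the chain stays easy to check:

```python
# Sketch: composing homogeneous 4x4 transforms along a hand hierarchy.
# All offsets are illustrative; real transforms also carry rotations.

def mat_mul(A, B):
    """Multiply two 4x4 matrices stored as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(tx, ty, tz):
    """Homogeneous translation matrix."""
    return [[1, 0, 0, tx], [0, 1, 0, ty], [0, 0, 1, tz], [0, 0, 0, 1]]

# Chain: W<-source, source<-palm, palm<-1, 1<-2, 2<-3, 3<-fingertip.
chain = [translation(1, 0, 0), translation(0, 2, 0), translation(0, 0, 1),
         translation(2, 0, 0), translation(3, 0, 0), translation(1, 0, 0)]

T = translation(0, 0, 0)           # T_global<-W, identity here
for M in chain:
    T = mat_mul(T, M)              # left-to-right composition, as in the slide

fingertip_in_global = [row[3] for row in T[:3]]
print(fingertip_in_global)
```

Because several links depend on time t, a VR toolkit recomputes this product every frame from the current tracker and joint readings.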

SLIDE 42

Physical modeling

Models the physical characteristics of the object and the way they change: inertia, surface roughness and texture, compliance (hard/soft), and deformation mode (elastic/plastic);

Handled by the haptics rendering pipeline (which should be synchronized with the graphics pipeline).

SLIDE 43

The Haptics Rendering Pipeline (revisited)

Haptics pipeline: Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

Graphics pipeline (for comparison): Application → Scene Traversal → Geometry (view transform, lighting, projection) → Rasterizer (texturing) → Display.

adapted from (Popescu, 2001)

SLIDE 44

The Haptics Rendering Pipeline: Collision Detection

Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

Collision detection is equivalent to haptic clipping.

SLIDE 45

Collision detection:

Uses bounding-box collision detection for fast response;

Two types of bounding boxes: fixed size, or variable size (depending on the enclosed object's orientation);

Fixed size is computationally faster, but less precise.

Figure: variable-size bounding box; fixed-size bounding box.
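The fixed-size (axis-aligned) test is fast because two boxes intersect exactly when their extents overlap on every axis; a minimal sketch with made-up box extents:

```python
# Sketch of fixed-size, axis-aligned bounding-box (AABB) collision detection:
# boxes intersect only if their intervals overlap on all three axes.

def aabb_overlap(min_a, max_a, min_b, max_b):
    """True if boxes [min_a, max_a] and [min_b, max_b] intersect."""
    return all(min_a[i] <= max_b[i] and min_b[i] <= max_a[i] for i in range(3))

# Box A spans [0,1]^3; B overlaps it, C is disjoint.
print(aabb_overlap((0, 0, 0), (1, 1, 1), (0.5, 0.5, 0.5), (1.5, 1.5, 1.5)))
print(aabb_overlap((0, 0, 0), (1, 1, 1), (2, 2, 2), (3, 3, 3)))
```

Six comparisons per object pair is why this runs comfortably at haptic update rates; the imprecision the slide mentions comes from a rotated object no longer filling its fixed box, so an exact test must follow on the enclosed polygons.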

SLIDE 46

Collision response

Object deformation (if objects are non-rigid);

Parametric surfaces vs. polygonal meshes.

SLIDE 47

Surface cutting:

An extreme case of surface "deformation" is surface cutting. This happens when the contact force exceeds a given threshold;

When cutting, one vertex gets a co-located twin. Subsequently the twin vertices (V1, V2) separate based on spring/damper laws and the cut enlarges.

Figure: cutting instrument; mesh before the cut; mesh after the cut.

SLIDE 48

Collision response: surface deformation

SLIDE 49

The Haptics Rendering Pipeline: Force Calculation

Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

SLIDE 50

Haptic Interface Point (HIP)

Figure: the haptic interface point penetrates the object polygon; the penetration distance is measured from the polygon surface to the haptic interface point.

SLIDE 51

Force output for homogeneous elastic objects

F = K · d,    for 0 ≤ d ≤ d_max
F = F_max,    for d_max < d

where F_max is the haptic interface's maximum output force.

Figure: force vs. penetration distance d. A hard object has a steep slope and saturates at F_max at a small d_max1; a soft object saturates at a larger d_max2.
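The spring-with-saturation law above is a few lines of code; the stiffness and limits below are made-up example values:

```python
# Sketch of the elastic-object force law: linear spring F = K*d up to d_max,
# saturating at the interface's maximum force F_max beyond that.

def elastic_force(d, k, d_max, f_max):
    """Contact force for penetration depth d (d <= 0 means no contact)."""
    if d <= 0:
        return 0.0
    if d <= d_max:
        return k * d               # elastic region
    return f_max                   # interface saturates at its maximum force

# Illustrative "hard" object: stiff spring, saturates at small penetration.
print(elastic_force(0.002, k=5000.0, d_max=0.003, f_max=15.0))
print(elastic_force(0.010, k=5000.0, d_max=0.003, f_max=15.0))
```

The saturation branch exists because of the hardware, not the physics: the device simply cannot output more than F_max, so a hard object is one whose K drives the output into saturation at a very small d_max.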

SLIDE 52

Force Calculation: elastic objects with a harder interior

F = K_1 · d,    for 0 ≤ d ≤ d_discontinuity
F = K_1 · d_discontinuity + K_2 · (d - d_discontinuity),    for d_discontinuity ≤ d

where d_discontinuity is the penetration distance at which the object stiffness changes.

Figure: force F vs. penetration distance d, with a slope change at d_discontinuity.

SLIDE 53

Force Calculation: virtual pushbutton

F = K_1 · d · (1 - u_m) + F_r · u_m + K_2 · (d - n) · u_n

where u_m and u_n are unit step functions at penetration distances m and n, and F_r is the reaction force during button travel.

Figure: force vs. penetration distance d against a virtual wall; spring force up to m, a plateau at F_r, then a steep rise past n.

SLIDE 54

Force Calculation: plastic deformation

F_initial = K · d,    for 0 ≤ d ≤ m
F = 0 during relaxation;
F_subsequent = K_1 · d · u_m,    for 0 ≤ d ≤ n
F = 0 during relaxation,

where u_m is the unit step function at m.

SLIDE 55

Force Calculation: virtual wall

Insufficient stiffness generates energy due to the sampling time. To avoid system instabilities we add a damping term:

F = K_wall · Δx + B · v,    for v < 0 (moving into the wall)
F = K_wall · Δx,    for v ≥ 0 (moving away from the wall)

where B is a directional damper.

Figure: force vs. time while moving into the wall (v < 0) and while moving away from it (v ≥ 0).
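The directional damper is a one-branch condition on the velocity sign; a minimal sketch following the slide's sign convention (v < 0 means moving into the wall), with made-up gains:

```python
# Sketch of the damped virtual-wall law: the damper B acts only while moving
# into the wall (v < 0), dissipating the energy injected by discrete sampling.

def wall_force(dx, v, k_wall, b):
    """F = K_wall*dx + B*v for v < 0; F = K_wall*dx for v >= 0."""
    if v < 0:                          # moving into the wall: damper active
        return k_wall * dx + b * v
    return k_wall * dx                 # moving away: spring only

print(wall_force(0.01, -0.2, k_wall=1000.0, b=5.0))   # pressing in
print(wall_force(0.01, 0.1, k_wall=1000.0, b=5.0))    # withdrawing
```

Switching the damper off on exit matters: an always-on damper would resist withdrawal too, making the wall feel sticky and spoiling the "cleanliness of the final release" the next slide calls wallness.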

SLIDE 56

"Wallness": crispness of the initial contact, cleanliness of the final release.

SLIDE 57

The Haptics Rendering Pipeline: Force Smoothing

Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

SLIDE 58

Force shading:

F_smoothed = K_object · d · N,    for 0 ≤ d ≤ d_max
F_smoothed = F_max · N,    for d_max < d

where N is the direction of the contact force based on vertex normal interpolation.

Figure: real cylinder contact forces; non-shaded contact forces; contact forces after shading.

SLIDE 59

The haptic mesh:

A single HIP is not sufficient to capture the geometry of fingertip-object contact, as in a haptic glove;

The curvature of the fingertip and the object deformation need to be realistically modeled.

Figure: screen sequence for squeezing an elastic virtual ball.

SLIDE 60

Haptic mesh

Figure: the haptic mesh over the fingertip, with a penetration distance for each mesh point i.

SLIDE 61

Haptic mesh force calculation

For each haptic interface point i of the mesh:

F_haptic-mesh_i = K_object · d_mesh_i · N_surface

where d_mesh_i are the interpenetration distances at the mesh points and N_surface is the weighted surface normal of the contact polygon.

SLIDE 62

The Haptics Rendering Pipeline: Force Mapping

Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

SLIDE 63

Force mapping

Force displayed by the Rutgers Master interface:

F_displayed = (Σ F_haptic-mesh) · cos θ

where θ is the angle between the mesh force resultant and the piston.
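Combining the haptic-mesh forces with this mapping is a sum followed by a projection. A sketch treating the mesh forces as scalar magnitudes along a common resultant direction; stiffness and depths are made-up values:

```python
# Sketch: haptic-mesh forces (K_object * d_i each) summed and projected onto
# the actuator (piston) axis through cos(theta).
import math

def displayed_force(mesh_forces, theta):
    """F_displayed = (sum of mesh force magnitudes) * cos(theta)."""
    return sum(mesh_forces) * math.cos(theta)

k_object = 500.0
depths = [0.001, 0.002, 0.0015]              # penetration at each mesh point
mesh_forces = [k_object * d for d in depths]  # F_i = K_object * d_i

print(round(displayed_force(mesh_forces, theta=0.0), 6))
print(round(displayed_force(mesh_forces, theta=math.pi / 3), 6))
```

The cos θ factor means only the force component along the piston can actually be felt; a resultant perpendicular to the actuator (θ = 90º) displays as zero.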

SLIDE 64

The Haptics Rendering Pipeline: Haptic Texturing

Collision Detection → Force Calculation / Force Smoothing / Force Mapping / Haptic Texturing → Force/Tactile Traversal → Display.

SLIDE 65

Tactile mouse

Forces are produced only in the z direction. Figure: tactile force-vs.-time patterns produced by the Logitech mouse.

SLIDE 66

Haptic mouse texture simulation. Textures can change according to movement direction: velvet.

SLIDE 67

Surface haptic texture produced by the PHANToM interface

Forces in all directions;

Friction simulation: force proportional to the normal force;

Viscosity: force proportional to velocity;

Inertia: F = m · a.

SLIDE 68

Surface haptic texture produced by the PHANToM interface

Equivalent to a displacement (bump) map.

SLIDE 69

Figure: haptic interface and Haptic Interface Point on the object polygon (axes X, Y, Z).

F_texture = A · sin(m x) · sin(n y)

where A, m, n are constants:

A gives the magnitude of the vibrations;

m and n modulate the frequency of the vibrations in the x and y directions;

F can be perceived as shape or friction.
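The texture force is a direct evaluation of the sinusoid at the contact position; A, m, n below are illustrative values:

```python
# Sketch of the sinusoidal haptic texture: F = A * sin(m*x) * sin(n*y).
import math

def texture_force(x, y, amplitude=1.0, m=2.0, n=3.0):
    """Texture force magnitude felt at surface position (x, y)."""
    return amplitude * math.sin(m * x) * math.sin(n * y)

# Peaks where both sines reach 1; zero along the sin(m*x)=0 / sin(n*y)=0 lines.
print(round(texture_force(math.pi / 4, math.pi / 6), 6))
print(round(texture_force(0.0, 1.0), 6))
```

Evaluated every haptic update as the HIP slides over the polygon, the resulting force ripple is what the user perceives as a bumpy or frictional surface without any extra geometry.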

SLIDE 70

BEHAVIOR MODELING

The simulation level of autonomy (LOA) is a function of its components;

Thalmann et al. (2000) distinguish three levels of autonomy: a simulation component can be "guided" (lowest), "programmed" (intermediate), or "autonomous" (highest);

Simulation LOA = f(LOA(Objects), LOA(Agents), LOA(Groups))

Each component (interactive object, intelligent agent, group of agents) can be guided, programmed, or autonomous.

adapted from (Thalmann et al., 2000)

SLIDE 71

Interactive objects:

Have behavior independent of the user's input (e.g., a clock);

This is needed in large virtual environments, where it is impossible for the user to provide all required inputs.

Examples: system clock; automatic door (reflex behavior).

SLIDE 72

Interactive objects:

The fireflies in NVIDIA's Grove have behavior independent of the user's input; the user controls only the virtual camera.

SLIDE 73

Agent behavior:

A behavior model composed of perception, emotions, behavior, and actions;

Perception (through virtual sensors) makes the agent aware of its surroundings.

Diagram: virtual world → perception → emotions → behavior → actions.

SLIDE 74

Reflex behavior:

A direct link between perception and actions (following behavior rules);

Does not involve emotions.

Diagram: perception → behavior → actions, bypassing emotions.

SLIDE 75

Object behavior

Another example of reflex behavior: "Dexter" at MIT [Johnson, 1991]. A handshake is followed by a head turn.

Figure: autonomous virtual human and user-controlled hand avatar.

SLIDE 76

Agent behavior: avatars

If the user maps to a full-body avatar, then virtual human agents can react through body expression recognition; example: dance. Swiss Institute of Technology, 1999 (credit: Daniel Thalmann).

SLIDE 77

Emotional behavior:

A subjective strong feeling (anger, fear) following perception;

Two different agents can have different emotional responses to the same perception, and thus take different actions.

Diagram: the same perception drives Emotions 1 → Behavior → Actions 1 in one agent and Emotions 2 → Behavior → Actions 2 in another.

SLIDE 78

Crowd behavior

(Thalmann et al., 2000) Crowd behavior emphasizes group (rather than individual) actions;

Crowds can have a guided LOA, when their behavior is defined explicitly by the user, or an autonomous LOA, with behaviors specified by rules and other complex methods (including memory);

Example: a political demonstration. Guided crowd: the user needs to specify intermediate path points. Autonomous crowd: the group perceives information about its environment and decides on a path to follow to reach the goal.

SLIDE 79

MODEL MANAGEMENT

It is necessary to maintain interactivity and constant frame rates when rendering complex models. Several techniques exist:

Level-of-detail management; cell segmentation; off-line computations; lighting and bump mapping at the rendering stage; portals.

SLIDE 80

Level-of-detail management:

Level of detail (LOD) relates to the number of polygons on the object's surface. Even if the object has high complexity, its detail may not be visible if the object is too far from the virtual camera (observer).

Figure: tree with 27,000 polygons seen up close, and the same tree far away (details are not perceived).

SLIDE 81

Static level-of-detail management:

We should therefore use a simplified version of the object (fewer polygons) when it is far from the camera;

There are several approaches: discrete geometry LOD; alpha LOD; geometric morphing ("geo-morph") LOD.

SLIDE 82

Discrete Geometry LOD:

Uses several discrete models of the same virtual object;

Models are switched based on their distance r from the camera (r < r0; r0 < r < r1; r1 < r < r2; r2 < r).

Figure: LOD 0, LOD 1, and LOD 2 models with switching radii r0, r1, r2.
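Discrete LOD selection reduces to comparing the camera distance against the switching radii; the threshold values below are made up:

```python
# Sketch of discrete-LOD selection: pick a model by camera distance r,
# with thresholds r0 < r1 < r2 as on the slide (illustrative values).

def select_lod(r, thresholds=(10.0, 25.0, 50.0)):
    """Return the LOD index (0 = most detailed) for distance r; None = cull."""
    r0, r1, r2 = thresholds
    if r < r0:
        return 0
    if r < r1:
        return 1
    if r < r2:
        return 2
    return None                    # beyond r2: too far to draw at all

print([select_lod(r) for r in (5.0, 15.0, 40.0, 80.0)])
```

Ranking by squared distance (skipping the square root) is a common micro-optimization, since only comparisons against the thresholds are needed.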

SLIDE 83

Alpha Blending LOD:

Discrete LOD has problems on the r = r0, r = r1, r = r2 circles, leading to "popping" and cycling: objects appear and disappear suddenly. One solution to cycling is distance hysteresis;

A solution to popping is alpha blending, which changes the transparency of the object. Fully transparent objects are not rendered.

Figure: opaque (LOD 0), less opaque (LOD 1), and fully transparent (LOD 2) versions, with a hysteresis zone around each switching radius r0, r1, r2.

SLIDE 84

Geometric Morphing LOD:

Unlike discrete geometry LOD, which uses several models of the same object, geometric morphing uses only one complex model;

Various LODs are obtained from the base model through mesh simplification;

A triangulated polygon mesh with n vertices has about 2n faces and 3n edges;

Figure: collapsing the edge V1-V2 merges V2 into V1; mesh before simplification, mesh after simplification.

SLIDE 85

Single-object adaptive LOD:

Used where there is a single, highly complex object that the user wants to inspect (such as in interactive scientific visualization);

Static LOD will not work, since detail is lost where it is needed; for example, the sphere on the right loses shadow sharpness after LOD simplification.

Figure: sphere with 8,192 triangles (uniform high density); sphere with 512 triangles (static LOD simplification) (from Xia et al., 1997).

SLIDE 86

Single-object Adaptive Level of Detail

Sometimes edge collapse leads to problems, so vertices need to be split again to regain detail where needed. Xia et al. (1997) developed an adaptive algorithm that determines the level of detail based on the distance to the viewer as well as the normal direction (lighting).

Figure: edge collapse merges V2 into the "parent" vertex V1 (simplified mesh); a vertex split restores V2 (refined mesh).

(adapted from Xia et al, 1997)

SLIDE 87

Single-object Adaptive Level of Detail

Sphere with 537 triangles: adaptive LOD, 0.024 sec to render (SGI RE2, single R10000 workstation). Sphere with 8,192 triangles: uniform high density, 0.115 sec to render (from Xia et al., 1997).

SLIDE 88

Single-object Adaptive Level of Detail

Bunny with 3,615 triangles: adaptive LOD, 0.110 sec to render (SGI RE2, single R10000 workstation). Bunny with 69,451 triangles: uniform high density, 0.420 sec to render (from Xia et al., 1997).

SLIDE 89

Static LOD limitations:

Geometric LOD, alpha blending, and morphing all have problems maintaining a constant frame rate. This happens when new complex objects suddenly enter the viewing frustum.

Figure: camera "fly-by"; between frame i and frame i+1, additional LOD 1 and LOD 2 objects enter the frustum.

SLIDE 90

Architectural "walk-through" (UC Berkeley Soda Hall)

Camera path through the auditorium, from Start (A) through B to End (C). With no LOD management the scene has 72,570 polygons (from Funkhouser and Sequin, 1993).

Figure: frame-time plots over 250 frames; with no LOD management the frame time spikes to about 1.0 sec, while static LOD management keeps it near 0.2 sec.

SLIDE 91

Adaptive LOD Management:

An algorithm that selects the LOD of visible objects based on a specified frame rate;

The algorithm (Funkhouser and Sequin, 1993) is based on a benefit-to-cost analysis, where the cost is the time needed to render object O at level of detail L and rendering mode R;

The cost for the whole scene must satisfy

Σ Cost(O,L,R) ≤ Target frame time

where the cost for a single object is

Cost(O,L,R) = max(c1 · Polygons(O,L) + c2 · Vertices(O,L), c3 · Pixels(O,L))

and c1, c2, c3 are experimental constants depending on R and the type of computer.

slide-92
SLIDE 92

Adaptive LOD Management: Adaptive LOD Management:

Similarly, the benefit for the scene is the sum of the visible objects' benefits:

Σ Benefit(O,L,R)

where the benefit of a given object is

Benefit(O,L,R) = Size(O) · Accuracy(O,L,R) · Importance(O) · Focus(O) · Motion(O) · Hysteresis(O,L,R)

Objects with a higher value

Value = Benefit(O,L,R) / Cost(O,L,R)

are rendered first: sort the candidates according to value and display objects until the target cost is reached.
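The value-driven selection above can be sketched in Python. The data layout, the constants c1–c3, and the benefit factors are placeholder assumptions (a real system would calibrate the constants for its rendering mode R and hardware), and the Hysteresis term is omitted for brevity:

```python
# Sketch of value-based adaptive LOD selection (after Funkhouser & Sequin, 1993).
# Object fields, constants c1..c3, and benefit factors are illustrative.

def cost(obj, lod, c1=1e-5, c2=5e-6, c3=1e-7):
    """Time to render obj at level of detail lod:
    max of per-primitive work and per-pixel fill work."""
    return max(c1 * obj["polygons"][lod] + c2 * obj["vertices"][lod],
               c3 * obj["pixels"])

def benefit(obj, lod):
    """Perceptual contribution of obj rendered at lod (simplified product form)."""
    return (obj["size"] * obj["accuracy"][lod] * obj["importance"]
            * obj["focus"] * obj["motion"])

def select_lods(objects, target_frame_time):
    """Greedy: take (object, LOD) candidates in decreasing Benefit/Cost order
    until the summed cost reaches the target frame time."""
    candidates = []
    for obj in objects:
        for lod in range(len(obj["accuracy"])):
            c = cost(obj, lod)
            candidates.append((benefit(obj, lod) / c, c, obj["name"], lod))
    candidates.sort(reverse=True)                 # highest value first
    chosen, spent = {}, 0.0
    for value, c, name, lod in candidates:
        if name in chosen:
            continue                              # one LOD per object
        if spent + c <= target_frame_time:
            chosen[name] = lod
            spent += c
    return chosen                                 # objects that did not fit are culled
```

A tight target frame time makes the greedy pass prefer cheap, high-value LODs and drop objects entirely once the budget is spent, which is exactly the "display objects until target cost is reached" rule on the slide.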

slide-93
SLIDE 93

Level of detail segmentation – elision

No detail elision, 72,570 polygons. Optimization algorithm, 5,300 polygons, 0.1 sec target frame time (10 fps). From (Funkhouser and Sequin, 1993)


slide-94
SLIDE 94

Level of detail segmentation – rendering mode. Optimization, 1,389 polygons, 0.1 sec target frame time

from (Funkhauser and Sequin, 1993)

No detail elision, 19,821 polygons. Level of detail – darker gray means more detail.

slide-95
SLIDE 95

Cell segmentation:

It is another method of model management, used in architectural walk-throughs;

It is necessary to maintain interactivity and constant frame rates when rendering complex models;

To maintain the "virtual building" illusion it is necessary to have at least 6 fps (Airey et al., 1990).

slide-96
SLIDE 96

Model management Model management

Only the current “universe” needs to be rendered

PVS (Potentially Visible Sets); Portals
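The cell-and-portal idea can be sketched as a graph traversal: cells are nodes, portals are edges, and only the cells reachable from the camera's cell need to be rendered. The adjacency structure and the depth bound below are hypothetical simplifications; a real system would also clip each portal against the view frustum:

```python
from collections import deque

# Sketch of portal-based visibility: cells are graph nodes, portals are edges.
# max_depth stands in for portal/frustum clipping (hypothetical simplification).

def potentially_visible_cells(portals, start_cell, max_depth=2):
    """Return the set of cells reachable from start_cell through at most
    max_depth portals; only these cells need to be rendered."""
    visible = {start_cell}
    queue = deque([(start_cell, 0)])
    while queue:
        cell, depth = queue.popleft()
        if depth == max_depth:
            continue                      # stop expanding past the depth bound
        for neighbor in portals.get(cell, ()):
            if neighbor not in visible:
                visible.add(neighbor)
                queue.append((neighbor, depth + 1))
    return visible
```

For a floor plan `{"hall": ["room1", "room2"], "room2": ["hall", "closet"], ...}`, a camera in the hall with `max_depth=1` renders only the hall and the two rooms that open onto it.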

slide-97
SLIDE 97

Cell segmentation – increased frame rate

Buildings are large models that can be partitioned into "cells", automatically and off-line, to speed up simulation at run time;

Cells approximate rooms;

Partitioning algorithms use a "priority" factor that favors occlusions (partitioning along walls).

Automatic floor plan partition (Airey et al., 1990)

slide-98
SLIDE 98

Cell segmentation

From (Funkhouser, 1993)

The building model resides in a fully associative cache;

But cell segmentation alone will not work if the model is so large that it exceeds the available RAM;

In this case large delays occur when there is a page fault and data has to be retrieved from the hard disk.

[Figure: frame time (s) vs. frame number, with spikes at page faults]

slide-99
SLIDE 99

Combined Cell, LOD and database methods

Floor plan partition (Funkhouser, 1993)

[Diagram: User Interface → Visibility Determination → Detail Elision → Render → Monitor, with a parallel Look-ahead Determination → Cache Management → I/O Operations → Database path]

Database management

It is possible to add database management techniques to prevent page faults and improve fps uniformity during the walk-through;

It is possible to estimate how far the virtual camera will rotate and translate over the next N frames and pre-fetch the appropriate objects from the hard disk.
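The look-ahead idea can be sketched as follows: extrapolate the camera position over the next N frames from its current velocity, then pre-fetch every object near the predicted path. The linear motion model, the load radius, and the data layout are all illustrative assumptions:

```python
# Sketch of look-ahead pre-fetching for walk-throughs. A linear motion model
# predicts where the camera will be over the next n_frames; objects within
# `radius` of the predicted path are loaded from disk before they are needed.

def predict_positions(pos, velocity, n_frames, dt):
    """Linearly extrapolated camera positions for the next n_frames."""
    return [tuple(p + v * dt * (i + 1) for p, v in zip(pos, velocity))
            for i in range(n_frames)]

def prefetch_set(objects, pos, velocity, n_frames=30, dt=1 / 30, radius=5.0):
    """Names of objects close to any predicted camera position; fetching them
    ahead of time avoids page-fault stalls during rendering."""
    path = predict_positions(pos, velocity, n_frames, dt)
    needed = set()
    for name, (ox, oy, oz) in objects.items():
        for (cx, cy, cz) in path:
            if (ox - cx) ** 2 + (oy - cy) ** 2 + (oz - cz) ** 2 <= radius ** 2:
                needed.add(name)
                break                      # one hit is enough to schedule a load
    return needed
```

With a camera moving 10 units/sec along x, objects within the next 10 units (plus the radius) are scheduled for loading, while distant geometry stays on disk.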

slide-100
SLIDE 100

Database management

[Figure: frame time (s) vs. frame number, with markers showing LOD 0 through LOD 3 being loaded incrementally per cell]

Floor plan visibility and highest LOD (Funkhouser, 1990)

LOD 0 – lowest level of detail (loaded first)
…
LOD 3 – highest level of detail (loaded last)

High LODs are loaded for adjacent cells only
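The loading order this implies can be sketched as a simple priority list: LOD 0 for every visible cell first, then progressively higher LODs, but only for cells adjacent to the camera's cell. Cell names and the adjacency set are hypothetical:

```python
# Sketch of prioritized model loading: coarse geometry (LOD 0) for all visible
# cells is fetched first, then higher LODs for nearby cells only, so a cell is
# never blank while its detail streams in from disk.

def load_order(visible_cells, adjacent_cells, max_lod=3):
    """Return (cell, lod) pairs in the order they should be fetched."""
    order = [(cell, 0) for cell in visible_cells]        # coarse models first
    for lod in range(1, max_lod + 1):                    # then refine nearby cells
        order += [(cell, lod) for cell in visible_cells
                  if cell in adjacent_cells]
    return order
```

A distant cell thus stays at LOD 0 until the camera approaches, matching the figure's "loaded first / loaded last" annotation.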

slide-101
SLIDE 101

Distributed VR architectures

Single-user systems: multiple side-by-side displays; multiple LAN-networked computers;

Multi-user systems.

slide-102
SLIDE 102

(3DLabs Inc.)

Single-user, multiple displays

slide-103
SLIDE 103

Side-by-side displays.

Used in VR workstations (desktop) or in large-volume displays (CAVE or the "Wall");

One solution is to use one PC with graphics

accelerator for every projector;

This results in a "rack-mounted" architecture, such as the MetaVR "Channel Surfer" used in flight simulators, or the Princeton Display Wall

slide-104
SLIDE 104

Genlock

If the output of two or more graphics pipes is

used to drive monitors placed side-by-side, then the display channels need to be synchronized pixel-by-pixel;

Moreover, the edges have to be blended by creating a region of overlap.

slide-105
SLIDE 105

(Courtesy of Quantum3D Inc.)

slide-106
SLIDE 106

Problems with non-synchronized displays...

CRTs that are placed side-by-side induce magnetic fields in each other, resulting in electron beam distortion and flicker – they need to be shielded;

Image artifacts reduce simulation realism,

increase latencies, and induce “simulation sickness.”

slide-107
SLIDE 107

(Courtesy of Quantum3D Inc.)

slide-108
SLIDE 108

Synchronization of displays:

software synchronized – the system commands that frame processing start at the same time on the different rendering pipes;

does not work if one pipe is overloaded – one image will then lag behind the other.

[Diagram: two pipelines Application → Geometry → Rasterizer → Buffer → CRT; the synchronization command is issued at the Application stage]
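Software synchronization amounts to a barrier at the start of each frame: every pipe waits until all pipes are ready, then frame processing starts together. A minimal sketch with threads standing in for two rendering pipes (the per-pipe rendering workload is omitted):

```python
import threading

# Two threads stand in for two rendering pipes; the Barrier plays the role of
# the "synchronization command" that aligns the start of each frame.

N_PIPES, N_FRAMES = 2, 3
frame_start = threading.Barrier(N_PIPES)
log = []
log_lock = threading.Lock()

def pipe(pipe_id):
    for frame in range(N_FRAMES):
        frame_start.wait()               # all pipes start the frame together
        with log_lock:
            log.append((frame, pipe_id))
        # ... geometry + rasterization for this pipe would run here ...

threads = [threading.Thread(target=pipe, args=(i,)) for i in range(N_PIPES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note the slide's caveat still holds: the barrier aligns frame starts, not frame ends, so an overloaded pipe still finishes its frame late.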

slide-109
SLIDE 109

Synchronization of displays:

frame-buffer synchronized – the system commands that frame buffer swaps start at the same time on the different rendering pipes;

does not work exactly, because swapping depends on the electron gun refresh – one buffer will swap up to 1/72 sec before the other.

[Diagram: two pipelines Application → Geometry → Rasterizer → Buffer → CRT; the synchronization command is issued at the buffer-swap stage]

slide-110
SLIDE 110

Synchronization of displays:

video synchronized – the system commands that the CRTs' vertical beam retrace starts at the same time; one CRT becomes the "master".

[Diagram: two pipelines Application → Geometry → Rasterizer → Buffer → CRT; the synchronization command drives the slave CRT from the master CRT]

slide-111
SLIDE 111

Video synchronized displays (three PCs)

Wildcat 4210 (Digital Video Interface – video out)

slide-112
SLIDE 112

Synchronization of displays:

The best method is to have software + buffer + video synchronization of the two (or more) rendering pipes.

[Diagram: two pipelines Application → Geometry → Rasterizer → Buffer → Master/Slave CRT, with synchronization commands at the application, buffer-swap, and video stages]

slide-113
SLIDE 113

(Courtesy of Quantum3D Inc.)

slide-114
SLIDE 114

Graphics and Haptics Pipeline Synchronization:

Synchronization has to be done at the application stage, to allow decoupling of the rendering stages (the graphics and haptics pipelines have vastly different output rates).
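The decoupling can be sketched as two loops at different rates that share only application-level state: the haptics loop runs many update steps per graphics frame, and the graphics loop just samples the latest state. The rates (30 fps graphics, 1 kHz haptics) and the toy spring model are illustrative assumptions, and the loops are interleaved deterministically here rather than threaded:

```python
# Sketch of application-stage decoupling of a fast haptics loop from a slower
# graphics loop. The two stages communicate only through SharedState, so each
# can run at its own rate; rates and the force model are placeholders.

GRAPHICS_HZ, HAPTICS_HZ = 30, 1000

class SharedState:
    def __init__(self):
        self.object_pos = 0.0            # written by haptics, read by graphics

def haptic_step(state, probe_pos, dt=1.0 / HAPTICS_HZ, stiffness=2.0):
    """One 1 kHz step: pull the object toward the probe (toy spring model)."""
    state.object_pos += stiffness * (probe_pos - state.object_pos) * dt

def run_one_second(state, probe_pos):
    """Interleave the two loops for one simulated second."""
    frames = []
    steps_per_frame = HAPTICS_HZ // GRAPHICS_HZ    # ~33 haptic steps per frame
    for _ in range(GRAPHICS_HZ):
        for _ in range(steps_per_frame):
            haptic_step(state, probe_pos)
        frames.append(state.object_pos)            # graphics samples latest state
    return frames
```

The graphics loop never blocks on a haptic update and vice versa; each simply runs at its own rate against the shared state, which is the point of synchronizing only at the application stage.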