COMP4076 / GV07 - Virtual Environments: Tracking and Interaction



SLIDE 1

COMP4076 / GV07 - Virtual Environments: Tracking and Interaction

Wole Oyekoya Department of Computer Science University College London w.oyekoya@cs.ucl.ac.uk http://www.cs.ucl.ac.uk/teaching/VE

SLIDE 2

Outline

  • Introduction
  • Models of Interaction
  • Interaction Methods
SLIDE 3

Introduction

  • Introduction
  • Models of Interaction
  • Interaction Methods
SLIDE 4

Tracking and Interaction

[Diagram: the user acts on input devices connected to the computer, which renders the synthetic environment; the user remains in the real environment. Tracking and interaction happen at this user–computer interface.]

SLIDE 5

Basic Components of an Interface

  • The input devices capture user actions
  • Transfer functions / control-display mappings / interaction techniques
– Map the movement of the device into the movement of the controlled elements
– Isomorphic: one-to-one mapping between motions in the physical and virtual worlds
– Non-isomorphic: input is shifted, scaled, integrated, …
  • The display devices present the effects of the input to the user
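The isomorphic / non-isomorphic distinction can be illustrated with a minimal transfer-function sketch. The function names and the scale/offset values below are hypothetical, chosen only for illustration:

```python
def isomorphic_map(device_pos):
    """One-to-one: the controlled element moves exactly as the device does."""
    return tuple(device_pos)

def non_isomorphic_map(device_pos, scale=2.0, offset=(0.0, 0.0, 0.5)):
    """Shifted and scaled: device motion is amplified and displaced in the scene."""
    return tuple(scale * p + o for p, o in zip(device_pos, offset))
```

Any control-display mapping of this kind sits between the raw tracker samples and the controlled element; the non-isomorphic variant trades physical fidelity for reach and comfort.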

SLIDE 6

What Are the Basic Interaction Tasks?

  • Locomotion or Travel
– How to move through the space
  • Selection
– How to indicate an object of interest
  • Manipulation
– How to change properties of an object of interest
  • Symbolic
– How to enter text and other parameters
  • System control
– How to change the way the system is behaving

(Symbolic input and system control won’t be covered in this lecture)

SLIDE 7

Challenges in Designing Metaphors?

  • Designing interaction metaphors for virtual environments is hard:
– Six degrees of freedom
  • Lack of appropriate input devices
– Isolated parts of body tracked
– Boxing glove or fishing rod style of interaction metaphors
  • Divide and conquer the problem by identifying basic models of interaction

SLIDE 8

Models of Interaction

  • Introduction
  • Models of Interaction
  • Interaction Methods
SLIDE 9

Models of Interaction

  • Extended Desktop Model
– The computer generates a 3D version of the familiar desktop
– The user needs tools to do 3D tasks
  • Virtual Reality Model
– The user’s body is an interface to the world
– The system responds to everything they do or say

SLIDE 10

Extended Desktop Model

  • The desktop is now in 3D
  • It can extend beyond the boundaries of the computer screen itself
  • However, the user:
– is not part of the scene
– has a “God’s eye view”

SLIDE 11

Interaction in the Extended Desktop

  • Focus on analysing a task and creating devices that fit the task
  • Study ergonomics of the device and applicability/suitability for the role
  • Special-purpose devices can be developed to support interaction

SLIDE 12

2D Interaction in a 3D World: Google Earth

SLIDE 13

Modelling Package (3D Studio Max)

SLIDE 14

Types of Device to Enable 3D Interaction

Ascension Wanda, 3DConnexion Spaceball, Polhemus Isotrak 3-Ball, Logitech 3D Mouse, 3DConnexion Spacemouse, Inition 3DiStick, Nintendo Wii Remote

SLIDE 15

GlobeFish and GlobeMouse

SLIDE 16

Limitations of the Extended Desktop Model

  • 3D tasks can be quite complicated to perform
  • Tasks can become very specialised
– Counterintuitive
– Requires a lot of user training

Fakespace Cubic Mouse

SLIDE 17

Virtual Reality Model

  • Track the user as they move through a genuine 3D space
  • Need to track the user precisely and interpret what they do
  • Focus is on users exploring the environment

SLIDE 18

Interaction in the Virtual Reality Model

  • Tension between isomorphic and non-isomorphic movements
– Isomorphic: one-to-one mapping between motions in the physical and virtual worlds
– Non-isomorphic: input is shifted, scaled, integrated, …
  • Tension between mundane and magical responses of the environment
– Mundane: dynamics governed by the familiar laws of physics (Newtonian mechanics)
– Magical: everything else (casting spells, automatic doors, etc.)

SLIDE 19

Body-Relative Interaction

Technique based on Proprioception

  • Provides
– Physical frame of reference in which to work
– More direct and precise sense of control
– “Eyes off” interaction
  • Enables
– Direct object manipulation (for sense of position of object)
– Physical mnemonics (objects fixed relative to body)
– Gestural actions (invoking commands)

SLIDE 20

Hand-Held Widgets

  • Hold controls in hands, rather than on objects
  • Use relative motion of hands to effect widget changes

Mine, Brooks Jr, Sequin

SLIDE 21

Gestural Actions

  • Head-butt zoom
  • Look-at menus
  • Two-handed flying
  • Over-the-shoulder deletion

Mine, Brooks Jr, Sequin

SLIDE 22

Limitations of the Virtual Reality Model

  • Can’t track user over very large areas
– E.g. some form of locomotion metaphor will be required for long-distance travel (see later)
  • Physical constraints of systems
  • Limited precision and tracking points
  • Lack of physical force feedback
SLIDE 23

Overcoming Lack of Force Feedback

  • One way to overcome lack of force feedback is to use a haptic device
– Will be discussed in another lecture
  • Another approach is to exploit visual dominance in the interpretation of cues

CyberForce, CyberGrasp
SLIDE 24

Visual Dominance

  • The real hand is not constrained in space
  • The virtual hand can be constrained in virtual space
  • Can the user detect the difference?

“The Hand is Slower than the Eye: A Quantitative Exploration of Visual Dominance over Proprioception”, Burns, Whitton, Razzaque, McCallus, Panter, Brooks

SLIDE 25

Visual Dominance

Task: playing the Simon game
Drift between virtual and real hand gradually introduced over time

SLIDE 26

Summary of Interaction Methods

  • The extended desktop model:
– Desktop extends beyond physical screen
– Interactions and devices on a case-by-case basis
– Potentially more accurate, but counterintuitive and specialised interaction
  • The virtual reality model:
– User’s body is input to the system
– Potentially more intuitive but more general
– Greater reliance on ability to have natural movement and ability to track
– Partially resolved using visual dominance for HMDs

SLIDE 27

Interaction Methods

  • Introduction
  • Models of Interaction
  • Interaction Methods
SLIDE 28

Basic Interaction Tasks

  • Locomotion or Travel
– How to effect movement through the space
  • Selection
– How to indicate an object of interest
  • Manipulation
– How to move an object of interest
  • Symbolic
– How to enter text and other parameters
  • System control
– Change mode of interaction or system state

(Selection and manipulation are logically grouped together; symbolic input and system control won’t be covered in this lecture)

SLIDE 29

Locomotion

  • Introduction
  • Models of Interaction
  • Interaction Methods

– Locomotion or Travel Techniques
– Selection and Manipulation

SLIDE 30

Purpose of Locomotion

  • Change the pose of the viewpoint (both position and attitude) from some start location A to some end location B
  • This is the most fundamental task for a virtual environment
– Arguably, if the user can’t change the pose of the viewpoint, it’s not really a virtual environment at all

SLIDE 31

Types of Travel Techniques

  • There are two fundamentally different types:
  • Virtual techniques:
– The user’s body remains stationary even though the viewpoint moves
  • Physical techniques:
– The user’s physical motion is used to transport the user through the virtual world

SLIDE 32

Virtual Locomotion Techniques

  • The user remains stationary even though the viewpoint moves
  • Techniques must be used to specify
– Direction
– Speed

SLIDE 33

Taxonomy of Virtual Locomotion Techniques

Bowman, Koller and Hodges

SLIDE 34

Taxonomy of Virtual Locomotion Techniques

Bowman, Koller and Hodges

SLIDE 35

Controlling Direction by Steering

  • Continuous specification of direction of motion
  • Direction can be specified by many means:
– Gaze-directed
– Pointing
– Physical device (steering wheel, flight stick)

SLIDE 36

Steering Techniques

  • Gaze-based:
– Actually uses head orientation
– Cognitively simple
– Cannot look around whilst travelling
  • Pointing-based:
– Actually uses hand orientation
– Cognitively more complicated
– Makes it possible to look in one direction whilst travelling in another
– However, you can’t hold other objects or manipulate them whilst travelling
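Both steering variants reduce to the same computation — move along some tracker's forward axis — and differ only in which tracker supplies that axis. A minimal sketch (the function name is hypothetical):

```python
import math

def steering_velocity(forward, speed):
    """Travel velocity along a tracker's forward axis.

    Pass the head tracker's forward vector for gaze-directed steering,
    or the hand tracker's for pointing-based steering."""
    norm = math.sqrt(sum(c * c for c in forward))
    return tuple(speed * c / norm for c in forward)
```

With gaze-directed steering the same vector drives both the view and the travel, which is why the user cannot look around while travelling; pointing-based steering decouples the two at the cost of occupying a hand.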

SLIDE 37

Target-Based Steering

  • Specify discrete target or goal
  • Multiple ways to specify the goal:
– Point at object
– Choose from list
– Enter coordinates
  • Once specified, travel to the target is passive
  • Convenient way to get from A to B, but inflexible

SLIDE 38

Route-Based Steering

  • Generalisation of the target-based metaphor
  • User specifies multiple waypoints on a map
  • The system generates a path and passively moves the user along the path
  • Placement of waypoints controls granularity of movement

SLIDE 39

Taxonomy of Virtual Locomotion Techniques

Bowman, Koller and Hodges

SLIDE 40

Two-Handed Velocity / Acceleration Selection

  • Line between hands defines direction of travel
  • Distance between hands specifies constant speed
  • Pinch glove gestures provide “nudges”

Mine, Brooks Jr, Sequin
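The two rules above combine neatly: since direction is the normalised hand-to-hand vector and speed is proportional to the hands' separation, the velocity is simply a gain times the raw difference vector. A sketch with a hypothetical function name and gain:

```python
def two_handed_velocity(hand_a, hand_b, gain=1.5):
    """Two-handed travel: the line between the hands gives direction and
    their separation gives speed, so velocity = gain * (hand_b - hand_a)."""
    return tuple(gain * (b - a) for a, b in zip(hand_a, hand_b))
```

Moving the hands apart speeds travel up; bringing them together slows it to a stop.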

SLIDE 41

“Grabbing the Air”

  • Use hand gestures to move through the world
  • Metaphor is pulling a rope
  • Often implemented using pinch gloves
  • Physically occupies hands
  • Slow
  • Fatiguing
SLIDE 42

Camera-in-Hand

  • A tracker is held in the hand and the camera viewpoint is slaved to it using an appropriate set of transformations
  • This defines a small “workspace” on a table top
  • Travel simply involves movement of the hand through the workspace

SLIDE 43

How Good Are These Techniques?

  • Unfortunately there has been no thorough comparison of all the different techniques which have been developed
  • One of the first rigorous studies of some of the trade-offs between different travel techniques:
– Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques, Bowman, Koller and Hodges

SLIDE 44

Quality Factors

  1. Speed (appropriate velocity)
  2. Accuracy (proximity to the desired target)
  3. Spatial awareness (the user’s implicit knowledge of their position and orientation within the environment during and after travel)
  4. Ease of learning (the ability of a novice user to use the technique)
  5. Ease of use (the complexity or cognitive load of the technique from the user’s point of view)
  6. Information gathering (the user’s ability to actively obtain information from the environment during travel)
  7. Presence (the user’s sense of immersion or “being within” the environment)

SLIDE 45

Experiment 1: Absolute Motion Task

  • Absolute motion task
– Move until you are inside a virtual sphere
– Gaze vs. point AND constrained vs. unconstrained
  • Constrained disallowed movement in the z direction
– Hypothesis: gaze expected to be better
  • Neck muscles are more stable
  • More immediate feedback
  • Eight subjects, each doing four sets of 80 trials (five repetitions × 4 distances to target × 4 target sizes)

SLIDE 46

Experiment 1

Bowman, Koller and Hodges

  • Results
  • No difference between techniques
  • Significant factors were target distance and size
SLIDE 47

Experiment 2: Relative Motion Task

  • Relative motion task
  • Move to a position defined relative to a virtual object
  • Need forward and reverse direction
  • Nine subjects, four sets of 20 trials

Bowman, Koller and Hodges

SLIDE 48

Experiment 2

  • Obvious difference
  • Can’t point at target and look at departure point simultaneously

Bowman, Koller and Hodges

SLIDE 49

Summary of First Two Experiments

Bowman, Koller and Hodges

SLIDE 50

Experiment 3: Spatial Awareness Task

  • Environment consists of cubes of contrasting colours
  • User asked to identify cube and push L or R on mouse button
  • User travels to a different location and repeats the trial

Bowman, Koller and Hodges

SLIDE 51

Experiment 3

  • Testing spatial awareness based on four travel variations:
– Constant speed (slow)
– Constant speed (fast)
– Variable speed (smooth acceleration)
– Jump (instant translation)
  • Concern is that jumps and other fast transitions will confuse users

SLIDE 52

Experiment 3

  • Slow and fast velocity made no difference to task time
  • However, jumping from point to point led to significant disorientation and increased task execution time

SLIDE 53

Virtual Locomotion Summary

  • Virtual locomotion or travel occurs when the viewpoint changes but there is minimal physical interaction from the user
  • The interaction metaphors have to specify the direction, velocity and input conditions
  • Many metaphors have been proposed
  • Few have been analysed:
– Pointing is better for relative tasks
– Jumping from location to location is disorienting
  • An alternative is to have physical locomotion
SLIDE 54

Physical Locomotion

  • Map locomotion “as naturally as possible” to human movement

SLIDE 55

Direct Locomotion

  • There is an isomorphic mapping between the real world and the virtual environment
  • The user physically walks from point A to point B
  • Intuitive and easy to use
  • Requires a suitably large environment
  • Requires a wide-area tracking system

SLIDE 56

Walking-in-Place

  • User “walks in place”
  • Movement detected by gait analysis
– Trackers just on feet
– Trackers over entire body
  • No perceptual mismatch
– Deep down, user knows that their gait won’t make them move

SLIDE 57

SLIDE 58

Which Works Best?

  • Despite the huge number of different approaches

which have been developed, relatively few studies have compared the strengths and weaknesses of different approaches

  • Travel techniques studied by Bowman et al.
  • Comparison of real walking, virtual walking and

flying by Usoh et al.

SLIDE 59

Walking, Virtual Walking and Flying

  • Comparison of real walking, walking in place and

gaze-based flying

  • Test scenario was the pit

– Get around the edge to the chair

  • 33 naïve subjects and 11 expert subjects recruited

Walking > Walking-in-Place > Flying, in Virtual Environments, Usoh, Arthur, Whitton, Basto, Steed, Slater, Brooks

SLIDE 60

Results of the Study

SLIDE 61

Real vs. Virtual Walking vs. Flying

  • Real walking
– Best for human-scale spaces
– Expensive to implement
  • Virtual walking
– Better than flying
– Inexpensive to implement
  • Limitations of study:
– Walking-in-place implementation was poor
– Avatar realism
– Scenario incongruity

SLIDE 62

Redirected Walking

Fire drill scenario

Redirected Walking, Razzaque, Kohn, Whitton

Based on the premise that rotating the virtual scene around a user makes the user turn themselves, so they can navigate a much larger area without being aware of it

SLIDE 63

Redirected Walking

[Figures: plan view of scenario; plan view of lab area]

SLIDE 64

Redirected Walking

SLIDE 65

Redirected Walking

  • Apply an angular distortion:
– Small constant offset causes viewpoint to slowly rotate
– Increase distortion rate proportionally with user’s walking speed
– Increase distortion rate proportionally with user’s angular velocity of head
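The three rules above can be sketched as a single rate computation, evaluated each frame and applied as extra yaw to the scene. The base rate and gains below are illustrative placeholders, not the values used by Razzaque et al.:

```python
def redirection_rate(base_rate, walk_speed, head_yaw_rate,
                     k_walk=0.5, k_head=0.2):
    """Scene rotation rate (deg/s) for redirected walking: a small constant
    offset, increased in proportion to walking speed and to the magnitude
    of the user's head angular velocity (illustrative gains)."""
    return base_rate + k_walk * walk_speed + k_head * abs(head_yaw_rate)
```

Scaling the distortion with self-motion exploits the fact that users are far less sensitive to scene rotation while they are themselves moving or turning.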

SLIDE 66

Redirected Walking in the CAVE

  • Problems with walking in the CAVE:
– You eventually hit the walls
– You can turn and see the missing back wall
  • One means of countering this is to rotate the environment
– The user is directed back to the front wall

Redirected Walking in Place, Razzaque, Swapp, Slater, Whitton, Steed

SLIDE 67

Redirected Walking in the CAVE

  • Apply a small rotation to the scene to cause the user to turn towards the centre
– Sufficiently small that it is not consciously noticed
– Subject responds to maintain balance
  • Increase rate when user is navigating or rapidly turning head
  • Results:
– Variance in number of times user saw back wall decreased
– Rates of simulator sickness were not increased
– Some users did not notice the rotation

SLIDE 68

Constrained Walking

  • User walks but motion is constrained
– VirtuSphere
– Treadmills
  • However, most forms can be very difficult to use
– Mismatch in perceptual cues
– Dynamics / inertia of device make it hard to navigate effectively

SLIDE 69

VirtuSphere

SLIDE 70

Summary of Experimental Results

  • There is no single best virtual locomotion technique
– Context dependent
– Graceful transitional motions should be used if subjects are to understand the context of the environment
  • Physical locomotion provides greatest presence
– Only works for human-sized spaces
– Needs lots of room!

SLIDE 71

Manipulation

  • Introduction
  • Models of Interaction
  • Interaction Methods

– Locomotion
– Manipulation

SLIDE 72

Manipulation

  • Manipulation of the environment consists of changing the contents of the virtual environment
  • The term is extremely broad:
– Complex manipulation of an object’s structure, such as moulding or sculpting
– Changing abstract properties of an object, such as its ownership
  • We consider rigid-body manipulation tasks only
– Operations such as scaling or distorting are frequently implemented as rigid-body manipulations of widgets

SLIDE 73

Canonical Manipulation Tasks

  • Selection
– Task of acquiring or identifying a particular object from the set of objects available
  • Positioning
– Change the position of an object
  • Rotation
– Change the attitude of an object
  • This has a direct analogy with 2D GUIs
SLIDE 74

Taxonomy of Manipulation

SLIDE 75

Direct Interaction Techniques

  • User selects and manipulates an object using a virtual hand
  • A 3D cursor visualises the current locus of user input
  • The user intersects the cursor with an object and uses a trigger technique
  • The object is rigidly attached to the user’s hand

SLIDE 76

Simple Virtual Hand

  • There is a direct mapping from the user’s hand to the movement of the virtual hand
– Linear displacement often scaled
– Orientation is not
  • Mapping from inputs to hand movement is achieved through a transfer function
  • However, if users have to manipulate an object out of arm’s reach, they have to travel to get it

SLIDE 77

Go-Go Interaction

  • Nonlinear mapping between the user’s hand and the virtual hand
  • Close in, the hand behaves like a simple virtual arm
  • Further out, it becomes nonlinearly scaled
  • Users generally found it easy to understand and use for manipulation

Poupyrev et al., Egocentric Object Manipulation in VEs: Empirical Evaluation of Interaction Techniques
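The Go-Go mapping is piecewise: linear within a threshold distance of the body, then extended quadratically so the virtual hand can reach far objects. The threshold D and coefficient k below are illustrative (in the original technique D is set to roughly two-thirds of the user's arm length):

```python
def gogo_distance(real_dist, D=0.5, k=6.0):
    """Go-Go mapping from real hand distance (from the body) to virtual hand
    distance: one-to-one inside threshold D, quadratic growth beyond it."""
    if real_dist < D:
        return real_dist                       # isomorphic near the body
    return real_dist + k * (real_dist - D) ** 2  # nonlinear extension far out
```

Because the mapping is continuous at D, the virtual hand never jumps as the arm extends, which is part of why users find the technique easy to understand.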

SLIDE 78

World in Miniature Technique

  • Rather than scale the user’s hand, scale the world
  • Smaller version of the world created and superimposed on the real world
  • User controls WIM using handheld ball
  • Can interact with environment by selecting either the 1:1-scale object or its counterpart in the WIM

World in Miniature, Stoakley and Pausch

SLIDE 79

Indirect Interaction Techniques

  • User works with objects beyond arm’s reach
  • Points at object with their hand
  • Selects by pressing a button or some other discrete action
  • Object then becomes tied to the user in some way for manipulation

SLIDE 80

Ray-Based Interaction

  • Ray-based:
– Ray is centred on user’s hand
– All manipulations are relative to hand motion
  • Translation in beam direction is hard
  • Rotation in local object coordinates is nearly impossible
  • Picking small objects in the far distance is hard because a very high degree of angular accuracy is required

Mark Mine, http://www.cs.unc.edu/~mine/isaac.html
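Ray-based selection can be sketched as a nearest-hit test against the scene; approximating objects as spheres keeps the geometry simple. The function name and representation are hypothetical:

```python
def ray_pick(origin, direction, spheres):
    """Cast a ray from the hand and return the index of the nearest sphere
    (centre, radius) it hits, or None if nothing is hit."""
    mag = sum(c * c for c in direction) ** 0.5
    d = [c / mag for c in direction]
    best, best_t = None, float("inf")
    for i, (centre, radius) in enumerate(spheres):
        v = [c - o for c, o in zip(centre, origin)]
        t = sum(a * b for a, b in zip(v, d))      # distance along ray to closest approach
        if t < 0:
            continue                               # object is behind the hand
        miss_sq = sum(a * a for a in v) - t * t    # squared perpendicular miss distance
        if miss_sq <= radius * radius and t < best_t:
            best, best_t = i, t
    return best
```

The hit condition `miss_sq <= radius * radius` makes the difficulty of distant targets explicit: the tolerable angular error shrinks roughly as radius over distance.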

SLIDE 81

Aperture Technique

  • The angle subtended by the selection cone can be changed by the user
– Head and hand tracked
– Line through head and hands defines cone direction
– Distance between head and hands defines angle of the cone
  • The rotation of the hand sensor can be used to perform further disambiguation
– User orients handheld wand to align with orientation of object to be picked up

SLIDE 82

Image-Plane Technique

  • User selects 3D objects by manipulating their 2D projection on a virtual image plane
  • If a single hand is tracked: aperture technique
  • If multiple hands are tracked: head crusher technique

SLIDE 83

Flashlight Technique

  • Rather than use a single ray for selection, selection occurs over a selection cone
  • Disambiguation strategies include:
– Object closest to centre line selected
– Object closest to user selected
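Cone selection with the first disambiguation rule (closest to the centre line) can be sketched as follows; the function name, the point-object representation, and the default half-angle are assumptions for illustration:

```python
import math

def flashlight_select(origin, direction, centres, half_angle_deg=10.0):
    """Flashlight selection: among object centres inside the cone, return the
    index of the one closest to the cone's centre line, or None."""
    mag = math.sqrt(sum(c * c for c in direction))
    d = [c / mag for c in direction]
    cos_limit = math.cos(math.radians(half_angle_deg))
    best, best_cos = None, cos_limit
    for i, centre in enumerate(centres):
        v = [c - o for c, o in zip(centre, origin)]
        n = math.sqrt(sum(a * a for a in v))
        if n == 0.0:
            continue
        cos_a = sum(a * b for a, b in zip(v, d)) / n   # cosine of angle off axis
        if cos_a >= best_cos:                          # inside cone, nearest axis so far
            best, best_cos = i, cos_a
    return best
```

Swapping the comparison key from angular offset to distance from the user would give the second disambiguation rule on the slide.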

SLIDE 84

Hybrid Techniques

  • All manipulation events are based on a repeated task sequence of selection followed by manipulation
  • Therefore, different tasks can use different interaction metaphors
  • Each metaphor can be optimised for the particular task in the sequence it has been assigned

SLIDE 85

Scaled World Grab

  • The user selects an object using an appropriate image-plane-based technique
  • Once selected, the whole world is scaled down to bring the object within the user’s reach
  • The scaling is carried out about a point midway between the user’s eyes
– The user does not notice the scaling operation because the world does not change visually
  • Near and far objects easily moved
  • However, slight head movements can massively change the view of the model
  • Problems taking a small, close model and moving it further away
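Scaling the world about the inter-ocular midpoint is what keeps the image unchanged at the instant of the grab: every point moves along the line joining it to the eyes, so its projection is preserved. A minimal sketch with hypothetical names:

```python
def scale_world(eye_midpoint, scale, vertices):
    """Uniformly scale world geometry about the point midway between the eyes.

    Each vertex moves along the line from the eye midpoint through itself,
    so its projection from that point is unchanged at the moment of scaling."""
    return [tuple(p + scale * (v - p) for v, p in zip(vert, eye_midpoint))
            for vert in vertices]
```

It also shows why subsequent head movements are so amplified: after scaling, the same physical head translation is large relative to the shrunken world.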

SLIDE 86

“Over-generalized findings from other designers’ experiences are more apt to be right than the designer’s uninformed intuition” (Brooks, 1988)

What’s Best?

  • Only a few studies of different interaction techniques have been completed
  • Confounding factors:
– Internal validity: the experiment actually tests the property you want to test
– External validity: the results of the experiment can be generalised to other scenarios where experiments have not been conducted

SLIDE 87

Evaluating Selection

  • Ray-casting and image-plane are generally more effective than Go-Go
– Exception: high-precision selection, e.g. small or far-away objects, can be easier with Go-Go
  • Different studies get significant differences in performance:
– Poupyrev: the difference between Go-Go and pointing was not large (10 to 20 percent)
– Bowman: the difference between Go-Go and pointing was significant (20 to 60 percent)
– Probably due to differences in implementation
  • Ray-casting techniques can be approximated as 2D techniques

SLIDE 88

Evaluating Manipulation

  • Very difficult to do:
– Large number of variables affecting user performance: direction of movement, distance, accuracy
  • Preliminary positioning experiments indicate that:
– Ray is effective for repositioning at a constant distance and within the user’s reach
– Go-Go and scaled world grab have been reported effective in some positioning tasks
  • Mine independently studied effects of proprioception on manipulation

SLIDE 89

Docking Cube Experiment

  • Align docking cube with target cube as quickly as possible
  • Comparing three manipulation techniques:
– Object in hand
– Object at fixed distance
– Object at variable distance (scaled by arm extension)
  • In-hand significantly faster
SLIDE 90

Shaping 3D Boxes Experiment

  • Study of two-handed selection of volumetric data for box creation using 3D tracking equipment:
– Hand-on-Corner (HOC): user’s non-dominant hand holds one corner of the box while the dominant hand controls the position of the opposite corner
– Hand-in-Middle (HIM): similar to HOC except that the non-dominant hand is positioned in the middle
– Two Corners (TC): user shapes a box by dragging apart two diagonally opposite corners of the box
  • TC outperforms the others with respect to both accuracy and completion times

A. C. Ulinski. Taxonomy and Experimental Evaluation of Two-Handed Selection Techniques for Volumetric Data. PhD thesis, University of North Carolina at Charlotte, Charlotte, NC, USA, 2008.
SLIDE 91

Summarising Studies

  • Direct interaction
– Seems most effective for close-range tasks
  • Ray-based interaction
– Best when all objects stay at the same distance from the user
  • Go-Go
– Usually less effective
– However, supports uniform interaction within the manipulation area

SLIDE 92

Summary

  • Introduction to Interaction
  • Models of Interaction
– Extended Desktop Model
– Virtual Reality Model
  • Interaction Methods
– Locomotion or Travel Techniques
– Selection and Manipulation