COMP4076 / GV07 - Virtual Environments: Tracking and Interaction
Wole Oyekoya
Department of Computer Science, University College London
w.oyekoya@cs.ucl.ac.uk
http://www.cs.ucl.ac.uk/teaching/VE
Outline
- Introduction
- Models of Interaction
- Interaction Methods
Introduction
- Introduction
- Models of Interaction
- Interaction Methods
Tracking and Interaction
[Diagram: input devices and displays link the user, the computer, the synthetic environment and the real environment; tracking and interaction happen at this interface]
Basic Components of an Interface
- The input devices capture user actions
- Transfer functions / control-display mappings /
interaction techniques
– Map the movement of the device into the movement of the controlled elements
– Isomorphic: one-to-one mapping between motions in the physical and virtual worlds
– Non-isomorphic: input is shifted, scaled, integrated, …
- The display devices present the effects of the
input to the user
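The isomorphic/non-isomorphic distinction can be sketched as a pair of one-dimensional transfer functions. This is a minimal illustration only; the gain and offset values are arbitrary assumptions, not values from the lecture:

```python
def isomorphic(physical_delta):
    # One-to-one mapping: virtual motion equals physical motion.
    return physical_delta

def non_isomorphic(physical_delta, gain=2.0, offset=0.1):
    # Input is scaled and shifted before it drives the controlled element
    # (real systems may also integrate input over time, e.g. rate control).
    return gain * physical_delta + offset
```

A 0.5 m physical movement maps to 0.5 m isomorphically, but to 1.1 m under the scaled-and-shifted mapping above.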
What Are the Basic Interaction Tasks?
- Locomotion or Travel
– How to move through the space
- Selection
– How to indicate an object of interest
- Manipulation
– How to change properties of an object of interest
- Symbolic
– How to enter text and other parameters
- System control
– How to change the way the system is behaving
Won’t be covered in this lecture
Challenges in Designing Metaphors
- Designing interaction metaphors for virtual
environments is hard:
– Six degrees of freedom
- Lack of appropriate input devices
– Isolated parts of body tracked
– "Boxing glove" or "fishing rod" style of interaction metaphors
- Divide and conquer the problem by identifying
basic models of interaction
Models of Interaction
- Introduction
- Models of Interaction
- Interaction Methods
Models of Interaction
- Extended Desktop Model
– The computer generates a 3D version of the familiar desktop
– The user needs tools to do 3D tasks
- Virtual Reality Model
– The user’s body is an interface to the world
– The system responds to everything they do or say
Extended Desktop Model
- The desktop is now in 3D
- It can extend beyond the
boundaries of the computer screen itself
- However, the user:
– is not part of the scene – has a “God’s eye view”
Interaction in the Extended Desktop
- Focus on analysing a task and creating devices
that fit the task
- Study ergonomics of the device and
applicability/suitability for the role
- Special-purpose devices can be developed to
support interaction
2D Interaction in a 3D World: Google Earth
Modelling Package (3D Studio Max)
Types of Device to Enable 3D Interaction
Ascension Wanda, 3DConnexion Spaceball, Polhemus Isotrak 3-Ball, Logitech 3D Mouse, 3DConnexion Spacemouse, Inition 3DiStick, Nintendo Wii Remote
GlobeFish and GlobeMouse
Limitations of the Extended Desktop Model
- 3D tasks can be quite
complicated to perform
- Tasks can become very
specialised
– Counterintuitive – Requires a lot of user training
Fakespace Cubic Mouse
Virtual Reality Model
- Track the user as they
move through a genuine 3D space
- Need to track the user
precisely and interpret what they do
- Focus is on users
exploring the environment
Interaction in the Virtual Reality Model
- Tension between isomorphic and non-isomorphic
movements
– Isomorphic: one-to-one mapping between motions in the physical and virtual worlds
– Non-isomorphic: input is shifted, scaled, integrated, …
- Tension between mundane and magical responses of the
environment
– Mundane are where the dynamics are governed by the familiar laws of physics (Newtonian mechanics)
– Magical are everything else (casting spells, automatic doors, etc…)
Body-Relative Interaction
Technique based on Proprioception
- Provides
– Physical frame of reference in which to work
– More direct and precise sense of control
– “Eyes off” interaction
- Enables
– Direct object manipulation (for sense of position of object)
– Physical mnemonics (objects fixed relative to body)
– Gestural actions (invoking commands)
Hand-Held Widgets
- Hold controls in hands,
rather than on objects
- Use relative motion of hands to effect widget changes
Mine, Brooks Jr, Sequin
Gestural Actions
- Head-butt zoom
- Look-at menus
- Two-handed flying
- Over-the-shoulder deletion
Mine, Brooks Jr, Sequin
Limitations of the Virtual Reality Model
- Can’t track user over very large
areas
– E.g. Some form of locomotion metaphor will be required for long distance travel (see later)
- Physical constraints of systems
- Limited precision and tracking
points
- Lack of physical force feedback
Overcoming Lack of Force Feedback
- One way to overcome lack of force feedback is to use a haptic device
– Will be discussed in another lecture
- Another approach is to
exploit visual dominance in the interpretation of cues
- CyberForce CyberGrasp
Visual Dominance
- The real hand is not
constrained in space
- The virtual hand can be
constrained in virtual space
- Can the user detect the
difference?
“The Hand is Slower than the Eye: A Quantitative Exploration of Visual Dominance over Proprioception”, Burns, Whitton, Razzaque, McCallus, Panter, Brooks
Visual Dominance
Task: playing a Simon game. Drift between the virtual and real hand is gradually introduced over time.
Summary of Interaction Methods
- The extended desktop model:
– Desktop extends beyond physical screen
– Interactions and devices on a case-by-case basis
– Potentially more accurate, but counterintuitive and specialised interaction
- The virtual reality model:
– User’s body is the input to the system
– Potentially more intuitive but more general
– Greater reliance on ability to have natural movement and ability to track
– Partially resolved using visual dominance for HMDs
Interaction Methods
- Introduction
- Models of Interaction
- Interaction Methods
Basic Interaction Tasks
- Locomotion or Travel
– How to effect movement through the space
- Selection
– How to indicate an object of interest
- Manipulation
– How to move an object of interest
- Symbolic
– How to enter text and other parameters
- System control
– Change mode of interaction or system state
Won’t be covered in this lecture Logically grouped together
Locomotion
- Introduction
- Models of Interaction
- Interaction Methods
– Locomotion or Travel Techniques – Selection and Manipulation
Purpose of Locomotion
- Change the pose of the viewpoint (both position
and attitude) from some start location A to some end location B
- This is the most fundamental task for a virtual
environment
– Arguably if the user can’t change the pose of the viewpoint, it’s not really a virtual environment at all
Types of Travel Techniques
- There are two fundamentally different types:
- Virtual techniques:
– The user’s body remains stationary even though the viewpoint moves
- Physical techniques:
– The user’s physical motion is used to transport the user through the virtual world
Virtual Locomotion Techniques
- The user remains
stationary even though the viewpoint moves
- Techniques must be used
to specify
– Direction – Speed
Taxonomy of Virtual Locomotion Techniques
Bowman, Koller and Hodges
Controlling Direction by Steering
- Continuous specification of direction of motion
- Direction can be specified
by many means:
– Gaze-directed
– Pointing
– Physical device (steering wheel, flight stick)
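As an illustrative sketch (not from the slides), gaze-directed steering can be implemented by integrating the head's forward vector each frame. The yaw and pitch arguments are assumed to be head angles in radians:

```python
import math

def gaze_steer(position, yaw, pitch, speed, dt):
    """Advance the viewpoint along the head's forward vector."""
    forward = (math.cos(pitch) * math.sin(yaw),   # x
               -math.sin(pitch),                  # y: looking down moves down
               math.cos(pitch) * math.cos(yaw))   # z
    return tuple(p + speed * dt * f for p, f in zip(position, forward))
```

Pointing-based steering is identical in structure, but the forward vector comes from the hand tracker instead of the head tracker.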
Steering Techniques
- Gaze-based:
– Actually uses head orientation
– Cognitively simple
– Cannot look around whilst travelling
- Pointing-based:
– Actually uses hand orientation
– Cognitively more complicated
– Makes it possible to look in one direction whilst travelling in another
– However, you can’t hold other objects or manipulate them whilst travelling
Target-Based Steering
- Specify discrete target or goal
- Multiple ways to specify the
goal:
– Point at object
– Choose from list
– Enter coordinates
- Once specified, travel to the
target is passive
- Convenient way to get from A
to B, but inflexible
Route-Based Steering
- Generalisation of the
target-based metaphor
- User specifies multiple
waypoints on a map
- The system generates a
path and passively moves the user along the path
- Placement of waypoints
controls granularity of movement
Taxonomy of Virtual Locomotion Techniques
Bowman, Koller and Hodges
Two-Handed Velocity / Acceleration Selection
- Line between hands defines
direction of travel
- Distance between hands
specifies constant speed
- Pinch glove gestures provide
“nudges”
Mine, Brooks Jr, Sequin
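A hedged sketch of the two-handed technique above: the direction of travel comes from the line between the hand positions, and speed is proportional to their separation. The gain is an assumed tuning constant, not a value from Mine et al.:

```python
import math

def two_handed_velocity(left_hand, right_hand, gain=1.0):
    """Velocity: direction = line between hands, speed proportional to separation."""
    span = [r - l for l, r in zip(left_hand, right_hand)]
    dist = math.sqrt(sum(c * c for c in span))
    if dist == 0.0:
        return (0.0, 0.0, 0.0)       # hands together: stop
    # speed = gain * dist along the unit direction, which is just gain * span
    return tuple(gain * c for c in span)
```

Pinch-glove "nudges" would then be implemented as discrete events layered on top of this continuous mapping.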
“Grabbing the Air”
- Use hand gestures to move
through the world
- Metaphor is pulling a rope
- Often implemented using
pinch gloves
- Physically occupies hands
- Slow
- Fatiguing
Camera-in-Hand
- A tracker is held in the hand and the camera viewpoint is
slaved to it using an appropriate set of transformations
- This defines a small “workspace” on a table top
- Travel simply involves movement of the hand through the
workspace
How Good Are These Techniques?
- Unfortunately there has been no thorough
comparison of all the different techniques which
have been developed
- One of the first rigorous studies of some of the
trade-offs between different travel techniques
– Travel in Immersive Virtual Environments: An Evaluation of Viewpoint Motion Control Techniques, Bowman, Koller and Hodges
Quality Factors
- 1. Speed (appropriate velocity)
- 2. Accuracy (proximity to the desired target)
- 3. Spatial Awareness (the user’s implicit knowledge of his position and
orientation within the environment during and after travel)
- 4. Ease of Learning (the ability of a novice user to use the technique)
- 5. Ease of Use (the complexity or cognitive load of the technique from the
user’s point of view)
- 6. Information Gathering (the user’s ability to actively obtain information
from the environment during travel)
- 7. Presence (the user’s sense of immersion or “being within” the
environment)
Experiment 1: Absolute Motion Task
- Absolute motion task
– Move until you are inside a virtual sphere
– Gaze v. point AND constrained v. unconstrained
- Constrained disallowed movement in z direction
– Hypothesis - gaze expected to be better
- Neck muscles are more stable
- More immediate feedback
- Eight subjects, each doing four blocks of 80 trials (five repetitions ×
four distances to target × four target sizes)
Experiment 1
Bowman, Koller and Hodges
- Results
- No difference between techniques
- Significant factors were target distance and size
Experiment 2: Relative Motion Task
- Relative motion task
- Move to a position
defined relative to a virtual object
- Need forward and
reverse direction
- Nine subjects, four sets of 20 trials
Bowman, Koller and Hodges
Experiment 2
- Obvious difference
- Can’t point at target and look at departure
point simultaneously
Bowman, Koller and Hodges
Summary of First Two Experiments
Bowman, Koller and Hodges
Experiment 3: Spatial Awareness Task
- Environment consists of cubes of
contrasting colours
- User asked to identify a cube and
push the L or R mouse button
- User travels to a
different location and repeats the trial
Bowman, Koller and Hodges
Experiment 3
- Testing spatial awareness based on four travel
variations
– Constant speed (slow)
– Constant speed (fast)
– Variable speed (smooth acceleration)
– Jump (instant translation)
- Concern is that jumps and other fast transitions
will confuse users
Experiment 3
- Slow and fast velocity
made no difference to task time
- However, jumping from
point to point led to
significant disorientation and
increased task execution time
Virtual Locomotion Summary
- Virtual locomotion or travel occurs when the
viewpoint changes but there is minimal physical interaction from the user
- The interaction metaphors have to specify the
direction, velocity and input conditions
- Many metaphors have been proposed
- Few have been analysed:
– Pointing is better for relative tasks – Jumping from location to location is disorienting
- An alternative is to have physical locomotion
Physical Locomotion
- Map locomotion “as naturally as possible” to
human movement
Direct Locomotion
- There is an isomorphic
mapping between the real
world and the virtual
environment
- The user physically walks
from point A to point B
- Intuitive and easy to use
- Requires a suitably large
environment
- Requires a wide area
tracking system
Walking-in-Place
- User “walks in place”
- Movement detected by
gait analysis
– Trackers just on feet – Trackers over entire body
- Perceptual mismatch
– Deep down, the user knows that their gait won’t make them move
Which Works Best?
- Despite the huge number of different approaches
which have been developed, relatively few studies have compared the strengths and weaknesses of different approaches
- Travel techniques studied by Bowman et al.
- Comparison of real walking, virtual walking and
flying by Usoh et al.
Walking, Virtual Walking and Flying
- Comparison of real walking, walking in place and
gaze-based flying
- Test scenario was the pit
– Get around the edge to the chair
- 33 naïve subjects and 11 expert subjects recruited
“Walking > Walking-in-Place > Flying, in Virtual Environments”, Usoh, Arthur, Whitton, Bastos, Steed, Slater, Brooks
Results of the Study
Real vs. Virtual Walking vs. Flying
- Real walking
– Best for human-scale spaces – Expensive to implement
- Virtual walking
– Better than flying – Inexpensive to implement
- Limitations of study:
– Walking-in-place implementation was poor
– Avatar realism
– Scenario incongruity
Redirected Walking
Fire drill scenario
Redirected Walking, Razzaque, Kohn, Whitton
Based on the premise that rotating the virtual scene around users makes them turn themselves, so they can navigate a much larger area without being aware of it
Redirected Walking
Plan view of scenario Plan view of lab area
Redirected Walking
Redirected Walking
- Apply an angular
distortion:
– Small constant offset causes viewpoint to slowly rotate
– Increase distortion rate proportionally with user’s walking speed
– Increase distortion rate proportionally with user’s angular velocity of head
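The distortion schedule described on this slide can be sketched as a simple rate function. The constants below are illustrative placeholders, not Razzaque et al.'s calibrated values:

```python
def redirection_rate(walk_speed, head_yaw_rate,
                     base=0.5, k_walk=2.0, k_head=0.1):
    """Scene rotation rate in degrees per second.

    base   - small constant offset (slow steady rotation)
    k_walk - gain on walking speed (m/s)
    k_head - gain on head angular velocity (deg/s)
    """
    return base + k_walk * walk_speed + k_head * abs(head_yaw_rate)
```

Each frame the scene is rotated by `redirection_rate(...) * dt`; the faster the user walks or turns their head, the more rotation can be injected without being noticed.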
Redirected Walking in the CAVE
- Problems with walking in
the CAVE:
– You eventually hit the walls
– You can turn and see the missing back wall
- One means of countering
this is to rotate the environment
– The user is directed back to the front wall
Redirected Walking in Place, Razzaque, Swapp,Slater, Whitton, Steed
Redirected Walking in the CAVE
- Apply a small rotation to the scene to
cause user to turn towards centre
– Sufficiently small that it is not consciously noticed
– Subject responds to maintain balance
- Increase rate when user is navigating or
rapidly turning head
- Results:
– Variance in number of times user saw back wall decreased
– Rates of simulator sickness were not increased
– Some users did not notice the rotation
Constrained Walking
- User walks but motion is
constrained
– VirtuSphere – Treadmills
- However, most forms can
be very difficult to use
– Mismatch in perceptual cues
– Dynamics / inertia of device make it hard to navigate effectively
VirtuSphere
Summary of Experimental Results
- There is no single best virtual locomotion
technique
– Context dependent
– Graceful transitional motions should be used if subjects are to understand the context of the environment
- Physical locomotion provides greatest presence
– Only works for human-sized spaces – Needs lots of room!
Manipulation
- Introduction
- Models of Interaction
- Interaction Methods
– Locomotion – Manipulation
Manipulation
- Manipulation of the environment consists of
changing the contents of the virtual environment
- The term is extremely broad:
– Complex manipulation of an object’s structure, such as moulding or sculpting
– Changing abstract properties of an object, such as its ownership
- We consider rigid body manipulation tasks only
– Operations such as scaling or distorting are frequently implemented as rigid body manipulations of widgets
Canonical Manipulation Tasks
- Selection
– Task of acquiring or identifying a particular object from the set of objects available
- Positioning
– Change the real-world position of an object
- Rotation
– Change the attitude of an object
- This has a direct analogy with 2D GUIs
Taxonomy of Manipulation
Direct Interaction Techniques
- User selects and
manipulates an object using a virtual hand
- A 3D cursor visualises the
current locus of user input
- The user intersects the
cursor with an object and uses a trigger technique
- The object is rigidly
attached to the user’s hand
Simple Virtual Hand
- There is a direct mapping from the user’s hand to the movement of the virtual hand
– Linear displacement often scaled
– Orientation is not
- Mapping from inputs to hand movement is achieved through a
transfer function
- However, if users have to manipulate an object out of arm’s
reach, they have to travel to get it
Go-Go Interaction
- Nonlinear mapping
between the user’s hand and the virtual hand
- Close in, the hand behaves
like a simple virtual arm
- Further out, it becomes
nonlinearly scaled
- Users generally found it
easy to understand and use for manipulation
Poupyrev et al. / Egocentric Object Manipulation in VEs: Empirical Evaluation of Interaction Techniques
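The Go-Go mapping is commonly written as a piecewise function: isomorphic within a threshold distance of the body, quadratic beyond it. The threshold and gain below are illustrative values, not Poupyrev et al.'s exact calibration:

```python
def gogo_arm_length(real_length, threshold=0.4, k=6.0):
    """Map real arm extension (metres) to virtual arm extension."""
    if real_length < threshold:
        return real_length                                   # close in: isomorphic
    return real_length + k * (real_length - threshold) ** 2  # nonlinear reach
```

With these values a 0.3 m reach stays at 0.3 m, while a 0.9 m stretch extends the virtual hand to 2.4 m, letting the user grab distant objects without travelling.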
World in Miniature Technique
- Rather than scale the user’s
hand, scale the world
- Smaller version of the world
created and superimposed on the real world
- User controls WIM using a
handheld ball
- Can interact with the environment by
selecting either the 1:1 scale object or the
same object on the WIM
World in Miniature, Stoakley and Pausch
Indirect Interaction Techniques
- User works with objects
beyond arm’s reach
- Points at object with their
hand
- Selects by pressing a
button or some other discrete action
- Object then becomes tied
to the user in some way for manipulation
Ray-Based Interaction
- Ray-Based
– Ray is centred on user’s hand – All manipulations are relative to hand motion
- Translation in beam direction is
hard
- Rotation in local object
coordinates is nearly impossible
- Picking small objects in the far
distance is hard because a very high degree of angular accuracy is required
Mark Mine, http://www.cs.unc.edu/~mine/isaac.html
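The core of ray-based selection is a ray-object intersection test. A minimal ray-sphere version (a sketch, assuming a unit-length ray direction and sphere-shaped pick volumes) also shows why small distant targets are hard: the allowed miss margin shrinks with distance.

```python
def ray_hits_sphere(origin, direction, centre, radius):
    """True if a ray (unit direction) passes within `radius` of `centre`."""
    to_centre = [c - o for o, c in zip(origin, centre)]
    t = sum(a * b for a, b in zip(to_centre, direction))   # project onto ray
    if t < 0.0:
        return False                                       # sphere is behind origin
    nearest = [o + t * d for o, d in zip(origin, direction)]
    miss2 = sum((c - n) ** 2 for c, n in zip(centre, nearest))
    return miss2 <= radius * radius
```

A 0.1 m-radius target at 10 m tolerates only about 0.6° of angular error (atan(0.1/10) ≈ 0.57°), which is why picking small far-away objects demands very high pointing accuracy.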
Aperture Technique
- The angle subtended by the
selection cone can be changed by the user
– Head and hand tracked
– Line through head and hands defines cone direction
– Distance between head and hands defines angle of the cone
- The rotation of the hand sensor
can be used to perform further disambiguation
– User orients hand held wand to align with orientation of object to be picked up
Image-Plane Technique
- User selects 3D objects by
manipulating their 2D
projection on a virtual image
plane
- If single hand tracked,
aperture technique
- If multiple hands
tracked, head crusher technique
Flashlight Technique
- Rather than use a
single ray for selection, selection occurs over a selection cone
- Disambiguation
strategies include:
– Object closest to centre line selected
– Object closest to user selected
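A sketch of the flashlight metaphor with the closest-to-centre-line disambiguation rule. The object layout, naming, and cone angle in the example are made up for illustration:

```python
import math

def flashlight_select(origin, direction, objects, half_angle):
    """Return the name of the in-cone object closest to the centre line."""
    best_name, best_angle = None, None
    for name, pos in objects.items():
        offset = [p - o for o, p in zip(origin, pos)]
        dist = math.sqrt(sum(c * c for c in offset))
        if dist == 0.0:
            continue
        # Angle between the cone axis and the direction to the object.
        cos_a = sum(d * c for d, c in zip(direction, offset)) / dist
        angle = math.acos(max(-1.0, min(1.0, cos_a)))
        if angle <= half_angle and (best_angle is None or angle < best_angle):
            best_name, best_angle = name, angle
    return best_name   # None if nothing falls inside the cone
```

Swapping the comparison to distance from the user gives the alternative closest-to-user disambiguation strategy.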
Hybrid Techniques
- All manipulation events are based on a repeated
task sequence of selection followed by manipulation
- Therefore, different tasks can use different
interaction metaphors
- Each metaphor can be optimised for the particular
task in the sequence it’s been assigned to do
Scaled World Grab
- The user selects an object using an appropriate image
plane-based technique
- Once selected, the whole world is scaled down to bring
the object within the user’s reach
- The scaling is carried out about a point midway between the
user’s eyes
– The user does not notice the scaling operation because the world does not change visually
- Near and far objects easily moved
- However, slight head movements can massively change
the view of the model
- Problems taking a small, close model and moving it further
away
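The scaled world grab can be sketched as a uniform scale about the eye midpoint, with the scale factor chosen to bring the selected object within arm's reach. This is a simplified sketch of the idea, not Mine et al.'s exact formulation:

```python
import math

def grab_scale(eye_mid, object_pos, arm_length):
    """Scale factor that brings the selected object to arm's reach."""
    dist = math.sqrt(sum((p - e) ** 2 for e, p in zip(eye_mid, object_pos)))
    if dist == 0.0:
        return 1.0                      # already at the eyes; no scaling
    return min(1.0, arm_length / dist)  # never scale the world up

def scale_about(point, pivot, s):
    """Uniformly scale a world point about the pivot (the eye midpoint)."""
    return tuple(pv + s * (p - pv) for p, pv in zip(point, pivot))
```

Because the scale pivot is between the eyes, the image on the retina is unchanged at the moment of scaling, which is why the user does not notice the operation.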
“Over-generalized findings from other designers’ experiences are more apt to be right than the designer’s uninformed intuition” (Brooks, 1988)
What’s Best?
- Only a few studies of different interaction
techniques have been completed
- Confounding factors:
– Internal validity: the experiment actually tests the property you want to test
– External validity: the results of the experiment can be generalised to other scenarios where experiments have not been conducted
Evaluating Selection
- Ray-casting and image-plane are generally more effective
than Go-Go
– Exception: high precision selection, e.g. small or far away objects, can be easier with Go-Go
- Different studies get significant differences in performance:
– Poupyrev: the difference between Go-Go and pointing was not large (10 to 20 percent)
– Bowman: the difference between Go-Go and pointing was significant (20 to 60 percent)
– Probably due to differences in implementation
- Ray-casting techniques can be approximated as 2D
techniques
Evaluating Manipulation
- Very difficult to do:
– Large number of variables affecting user performance: direction of movement, distance, accuracy
- Preliminary positioning experiments indicate that:
– Ray is effective for repositioning at a constant distance and within the user’s reach
– Go-Go and scaled world grab have been reported effective in some positioning tasks
- Mine independently studied effects of proprioception on
manipulation
Docking Cube Experiment
- Align docking cube with target cube as quickly as
possible
- Comparing three manipulation techniques:
– Object in hand – Object at fixed distance – Object at variable distance (scaled by arm extension)
- Object-in-hand was significantly faster
Shaping 3D Boxes Experiment
- Study of two handed selection of volumetric data
for box creation using 3D tracking equipment:
– Hand-on-Corner (HOC): user’s non-dominant hand holds one corner of the box while the dominant hand controls the position of the opposite corner
– Hand-in-Middle (HIM): similar to HOC except that the non-dominant hand is positioned in the middle
– Two Corners (TC): user shapes a box by dragging apart two diagonally opposite corners of the box
- TC outperforms others with respect to both
accuracy and completion times.
A. C. Ulinski. Taxonomy and Experimental Evaluation of Two-Handed Selection Techniques for Volumetric Data. PhD thesis, University of North Carolina at Charlotte, Charlotte, NC, USA, 2008.
Summarising Studies
- Direct interaction
– Seems to be most effective for close-range interaction tasks
- Ray-based interaction
– Best when all objects stay at the same distance from the user
- Go-go
– Usually less effective – However, supports uniform interaction within the manipulation area
Summary
- Introduction to Interaction
- Models of Interaction
– Extended Desktop Model – Virtual Reality Model
- Interaction Methods