

SLIDE 1

Physical and Virtual Objects

Andrew T. Stull

Department of Psychological & Brain Sciences, University of California, Santa Barbara

ThinkSpatial Brown Bag, 2/14/12

SLIDE 2

Thank you!

  • Mary Hegarty
  • Trevor Barrett
  • Rich Mayer
  • Russ Revlin
  • Jack Loomis
  • Bailey Bonura
  • Jana Ormsbee
  • Taylor Davis
  • Bonnie Dixon
  • Mike Stieff

SLIDE 3

Physical and Virtual Objects

Research:

  • Multimedia learning
    – How do we design instructional material that promotes meaningful learning?
  • Small-scale spatial cognition
    – How does spatial ability affect learning with objects?
  • Human-computer interaction
    – How do we design productive interactions?

SLIDE 4

Physical and Virtual Objects

Studies:

  • Physical objects
    – Chemistry models and representational translation
    – When used as an intermediary, models are helpful
  • Virtual objects
    – Anatomy learning and orientation references
    – Salient visual cues help to resolve disorientation
  • Future directions
    – Cognitive and perceptual differences
    – Design of the interface matters
SLIDE 5

Physical Objects

  • Our hand is an interface to the world.
  • Actions performed in the world can help us think.

Kirsch & Maglio (1994); Gray & Fu (2004)

SLIDE 6

Chemistry Models

Draw a Dash-Wedge diagram.

[Figure: dash-wedge diagram of a molecule with substituents CH3, Cl, OH, and H]

SLIDE 7

Chemistry Models

Are models helpful? How?

Diagram types: Dash-Wedge, Newman, Fischer
Task: Translate one to another
IV: Model vs. No Model

SLIDE 8

Models vs. No Models

  • Participants: 64 organic chemistry undergrads (35 female)
  • Stimuli & Task: 3 diagrams and 1 model (18 trials)
  • Design: Between subjects (Models vs. No Models)
    – Models group
      • models provided and use was encouraged
      • models not aligned with starting diagram
    – No Models group did not receive models
  • Measures:
    – Drawing accuracy
    – Model use behaviors
    – Spatial ability (MRT)
    – Experience
  • Trials recorded on video for later coding
  • Groups did not differ on spatial ability or organic chemistry experience.
SLIDE 9

Models vs. No Models

  • People with models drew more accurate translations than people without models.
    – F(1,62) = 5.04, p = .028*, d = 0.56

  Group      N   Drawing accuracy M (SE)
  Models     32  .40 (.05)
  No Models  32  .26 (.04)

  • Groups did not differ on spatial ability or experience.

[Figure: bar chart of drawing accuracy (proportion) for the Model and No Model groups]
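As a sanity check on the reported effect size: for two equal-sized groups, F = t², and Cohen's d can be approximated as 2t/√df_error. A minimal sketch (the function name is ours, not part of the study):

```python
import math

def cohens_d_from_f(f_stat: float, df_error: int) -> float:
    """Approximate Cohen's d for a two-group between-subjects design.

    With two equal-sized groups, F = t^2, and d is approximately
    2 * t / sqrt(df_error).
    """
    t = math.sqrt(f_stat)
    return 2 * t / math.sqrt(df_error)

# Slide reports F(1,62) = 5.04 with d = 0.56; the approximation
# gives about 0.57 (the small gap is rounding/pooling in the original).
print(round(cohens_d_from_f(5.04, 62), 2))  # → 0.57
```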

SLIDE 10

Models vs. No Models

  • Encouraged use of models
    – 87% of Ps (28 of 32 Ps) used the models
    – 4 Ps used the model on every trial
  • Types of model use
    – move (any) (28 Ps, 44% of trials)
    – align to start (26 Ps, 24% of trials)
    – align to target (24 Ps, 35% of trials)
    – reconfigure (15 Ps, 22% of trials)
  • Classification of people as users or non-users
SLIDE 11

Models vs. No Models

  • Drawing accuracy
    – F(2,61) = 18.59, p < .01*
    – Users vs. No Model: t(61) = 5.65, p < .01*, d = 1.52
    – Users vs. Non-users: t(61) = 5.47, p < .01*, d = 1.69
    – Non-users vs. No Model: t(61) = 0.41, p < 1.0

  Group         N   Drawing accuracy M (SE)
  Models (all)  32  .40 (.05)
  User          13  .66 (.06)
  Non-user      19  .23 (.05)
  No Models     32  .26 (.04)

  • Users and Non-users were divided by 50% align to target.
  • Groups did not differ on spatial ability or experience.

[Figure: bar chart of drawing accuracy (proportion) for User, Non-user, and No Models groups]
SLIDE 12

Models vs. No Models

Scoring rubric for drawings:

  Level     Parts  Centers  Order
  Level 0   X
  Level 1   √      X
  Level 2   √      √        2
  Level 2.5 √      √        1
  Level 3   √      √        √
SLIDE 13

Models vs. No Models

  • Proportion of people performing at each level or better (66.7% criterion)

[Figure: proportion of Ps at levels 1, 2, 2.5, and 3, for Model vs. No Model and for Users vs. Non-users vs. No Model]

  • Most level 3 Ps were model users.
SLIDE 14

Models vs. No Models

Correlations

  • Accuracy is correlated with spatial ability (r = .32, p = .01*)
  • Model use is correlated with accuracy
    – move (any) (r = .74, p < .01*)
    – align to start (r = .54, p < .01*)
    – align to target (r = .84, p < .01*)
    – reconfigure (r = .66, p < .01*)
  • Model use is correlated with spatial ability
    – move (any) (r = .33, p = .03*)
    – align to start (r = .27, p = .07)
    – align to target (r = .33, p = .03*)
    – reconfigure (r = .40, p = .01*)
SLIDE 15

Models vs. No Models

  • Together, spatial ability, align to start, align to target, and reconfigure explain 72% of the variance in drawing accuracy.
    – R = .85; F(4,27) = 16.90, p < .01*
  • Partial regression coefficients:
    – spatial ability: β = .01, p = .92, sr² < .01
    – align to start: β = -.16, p = .32, sr² = .01
    – align to target: β = .82, p < .01*, sr² = .27
    – reconfigure: β = .17, p = .32, sr² = .01
  • Only Align to Target uniquely predicted accuracy after controlling for the other variables.
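For readers who want to check the arithmetic: betas and sr² values like those above come from ordinary least squares, where sr² for a predictor is the drop in R² when that predictor is removed. A minimal sketch (the function name is ours, and any data fed to it is illustrative, not the study's):

```python
import numpy as np

def std_betas_and_sr2(X: np.ndarray, y: np.ndarray):
    """Standardized regression coefficients (betas) and squared
    semi-partial correlations (sr^2) for each predictor.

    sr^2 for predictor j = R^2(full model) - R^2(model without j),
    i.e. that predictor's unique contribution.
    """
    # z-score predictors and criterion so coefficients are betas
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)

    def r_squared(pred):
        A = np.column_stack([np.ones(len(yz)), pred])
        coef, *_ = np.linalg.lstsq(A, yz, rcond=None)
        resid = yz - A @ coef
        return 1.0 - (resid @ resid) / (yz @ yz)

    full = r_squared(Xz)
    betas = np.linalg.lstsq(
        np.column_stack([np.ones(len(yz)), Xz]), yz, rcond=None
    )[0][1:]
    sr2 = np.array(
        [full - r_squared(np.delete(Xz, j, axis=1)) for j in range(Xz.shape[1])]
    )
    return betas, sr2, full
```

Feeding it the four predictors (spatial ability, align to start, align to target, reconfigure) and drawing accuracy would reproduce the full-model R² and each predictor's unique contribution.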

SLIDE 16

Also

  • Providing models aligned to the given diagram is not helpful.
  • Training students to relate the model to the given diagram is not helpful.
  • Having students "discover" that they are wrong when not using the model is helpful.
SLIDE 17

Physical Objects

Summary

  • Students should use the models.
    – Most don't without encouragement.
  • When wrong, most students are close.
  • Spatial ability is a predictor of accuracy.
  • Model use may compensate for spatial ability.
SLIDE 18

Virtual Objects

  • Technology is rapidly replacing traditional material.
  • Low-spatial individuals may be especially burdened. (Garg, Norman, Eva, Spero, & Sharan, 2002)
  • Disorientation is common for some people when they use virtual objects. (Cohen & Hegarty, 2007; Keehner et al., 2008)

SLIDE 19

Virtual Objects

Same or Different?

SLIDE 20

Virtual Objects

Procedure

Spatial Ability Measure → Training & Practice → Object Manipulation (error & efficiency) → Anatomy Posttest (feature recognition: learning measure)

SLIDE 21

Virtual Objects

  • Training (5 min)
    – anatomy of bone
    – 2-page paper booklet
    – 5 anatomical features
  • Practice (3 min)
    – interface practice
    – review anatomy on 3D computer model

[Figure: vertebra with labeled features: spinous process, superior articular process, inferior articular process, transverse process, transverse foramen]
SLIDE 22

Virtual Objects

Object Manipulation (orientation matching)

Orientation References vs. Control

[Figure: Start and Target orientations of the vertebra (transverse foramen labeled), shown in the Orientation References condition vs. the Control condition]

SLIDE 23

Virtual Objects

When interacting with virtual objects, learners are frequently disoriented.

  • Do orientation references reduce disorientation?
  • Do they improve learning?
  • How do these factors interact with a learner's spatial ability?

Stull, Hegarty, & Mayer, 2009

SLIDE 24

Virtual Objects

  • Design: Between subjects
    – Orientation References (38) vs. Control (37)
    – High Spatial (36) vs. Low Spatial (39)
  • Measures:
    – Object Manipulation:
      • error (deg) – success
      • directness (deg × sec) – efficiency
    – Anatomy Posttest:
      • feature recognition (prop correct)
  • Groups did not differ on spatial ability or experience.

SLIDE 25

Virtual Objects

Manual: error (deg)

  • Participants who used ORs were more accurate.
    – F(1, 71) = 7.62, p = .01*, d = 0.63
  • Spatial ability significantly predicted accuracy.
    – F(1,71) = 5.32, p = .02*, d = 0.46

[Figure: bar chart of manual error (deg) by group; lower is better]

SLIDE 26

Virtual Objects

Manual: directness (deg × sec)

  • Participants who used ORs were more direct.
    – F(1, 71) = 20.02, p < .001*, d = 0.86
  • Spatial ability significantly predicted directness.
    – F(1,71) = 24.50, p < .001*, d = 0.79

[Figure: bar chart of directness (deg × sec) by group; lower is better]

SLIDE 27

Virtual Objects

Learning: feature recognition (prop correct)

  • High spatial Ps learned well with or without ORs.
  • OR × SA interaction: F(1,71) = 5.92, p = .02*
  • Low spatial Ps who used ORs learned more than those without ORs.
    – F(1,71) = 6.27, p = .02*, d = 0.76

[Figure: bar chart of feature recognition (prop correct) by group; higher is better]

SLIDE 28

Virtual Objects

  • The shape of the object may hurt performance.
  • Are learners making a symmetry error?

[Figure: example of a symmetry error (error > 120°)]

SLIDE 29

Virtual Objects

Manual: Symmetry Errors

  • Symmetry errors were significantly more common for the control group (74% of errors > 120° are symmetry errors).
    – F(1,73) = 7.16, p < .01*
  • ORs may have helped to disambiguate the shape of the object.
  • All trials for all Ps are represented.

SLIDE 30

Virtual Objects

Same/Different decision

  • 2 images can be the same (0°) or vary by 30°, 60°, or 180°
  • 80 trials (20 orientations × 4 angles)
  • 0°/180° is the key comparison
  • Record eye gaze information
  • Record accuracy of decision

[Figure: example stimulus pairs at 0°/0°, 0°/30°, 0°/60°, and 0°/180°]

SLIDE 31

Virtual Objects

Accurate (10 best Ps) vs. Inaccurate (10 worst Ps)

[Figure: error and eye fixation by angle of disparity]

SLIDE 32

Virtual Objects

Summary

  • Some people are easily disoriented with virtual objects.
  • Strong visual cues (ORs) reduce disorientation when manipulating virtual objects.
  • Stereo depth cues are also helpful.
  • High spatial Ps do well with or without visual cues.
  • Learning interacts with spatial ability.
  • Low spatial learners are particularly helped by ORs.

SLIDE 33

Interacting with Virtual Objects

When working with computers, including virtual reality, our primary interface (the hand) is filtered by a secondary interface. How does this physical "filter" of the interface affect user performance?

What are the important cognitive and perceptual factors?

SLIDE 34

Physical and Virtual Objects

Chemical models

  • VR system designed to match the physical experience
    – Stereo glasses
    – Co-located interface
  • Interface allows for important interactions
    – Object rotation
    – Bond rotation

[Figure: VR setup with monitor, mirror, and interface; the virtual model image and the interface were co-located]

SLIDE 35

Physical and Virtual Objects

  • Current:
    – Comparing use of VR models to physical models
      • May be seeing a facilitation from using VR
      • Constrained interaction
  • Future:
    – Evaluate
      • Co-location of the interface and image
      • Stereo depth cues
      • Size and shape of the interface
      • Types of interaction
SLIDE 36

Thank you!

  • Mary Hegarty
  • Trevor Barrett
  • Rich Mayer
  • Russ Revlin
  • Jack Loomis
  • Bailey Bonura
  • Jana Ormsbee
  • Taylor Davis
  • Bonnie Dixon
  • Mike Stieff