On Visual Studies I
707.031: Evaluation Methodology, Winter 2015/16
Eduardo Veas


SLIDE 1

On Visual Studies I

707.031: Evaluation Methodology Winter 2015/16

Eduardo Veas

SLIDE 2

Research Projects

  • Augmented Data
  • Real-time (RT) assistance and instructions
  • Record/replay instructions from an expert
  • Assist non-experts with instructions
  • LOD for real-time instructions
  • Augmented Knowledge Spaces
  • Use space to organize and interact with technology
  • Investigate novel technologies

SLIDE 3

The Human Vision

How it works (when it does)

SLIDE 4

Model Human Processor

Source: Card et al., 1983

SLIDE 5

Perception vs. Cognition


Perception

  • Eye, optic nerve, visual cortex
  • Basic perception
  • First processing (edges, planes)
  • Not conscious
  • Reflexes

Cognition

  • Recognizing objects
  • Relations between objects
  • Drawing conclusions
  • Problem solving
  • Learning
SLIDE 6

Model Human Processor (3): Perception

  • Encodes input into a physical representation
  • Stored in temporary visual / auditory memory
  • New frames in perceptual memory (PM) activate frames in working memory (WM) and possibly in long-term memory (LTM)
  • Unit percept: inputs arriving faster than Tp combine into a single percept

SLIDE 7

Human Visual System

  • 6.5 million cones
    – dense in the center
    – 3 cone types (RGB)
    – 3 opponent color channels (black-white, red-green, blue-yellow)
  • Fovea: 27 times the cone density
    – responsible for sharp central vision
  • 118.5 million rods
    – black/white

SLIDE 8

Color perception


SLIDE 9

Color Reception, Composition

  • We have three distinct color receptors (cones)
    – hence the three-dimensionality of the color space
    – that's why we have three primary colors
    – also evident in color models, e.g., RGB and CMY
  • Color composition
    – additive (e.g., RGB): light; white results when all three cones are stimulated with the same intensity at high brightness
    – subtractive (e.g., CMYK): pigment
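The additive case above can be sketched in code: light sources sum channel-wise, clipped to the display maximum. A minimal illustration (the function name and 8-bit range are my own choices):

```python
def mix_additive(*colors):
    """Additive RGB mixing: light sources sum channel-wise,
    clipped to the 8-bit maximum of 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

# Red, green, and blue light at full intensity combine to white,
# mirroring all three cone types being stimulated equally.
white = mix_additive((255, 0, 0), (0, 255, 0), (0, 0, 255))
yellow = mix_additive((255, 0, 0), (0, 255, 0))
```

Subtractive (pigment) mixing works the other way around: each pigment absorbs part of the light, so combining pigments darkens the result.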
SLIDE 10

Color Perception

  • 3 opponent color channels (black-white, red-green, blue-yellow)
  • There are 6 colors arranged perceptually as opponent pairs along 3 axes (Hering 1920)
  • L = long, M = medium, S = short wavelength receptors

SLIDE 11

Simultaneous Brightness Contrast

  • The perceived brightness of an object is relative to its background
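The effect is easy to reproduce: place two identical gray patches on a dark and a light background, and the patch on the dark side appears brighter. A small sketch that builds such a test image as a plain 2D list (all names and intensity values are illustrative):

```python
def contrast_demo(w=200, h=100, patch=20, bg_dark=50, bg_light=200, gray=125):
    """Build a grayscale image: dark left half, light right half,
    with an identical mid-gray patch centered in each half.
    Viewed on screen, the patch on the dark half looks brighter."""
    img = [[bg_dark if x < w // 2 else bg_light for x in range(w)]
           for _ in range(h)]
    for cx in (w // 4, 3 * w // 4):  # patch centers, one per half
        for y in range(h // 2 - patch // 2, h // 2 + patch // 2):
            for x in range(cx - patch // 2, cx + patch // 2):
                img[y][x] = gray
    return img
```

Both patches have the exact same pixel value; only the surround differs, which is the whole point of the demonstration.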

SLIDE 12

Color

  • Color vision is irrelevant to much of normal vision!
    – it does not help to perceive the layout of objects
    – how they are moving
    – what shape they are
  • Color breaks camouflage (German: Tarnung)
  • Color tells about material properties (e.g., judging the quality of food)

SLIDE 13

Color Blindness

  • 10% of males, 1% of females (probably due to X-chromosomal recessive inheritance)
  • Most common: red-green weakness / blindness
  • Reason: lack of medium- or long-wavelength receptors, or altered spectral sensitivity (most common: green shift)

[Images: normal color perception; deuteranopia (no green receptors); protanopia (no red receptors)]

SLIDES 14-20

Ishihara Color Blindness Test (a series of test plates, one per slide)

SLIDE 21

Visual Perception


A construction site

SLIDE 22

Human Visual System

  • 6.5 million cones
    – dense in the center
    – 3 cone types (RGB)
    – 3 opponent color channels (black-white, red-green, blue-yellow)
  • Fovea: 27 times the cone density
    – responsible for sharp central vision
  • 118.5 million rods
    – black/white

SLIDE 23

Human Visual System

  • Vision: a sequence of fixations and saccades
    – fixations: maintaining gaze on a single location (200-600 ms)
    – saccades: moving between different locations (20-100 ms)
  • Vision is not like a camera
    – more similar to a dynamic, ongoing construction project
SLIDE 24

What decides where we look?

  • On inherited knowledge
SLIDE 25

Visual Saliency: perceptual selection

  • The perceptual quality that makes some items in the world stand out from their neighbors (Itti)
  • Bottom-up and top-down contributions
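Bottom-up saliency in the Itti style is commonly computed from center-surround differences: a location is salient when it differs from its neighborhood at a coarser scale. A toy single-channel version (pure Python, box filters standing in for the Gaussian pyramids of the real model; all names are mine):

```python
def box_blur(img, r):
    """Mean filter with radius r (window clamped at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[yy][xx]
                    for yy in range(max(0, y - r), min(h, y + r + 1))
                    for xx in range(max(0, x - r), min(w, x + r + 1))]
            out[y][x] = sum(vals) / len(vals)
    return out

def saliency_map(img):
    """Center-surround: |fine scale - coarse scale| per pixel."""
    fine, coarse = box_blur(img, 1), box_blur(img, 3)
    return [[abs(f - c) for f, c in zip(fr, cr)]
            for fr, cr in zip(fine, coarse)]

# A single bright pixel in a flat field pops out as the most salient spot.
img = [[0.0] * 15 for _ in range(15)]
img[7][7] = 1.0
sal = saliency_map(img)
```

The full model sums such maps over color, intensity, and orientation channels; top-down contributions (task, knowledge) are what this bottom-up map does not capture.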

SLIDE 26

Computed Saliency vs Eye Tracked Attention

SLIDE 27

Feature Integration


Reconstructing objects

SLIDE 28

Feature Integration Theory

[Diagram: identify primitives (preattentive stage) → combine primitives (focused attention) → perceive object → compare with memory; spans perceptual and cognitive processing. Related qualities: legibility, spatial interpretation, navigation, noticeability, density, clutter]

SLIDE 29

Feature Integration Theory II

SLIDE 30

Preattentive Processing

  • Properties detected by the low-level visual system
    – very rapid
    – very accurate
    – processed in parallel
  • Takes about 200-250 milliseconds
  • Independent of the number of distractors!
  • Opposite: sequential search (processed serially)
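The parallel-versus-serial contrast is usually summarized as flat versus linearly growing response times with set size. A toy model of that relationship (the constants are illustrative, not empirical values from these slides):

```python
def search_rt_ms(n_distractors, preattentive, base=200.0, per_item=40.0):
    """Toy visual-search model.
    Preattentive pop-out: response time is flat in the number of
    distractors. Serial search: on average half the items are
    inspected before the target is found, so RT grows linearly."""
    if preattentive:
        return base
    return base + per_item * (n_distractors + 1) / 2

# Pop-out stays constant regardless of set size; serial search grows.
flat = [search_rt_ms(n, True) for n in (1, 10, 100)]
linear = [search_rt_ms(n, False) for n in (1, 10, 100)]
```

Plotting RT against set size and checking whether the slope is near zero is the standard way to argue a feature is preattentive.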

SLIDE 31

Preattentive Features


Fast attractors

SLIDE 32

Difference in Hue


Examples online!

SLIDE 33

Difference in Curvature / Shape

SLIDE 34

Not Valid for Combinations

  • Conjunction targets have no unique visual property
  • Target: a red circle
  • Distractor objects carry both properties, so the target cannot pop out
SLIDE 35

Some Preattentive Properties

  • Orientation, length, closure, size, curvature, density, hue, flicker, direction of motion

SLIDE 36

Tasks

  • Target detection
    – detect the presence or absence of a target
  • Boundary detection
    – detect a texture boundary between two groups of elements, where all of the elements in each group have a common visual property
  • Region tracking
    – track one or more elements with a unique visual feature as they move in time and space
  • Counting and estimation
    – users count or estimate the number of elements with a unique visual feature

SLIDE 37

Tasks

Number Estimation Boundary Detection

SLIDE 38

Hierarchy of Preattentive Features

Examples online!

SLIDE 39

Theories of Preattentive Processing

  • It is not known for sure how preattentive processing works
  • Several theories exist:
    – http://www.csc.ncsu.edu/faculty/healey/PP/index.html

SLIDE 40

EYE TRACKING

SLIDE 41

Eye-Tracking

  • Measurement and study of eye movements

SLIDE 42

Functions of eye movement

  • Get the fovea to the interesting information (fixations)
  • Keep the image stationary despite movements of the object or of the head (smooth pursuit)
  • Prevent objects from perceptually fading (refresh, microsaccades)

SLIDE 43

Eye Tracking as a Measure of Human Behavior

  • Aim: identify patterns in the deployment of visual resources when performing a task
  • Combining features into a percept requires focus of attention
  • The more complicated, confusing, or interesting the stimulus, the longer it takes to "form a picture"

SLIDE 44

Tracked Eye Movements


Task             | Typical mean fixation duration (ms) | Mean saccade size (degrees)
Silent reading   | 225-250                             | 2 (8-9 letter spaces)
Oral reading     | 275-325                             | 1.5 (6-7 letter spaces)
Scene perception | 260-330                             | 4
Visual search    | 180-275                             | 3

SLIDE 45

Additional measurements: Eye Tracking

  • Fixation: dwell time on a given area
  • Saccade: quick movement of the eye between two points
  • Scan path: directed path of saccades and fixations
  • How to use these measurements?
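One common first step: raw gaze samples are segmented into fixations and saccades, for example with the dispersion-threshold algorithm (I-DT). A compact sketch (the thresholds and tuple layout are my own choices, not from a specific toolkit):

```python
def idt_fixations(samples, max_disp=1.0, min_dur_ms=100):
    """Dispersion-threshold fixation detection (I-DT).
    samples: list of (t_ms, x, y), time-ordered. Grows a window while
    the dispersion (x-extent + y-extent) stays under max_disp; emits
    (onset_ms, duration_ms, cx, cy) when the window lasts min_dur_ms."""
    fixations, i, n = [], 0, len(samples)
    while i < n:
        j = i
        while j + 1 < n:
            xs = [s[1] for s in samples[i:j + 2]]
            ys = [s[2] for s in samples[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_disp:
                break
            j += 1
        duration = samples[j][0] - samples[i][0]
        if duration >= min_dur_ms:
            xs = [s[1] for s in samples[i:j + 1]]
            ys = [s[2] for s in samples[i:j + 1]]
            fixations.append((samples[i][0], duration,
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1  # continue after the fixation
        else:
            i += 1     # slide the window forward
    return fixations

# Two stable gaze clusters separated by a saccade-like jump.
gaze = [(t * 10, 0.0, 0.0) for t in range(20)] + \
       [(200 + t * 10, 10.0, 10.0) for t in range(20)]
fix = idt_fixations(gaze)
```

From the resulting fixation list, counts, durations, and locations (the processing measures on the next slide) follow directly.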

SLIDE 46

Additional measurements: Eye Tracking

Processing measures

  – fixation count
  – location of fixations
  – duration of fixations
  – cumulative fixation time
  – cluster analysis (attention sinks)
  – AOI measures, normalized by dividing by all fixations

SLIDE 47

Additional measurements: Eye Tracking

Search measures

  • Scanpath length (distance between gaze point samples, ending at the target)
  • Spatial density (distribution of gaze points and scanpaths)
  • Number of saccades
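Scanpath length and spatial density are straightforward to compute once gaze points are available; a minimal sketch (the grid discretization and all names are mine):

```python
import math

def scanpath_length(points):
    """Sum of Euclidean distances between successive gaze points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def spatial_density(points, grid=4, width=100.0, height=100.0):
    """Fraction of grid cells that received at least one gaze point:
    low values mean gaze was concentrated, high values scattered."""
    cells = {(min(int(x * grid / width), grid - 1),
              min(int(y * grid / height), grid - 1)) for x, y in points}
    return len(cells) / grid ** 2
```

Shorter scanpaths and lower spatial density generally indicate more efficient search for the target.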

SLIDE 48

Eye Tracking Experiments


SLIDE 49

Eye Tracking Experiment: Phase I

  • 1. Learn about ET; find relevant previous ET studies
  • 3. Research design: establish the research question and analysis metrics
  • 4. Pilot: [go to next page] and return to 1 if needed

SLIDE 50

Eye Tracking Experiment: Phase II

  • Prepare your data
  • Test hardware
  • Prepare room and setup
  • Pilot [return to draft?]
  • Execute
  • Calibration
  • Training
  • Calibration (repeat if needed)
  • Testing
  • Analyze

SLIDE 51

Eye Tracker Experiment: research questions

  • Noticeability: time until the first fixation on the object of interest
  • Attention: fixation duration, dwell time on the object of interest
  • Reaction: time between fixation and click
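Given detected fixations and an area of interest (AOI), these three questions map onto simple aggregations. A sketch (the tuple layout and names are assumptions, not from a specific eye-tracking toolkit):

```python
def aoi_metrics(fixations, aoi, click_time_ms):
    """fixations: list of (onset_ms, duration_ms, x, y); aoi: (x0, y0, x1, y1).
    Returns noticeability (time to first fixation on the AOI),
    attention (total dwell time), and reaction (first fixation to click),
    or None if the AOI was never fixated."""
    x0, y0, x1, y1 = aoi
    hits = [f for f in fixations if x0 <= f[2] <= x1 and y0 <= f[3] <= y1]
    if not hits:
        return None
    first = hits[0][0]
    return {
        "time_to_first_fixation": first,
        "dwell_time": sum(f[1] for f in hits),
        "reaction": click_time_ms - first,
    }

# Three fixations, two of them inside the AOI spanning (0..10, 0..10).
metrics = aoi_metrics(
    [(100, 200, 5, 5), (350, 150, 50, 50), (520, 300, 6, 4)],
    aoi=(0, 0, 10, 10), click_time_ms=900)
```

Returning None for never-fixated AOIs matters in practice: the fraction of participants who fixated the AOI at all is itself a noticeability measure.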

SLIDE 52

Eye Tracker Experiment: preparations

  • Set up the testing environment
  • Design tasks and matched analysis metrics
  • Design the study battery: intro, calibration, training, [calibration], test, questionnaires

SLIDE 53

Eye Tracker Calibration

  • Device
    – characterize the user's eyes
    – match the internal model (cornea, fovea)
  • User
    – look at and follow targets
  • Experimenter
    – keep cool and repeat when needed

SLIDE 54

Example: Directing Attention and Influencing Memory with Visual Saliency Modulation (DAIM, VSM)


SLIDE 55

How do we highlight objects in AR?

Veas, Mendez, Feiner, Schmalstieg

SLIDE 56
How do we highlight objects in AR?

  • Can we be more subtle?

Directing Attention and Influencing Memory with Visual Saliency Modulation

SLIDE 57

Eye Tracking: Example

Directing Attention and Influencing Memory with Visual Saliency Modulation [Veas et al. 2011]

  • Characterize bottom-up attention: driven by exogenous cues (100 ms ~ 250 ms)
  • Measure the deployment of attention before and after applying a saliency modulation technique (SMT)
  • Measure recall after experiencing stimuli (modulated and unmodulated)

SLIDE 58

Eye Tracking: Example

Directing Attention and Influencing Memory with Visual Saliency Modulation

  • Motivation: contend that saliency-modulated stimuli affect the deployment of mental resources, both bottom-up (attention) and top-down (memory)
  • If the modulation is subtle enough, the effect is not perceivable

SLIDE 59

Experiments

  • 1. Awareness of modulation: does it work imperceptibly?
  • 2. Attention direction: does it successfully attract attention to the selected regions?
  • 3. Memory influence: does it successfully increase recall?


SLIDE 60

Experiments

[Diagram: study pipeline covering the awareness first pilot, awareness second pilot, and awareness formal study; attention conditions 1 and 2; memory conditions 1 and 2; plus thresholds, regions for modulation, and modulated videos]

SLIDE 61

Eye Tracking Example

Formal awareness study

  – 20 clips @ 4 levels (0, 3, 4, 5) = 80 video-level pairs, each watched by 4 participants
  – 16 participants
  – analysis: related samples (0-3, 0-4, 0-5)
  • Wilcoxon tests: no difference for any of the pairs
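The related-samples comparison here is the Wilcoxon signed-rank test (in practice one would call e.g. scipy.stats.wilcoxon). The statistic itself is simple: rank the absolute paired differences, then compare the rank sums of positive and negative differences. A pure-Python sketch of the statistic (p-value lookup omitted; the function name is mine):

```python
def wilcoxon_T(x, y):
    """Wilcoxon signed-rank statistic T for paired samples: the
    smaller of the rank sums of positive and negative differences.
    Zero differences are dropped; ties get the average rank.
    A small T suggests a systematic difference between conditions."""
    d = [b - a for a, b in zip(x, y) if b != a]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, di in zip(ranks, d) if di > 0)
    w_minus = sum(r for r, di in zip(ranks, d) if di < 0)
    return min(w_plus, w_minus)
```

The test is used here because awareness ratings are ordinal and paired per clip, where a t-test's normality assumption would be hard to justify.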

SLIDE 62

Additional measurements: Eye Tracking

  • ATTENTION EXPERIMENT
  • Does the SMT successfully direct visual attention to the selected regions?
    – 40 participants, between-subjects; conditions: unmodulated, modulated
    – (20 regions × 20 participants) = 400 trials per condition = 800 trials total

SLIDE 63

Results

SLIDE 64

Additional measurements: Eye Tracking

  • ATTENTION EXPERIMENT: HYPOTHESES

Filter: trials where the camera pans away from the FRs are excluded.

  • H1: The time before the first fixation on the FRs will be smaller for modulated videos than for the original ones.
  • H2: The fixation time in the FRs will be higher for modulated videos than for the original ones.
  • H3: The percentage of participants with at least one fixation on the FR will be higher for modulated videos than for the original ones.

SLIDE 65

Institut für Maschinelles Sehen und Darstellen

Eduardo Veas Evaluation-Validation

Additional measurements: Eye Tracking

  • MEMORY EXPERIMENT: METHOD
SLIDE 66

Additional measurements: Eye Tracking

  • MEMORY EXPERIMENT: HYPOTHESES
    – H4: There is no significant difference in recall hits between the unmodulated and the modulated input for HR objects.
    – H5: There is a significant difference in recall hits between the unmodulated and the modulated input for LR objects.

SLIDE 67

Research Positions

  • AR projects @ KC
  • Virtual coffee
    – build an IoT coffee machine
    – add sensors and intelligence to the appliance
  • Augmented Knowledge Spaces
    – use space to organize and interact with technology
  • Investigate novel technologies

SLIDE 68

Readings

  • [Irwin] Fixation Location and Fixation Duration as Indices of Cognitive Processing
  • [Pernice & Nielsen] How to Conduct Eyetracking Studies
  • Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator
  • Tobii eye tracking whitepaper
  • [Veas et al.] Directing Attention and Influencing Memory with Saliency Modulation
