Cognition in Visual Processing 707.031: Evaluation Methodology
Winter 2015/16
Eduardo Veas
Research Projects @ KTI (email Eduardo)
- Connected world
- build connected coffee machine
- build sensing and intelligence into appliances
- Augmented Data
- how can we augment the real world with data?
- investigate different display devices
- investigate different visual techniques
- Augmented Knowledge Spaces
- Use space to organize and interact with technology
- Use natural mobility to interact with augmentations
2
Project Topics
3
- Glove Study: Granit, Arbenore, Santokh, Millot
- AR Signs Study: Eduardo, Santokh, Rene, Millot
- Collection Study: Cecilia,
- VisRec Study: Belgin, Millot, Santokh
- AR navigation study:
Model Human Processor
4
Source: Card et al., 1983
5
Perception vs. Cognition
Perception
- Eye, optic nerve, visual cortex
- Basic perception
- First processing (edges, planes)
- Not conscious
- Reflexes
Cognition
- Recognizing objects
- Relations between objects
- Conclusion drawing
- Problem solving
- Learning
6
Shannon’s Information Theorem for Vis
Visual Hypotheses
- Grouping
– What perceptual factors help me group or categorize visual stimuli?
7
- Separability vs. Integrality
Gestalt Laws
- Understand Pattern Perception
- Wertheimer, Koffka, Köhler (1912), Gestalt School of Psychology
- The explanations they gave were wrong,
- but the OBSERVATIONS were CORRECT
8
- Proximity
- Similarity
- Connectedness
- Continuity
- Symmetry
- Closure
- Relative Size
- Common Fate
Gestalt Laws
The whole is greater than the sum of the parts!
9
10
Proximity
11
Proximity Example
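The proximity figures do not survive in this text export; as a stand-in, a small sketch shows how grouping by a distance threshold operationalizes the law of proximity (the threshold and coordinates are arbitrary illustration values):

```python
# Gestalt law of proximity: elements close together are perceived as a group.
# A simple gap-threshold grouping of 1-D positions mimics that judgment.

def group_by_proximity(positions, gap_threshold):
    """Group sorted positions; start a new group when the gap exceeds the threshold."""
    groups = []
    for p in sorted(positions):
        if groups and p - groups[-1][-1] <= gap_threshold:
            groups[-1].append(p)  # close enough: same perceptual group
        else:
            groups.append([p])    # large gap: new group starts
    return groups

# Three tight clusters of dots, as on a proximity demo slide.
dots = [0.0, 0.2, 0.4, 3.0, 3.1, 6.0, 6.3]
print(group_by_proximity(dots, gap_threshold=1.0))
# -> [[0.0, 0.2, 0.4], [3.0, 3.1], [6.0, 6.3]]
```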
12
Similarity
13
Connectedness
14
Continuity
15
Symmetry
16
Closure
17
Closure Example
18
Relative Size
Same size
19
Figure & Ground
20
Common Fate
Institut für Maschinelles Sehen und Darstellen ‹#›
Eduardo Veas Evaluation-Validation
Data Attributes (scales, levels)
- Qualitative
- Nominal / Categorical
- Categories with no ordering (gender, nationality, blood type, etc).
- Ordinal
- Ordered categories (grades, ranks, etc).
- Quantitative
- Interval
- distance between entities matters, but not ratios, arbitrary 0
- temperature in Celsius or Fahrenheit, time, etc.
- Ratio
- Meaningful 0 (weight, age, length, temperature in Kelvin, etc).
- Absolute
- Count (number of students, number of lines of code etc).
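The scales above differ in which comparisons and operations are meaningful on the data. A minimal sketch (the operator sets are an illustrative simplification, not a formal taxonomy):

```python
# Which operations are meaningful at each data scale (illustrative mapping).
SCALE_OPERATIONS = {
    "nominal":  {"=", "!="},                                # categories only
    "ordinal":  {"=", "!=", "<", ">"},                      # plus ordering
    "interval": {"=", "!=", "<", ">", "+", "-"},            # plus differences (arbitrary zero)
    "ratio":    {"=", "!=", "<", ">", "+", "-", "*", "/"},  # plus ratios (meaningful zero)
    "absolute": {"=", "!=", "<", ">", "+", "-", "*", "/"},  # counts
}

def allows(scale, op):
    """True if the operation is meaningful for data on this scale."""
    return op in SCALE_OPERATIONS[scale]

print(allows("interval", "/"))  # ratios of Celsius temperatures: not meaningful -> False
print(allows("ratio", "/"))     # ratios of weights: meaningful -> True
```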
VISUAL VARIABLES
Based on Bertin's Visual Variables (1967), S. Carpendale's discussion, and Munzner's categorization of marks and channels.
Representation
- How data is understood depends on its representation
- Different representations have different benefits or deficiencies
- Example: the number thirty-four
– Decimal: 34 – Roman: XXXIV – Binary: 100010
- Degree of accessibility varies with the abstraction required
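The thirty-four example can be made concrete; `to_roman` is a small helper written just for this sketch:

```python
# The same number in three representations: accessibility depends on the form.

def to_roman(n):
    """Convert a positive integer to a Roman numeral."""
    numerals = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
                (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
                (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for value, symbol in numerals:
        while n >= value:
            out.append(symbol)
            n -= value
    return "".join(out)

n = 34
print(n)               # decimal: 34
print(to_roman(n))     # Roman: XXXIV
print(format(n, "b"))  # binary: 100010
```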
Visual Encoding
Propose a combination of visual variables (marks and channels) to describe data dimensions
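As a sketch of what such a proposal might look like, a mapping of data dimensions to marks and channels (the dataset and field names are hypothetical examples):

```python
# Hypothetical visual-encoding specification: each data dimension gets a mark
# and a channel, chosen to match the dimension's scale.
encoding = {
    "country":    {"mark": "point", "channel": "color hue"},     # nominal
    "population": {"mark": "point", "channel": "size"},          # ratio
    "gdp":        {"mark": "point", "channel": "position (x)"},  # ratio
    "life_exp":   {"mark": "point", "channel": "position (y)"},  # ratio
}

for dim, spec in encoding.items():
    print(f"{dim}: {spec['mark']} / {spec['channel']}")
```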
Visual Variables
25
26
Characteristics of Visual Variables
- Selective
– Is a mark distinct from other marks?
- Associative
– Marks with associative visual variables can be perceived as a group.
- Quantitative
– Relationship between two marks can be seen as numerical.
- Order
– A change in an ordered visual variable will be perceived as more or less.
- Length
– The number of changes that can be used (often perception influenced, jnd)
Position
Size
Shape
Select a shape?
Value
Select Value!
Color
Select Color!
Orientation
Effectiveness of visual variables (Mackinlay 1986)
[Schumann 2000, after Mackinlay 1986]
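The ranking can be encoded as an ordered list; the ordering below is abridged from commonly cited versions of Mackinlay's 1986 ranking for quantitative data and should be treated as illustrative, not authoritative (the paper gives separate rankings per data type):

```python
# Abridged, illustrative effectiveness ranking of channels for quantitative
# data (earlier in the list = more effective; see Mackinlay 1986 for the
# authoritative per-data-type rankings).
QUANTITATIVE_RANKING = [
    "position", "length", "angle", "slope", "area", "volume",
    "density", "saturation", "hue",
]

def more_effective(a, b, ranking=QUANTITATIVE_RANKING):
    """True if channel a is ranked as more effective than channel b."""
    return ranking.index(a) < ranking.index(b)

print(more_effective("position", "area"))  # True
print(more_effective("hue", "length"))     # False
```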
Perception
Depth perception
37
38
Today‘s Agenda
- Visual Variables
– Data Scales – Data Representation – Bertin‘s Visual Variables
- Depth Perception
– Oculomotor cues – Pictorial cues
39
Visual Perception Theories
- Gregory (1970)
- Top-down
- Perception is constructive
- Perceptual hypotheses
- 90% of sensory information is lost in the visual pathway
- Gibson (1966)
- Bottom-up
- Direct perception
- Sensory information analyzed in a pipeline
- No need for previous knowledge
Bottom up Perception: components I
- Light and environment: optic flow
40
Bottom up perception components II
- Invariants and texture
41
Bottom up Perception: components III
- affordances
42
- Optical array
- Relative brightness
- Texture gradient
- Relative size
- Superimposition
- Height in the visual
field.
43
Depth Perception: Cues
- Oculomotor
– Convergence – Accommodation
44
Depth Perception: Cues
- Binocular
– Stereopsis/binocular disparity
45
Depth Perception: Cues
- Motion Based (video)
– Motion parallax – Kinetic depth effect
46
Structure from motion
- Kinetic Depth Effect
- The assumption of rigidity lets us infer shape as objects move/rotate
47
PICTORIAL DEPTH CUES
48
Perspective cues
- Parallel lines converge
- Distant objects appear smaller
- Textured elements become smaller with distance
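The "distant objects appear smaller" cue follows directly from pinhole projection; a minimal sketch (the focal length is an arbitrary illustration value):

```python
# Pinhole-projection relation behind perspective size cues:
# projected size = focal_length * real_size / distance.

def projected_size(real_size, distance, focal_length=1.0):
    """Projected (image-plane) size of an object under pinhole projection."""
    return focal_length * real_size / distance

# Doubling the distance halves the projected size.
near = projected_size(real_size=2.0, distance=5.0)
far = projected_size(real_size=2.0, distance=10.0)
print(near, far)  # 0.4 0.2
```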
49
Perspective and vanishing points
50
Occlusion
- The strongest depth cue
- In stereopsis, occlusion differs for each eye
51
Depth of focus
- Strong Depth Cue
- Must be coupled with user input (e.g.
point of fixation)
52
Shadows
- Important cue for height of an object
above a plane
- An indirect depth cue
53
Atmospheric depth
- Reduction in contrast of distant objects
- Exaggerated in 3D displays using what
is called proximity luminance covariance.
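A simple exponential model sketches the contrast reduction with distance (the decay constant k is an arbitrary illustration value, not a physical measurement):

```python
# Atmospheric (aerial) perspective modeled as exponential contrast loss
# with distance; k controls how hazy the atmosphere is.
import math

def apparent_contrast(intrinsic_contrast, distance, k=0.1):
    """Contrast after atmospheric attenuation (simple exponential model)."""
    return intrinsic_contrast * math.exp(-k * distance)

for d in (0, 10, 30):
    print(d, round(apparent_contrast(1.0, d), 3))
```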
54
Depth Perception: Pictorial Cues
55
Depth Perception: Cues Summary
56
Illustration techniques
Emphasizing height, width or volume
57
Visualization Tasks
- Judging 3D surfaces
- Relative Position in 3D space
- Finding 3D patterns of points
- Judging relative movement of particles
- Judging self movement
- Feeling a “sense of presence”
58
Depth Perception: Volume rendering
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
59
Livingston, Ai, Swan, Smallman
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Goals
- test depth perception
indoors and outdoors
- test AR based linear
perspective
60
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Task
- eight real referents
identified by color
- colored target appeared
at a random depth
- participant moved the target until it matched the corresponding referent
61
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Independent Variables
- Environment {indoor, outdoor}
- Tramlines {on, off}
- Grid points {on, off}
- Distance {3.83, 9.66…}
- Repetition {1,2,3,4,5}
62
Dependent Variables
- Distance off
- Time to complete
- Nasa TLX
- Subjective questions
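One plausible way to compute the normalized error listed above, assuming it is the signed placement error divided by the true referent distance (the paper's exact definition may differ):

```python
# Hedged sketch of a normalized depth-matching error: signed placement error
# as a fraction of the true referent distance.

def normalized_error(judged_distance, true_distance):
    """Signed error as a fraction of the true distance."""
    return (judged_distance - true_distance) / true_distance

# Target placed 2 m beyond a 10 m referent -> 20% overestimation.
print(normalized_error(judged_distance=12.0, true_distance=10.0))  # 0.2
```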
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Hypotheses
- outdoor environment results in greater error
- tramlines and gridpoints increase precision
- increasing errors and decreasing precision with distance
63
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Results: Effect of environment on normalized error.
64
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
Results: Effect of repetition on normalized error.
65
Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality
- No effect of Grid or Tramlines
- No difference in NASA TLX measures.
66
SITUATION AWARENESS AND MENTAL LOAD
67
Situational awareness
68
SITUATION AWARENESS
- LEVEL 1: Perception of elements in the environment
- LEVEL 2: Comprehension of the current situation
- LEVEL 3: Projection of future status
Situation awareness
- Often reduced to spatial
awareness
- Where are objects in the
environment?
- Where are they moving?
- Where will they be if
everything continues as is?
- But also with decision
making
69
- Strict form of Cognitive Walkthrough
- Tasks -> goals are analysed, and the information needed for each decision is extracted/structured
70
Situation(al) awareness
71
Human performance
- Situation Awareness
– Perception of current status of the environment. Projection of future status. (mostly real-time interactive systems)
- Workload (cognitive load)
– energetic construct representing the supply and demand of attentional and processing resources – allocated to maintain situation awareness
72
Mental load
- Also cognitive load, workload
- It assumes a limited working memory connected
to an unlimited long-term memory. Working memory retrieves, processes and integrates data.
73
Additional Efficiency Measures: Mental load
…to evaluate effort associated with the use of an interface. (used in HCI mostly)
74
Additional Efficiency Measures: Mental load
- Measurement approaches
– Primary task performance – Secondary task performance – Subjective measurements:
- NASA (R)TLX, USAF SWAT
- Physiological measurements:
- Heart rate, pupil dilation, eye movement, brain activity
- [Vidulich and Tsang, Mental Workload and Situation Awareness. In
Handbook of Human Factors and Ergonomics]
75
NASA TLX
- Self assessment of performance
- Must be delivered immediately after finishing
task
- 6 dimensions (mental demand, temporal
demand, physical demand, performance, effort, frustration)
- TLX: workload is the weighted sum of scores
- R-TLX: workload is the average of scores
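The two scoring rules can be sketched directly; the ratings and weights below are made-up example values (in the standard procedure the weights come from 15 pairwise comparisons and therefore sum to 15):

```python
# TLX: weighted sum of the six subscale ratings, divided by 15.
# R-TLX (Raw TLX): plain average of the six ratings.

DIMENSIONS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def tlx_weighted(ratings, weights):
    """Weighted TLX score; weights come from pairwise comparisons and sum to 15."""
    assert sum(weights.values()) == 15
    return sum(ratings[d] * weights[d] for d in DIMENSIONS) / 15

def rtlx(ratings):
    """Raw TLX: unweighted mean of the six ratings."""
    return sum(ratings[d] for d in DIMENSIONS) / len(DIMENSIONS)

# Example values (hypothetical participant).
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 60, "frustration": 30}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}

print(tlx_weighted(ratings, weights))
print(rtlx(ratings))
```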
76
BCI
Feature Integration Theory III: Mental load
78
BCI
- takes a biosignal
measured from a person
- predicts some aspect of the person's cognitive state
- in real-time / on a single
trial basis
79
BCI Types
- Active: derives its outputs from brain activity that the user directly controls, independent of external events, to control something
- Reactive: derives its outputs from brain activity arising in reaction to external stimuli, for controlling something
- Passive: derives its outputs from arbitrary brain activity, without the purpose of voluntary control
80
BCI: Applications
- Severe disability: tetraplegia, locked-in syndrome
- speller programs
- domotic (home automation) interfaces
81
BCI: Applications II
- Operator Monitoring:
intent detection
- Brake Intent
- Lane change intent
82
BCI Applications III: Workload
- Monitoring workload,
alertness, fatigue in highly demanding tasks (air traffic controller, pilots)
- Passive BCI
83
BCI in Evaluations [Phase I]:
84
- 1. Learn about BCI knowledge, find relevant previous
BCI studies
- 2. Research design: establish research question and
analysis metrics
- 3. Pilot: [go to next page] and return to 1 if needed
BCI in Evaluations [Phase II]
85
- Prepare your data
- Test hardware
- Prepare room and setup
- Pilot [return to draft?]
- Execute
- Calibration
- Training
- Calibration
- Testing
- Analyze
Additional Efficiency Measures: Mental load
A user study of visualization effectiveness using EEG and cognitive load
86
[Figure: EEG study protocol. A resting period, then trials of a 2 s stimulus followed by a blank, repeated 98 times, with markers for the end of rest and the start of each trial. EEG sampled at 128 Hz; baseline, intertrial baseline, and trial segments are analyzed with the S-Transform in the alpha (7.5-12.5 Hz) and theta (4-7.5 Hz) bands.]
Measuring Mental Load
- EEG based:
– Difficult to come up with study – Difficult to isolate causes of cognitive load – Objectively evaluate interpretation of visualizations?
- Subjective approach
– Include conscious reasoning in the loop – Depends on subjective opinion
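The study above used the S-Transform; as a much simpler stand-in, a plain FFT band-power estimate of the alpha and theta bands on a synthetic single-channel signal (all signal parameters are illustrative):

```python
# Simplified alpha/theta band-power estimate from one EEG channel at 128 Hz.
# The real study used the S-Transform; this FFT sketch only illustrates the idea.
import numpy as np

FS = 128  # sampling rate in Hz, as in the study

def band_power(signal, low, high, fs=FS):
    """Mean spectral power within [low, high] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs <= high)
    return spectrum[mask].mean()

# Synthetic 2 s trial: a 10 Hz (alpha-band) oscillation plus mild noise.
t = np.arange(2 * FS) / FS
rng = np.random.default_rng(0)
trial = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

alpha = band_power(trial, 7.5, 12.5)
theta = band_power(trial, 4.0, 7.5)
print(alpha > theta)  # the 10 Hz component dominates the alpha band -> True
```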
87
Readings
- Tobii eye tracking whitepaper
- NASA TLX
- Brain–Computer Interfaces: A Gentle Introduction
- [Endsley] Direct measurement of situation awareness: validity and use of SAGAT
- [Borghini et al.] Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness
- [Veas et al.] Directing attention and influencing memory with saliency modulation
88
89
Recommended reads
- Brain–Computer Interfaces: A Gentle Introduction
- [Endsley] Direct measurement of situation awareness: validity and use of SAGAT
- [Borghini et al.] Measuring neurophysiological signals in aircraft pilots and car drivers for the assessment of mental workload, fatigue and drowsiness
- NASA TLX
- Indoor vs. Outdoor Depth Perception for Mobile Augmented Reality [Livingston et al.]
- Depth perception in media design: from sensory psychology cues to interactive tools [Oliver 2001]
- Pursuit of x-ray vision for augmented reality [Livingston, Dey, Sandor, Thomas]
- Binocular disparity and the perception of depth [Qian 1997]
- Perceptually based depth-ordering enhancement for direct volume rendering [Zheng et al.]