Human Visual System Models in Computer Graphics
Tunç O. Aydın, MPI (PowerPoint PPT Presentation)

SLIDE 1

Human Visual System Models in Computer Graphics

Tunç O. Aydın

MPI Informatik Computer Graphics Department HDR and Visual Perception Group

SLIDE 2

Outline

  • Reality vs. Perception

– Why even bother modeling visual perception

  • The Human Visual System (HVS)

– How the “wetware” affects our perception

  • HVS models in Computer Graphics

– Visual Significance of Contrast
– Contrast Detection

  • Our contributions

– Key challenges

SLIDE 3

Invisible Bits & Bytes

Reference (bmp, 616K) vs. Compressed (jpg, 48K)

[Difference image, color coded: high → low]

SLIDE 4

Variations of Perception

There is no one-to-one correspondence between visual perception and reality! Work with “perceived visual data” instead of luminance or arbitrary pixel values.

SLIDE 5

The Human Visual System (HVS)

  • Experimental Methods of Vision Science

– Micro-electrode
– Radioactive Marker
– Vivisection
– Psychophysical Experimentation

SLIDE 6

HVS effects (1): Glare

  • Disability Glare (blooming)

Video courtesy of Tobias Ritschel

SLIDE 7

Disability Glare

  • Model of Light Scattering

– Point Spread Function in the spatial domain
– Optical Transfer Function in the Fourier domain [Deeley et al. 1991]

[Plot: Modulation vs. Spatial Frequency [cy/deg]]
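A minimal sketch of applying such a scattering model: filter the image with the eye's Optical Transfer Function as a multiplicative mask in the Fourier domain. The constants below follow the Deeley et al. fit as it is commonly quoted; treat them, the pupil diameter, and the pixels-per-degree value as assumptions.

```python
import numpy as np

def deeley_otf(freq_cpd, pupil_mm=3.0):
    """Optical Transfer Function of the eye (Deeley et al. 1991 fit as
    commonly quoted; constants are assumptions here).
    freq_cpd: spatial frequency in cycles per degree."""
    u = 20.9 - 2.1 * pupil_mm   # frequency scale [cy/deg]
    b = 1.3 - 0.07 * pupil_mm   # shape exponent
    return np.exp(-(freq_cpd / u) ** b)

def apply_glare(image, pix_per_deg=30.0, pupil_mm=3.0):
    """Simulate intraocular light scattering by filtering the image with
    the eye's OTF in the Fourier domain."""
    fy = np.fft.fftfreq(image.shape[0]) * pix_per_deg  # [cy/deg]
    fx = np.fft.fftfreq(image.shape[1]) * pix_per_deg
    rho = np.hypot(*np.meshgrid(fy, fx, indexing="ij"))  # radial frequency
    return np.real(np.fft.ifft2(np.fft.fft2(image) * deeley_otf(rho, pupil_mm)))
```

Since the OTF equals 1 at zero frequency, the filter spreads energy around bright pixels (blooming) while preserving the total luminance.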

SLIDE 8

HVS effects (2): Light Adaptation

[Adaptation over time: adaptation level changes from 10⁻⁴ cd/m² to 17 cd/m²]

SLIDE 9

Perceptually Uniform Space

  • Transfer function: maps luminance to Just Noticeable Differences (JNDs) in luminance. [Mantiuk et al. 2004, Aydın et al. 2008]

[Plot: Response [JND] vs. Luminance [cd/m²]]

SLIDE 10

HVS effects (3): Contrast Sensitivity

CSF(spatial frequency, adaptation level, temporal frequency, viewing distance, …)

[Plot: Contrast vs. Spatial Frequency]

SLIDE 11

Contrast Sensitivity Function (CSF)

November 6, 2011

  • Steady-state CSFS: returns the sensitivity (1/threshold contrast), given the adaptation luminance and spatial frequency [Daly 1993].
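Daly's CSF model has many terms; as an illustrative stand-in, the classic Mannos and Sakrison (1974) band-pass fit captures the same two ideas: sensitivity peaks at a few cycles per degree, and the detection threshold is simply the reciprocal of sensitivity.

```python
import numpy as np

def csf_mannos_sakrison(freq_cpd):
    """Band-pass CSF fit from Mannos & Sakrison (1974), used here as a
    simple stand-in for Daly's more elaborate 1993 model."""
    f = 0.114 * freq_cpd
    return 2.6 * (0.0192 + f) * np.exp(-f ** 1.1)

def threshold_contrast(freq_cpd):
    """Sensitivity = 1 / threshold contrast, so the detection threshold
    is the reciprocal of the CSF."""
    return 1.0 / csf_mannos_sakrison(freq_cpd)
```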

SLIDE 12

(4): Visual Channels

Cortex Transform

SLIDE 13

(5): Visual Masking

Loss of sensitivity to a signal in the presence of a “similar frequency” signal “nearby”.

SLIDE 14

Modeling Visual Masking

  • Example: JPEG’s pointwise extended masking

C′: normalized contrast
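The equation itself did not survive extraction, so here is a generic masking sketch, not JPEG's exact pointwise extended masking formula: once the normalized masker contrast exceeds its own detection threshold, the threshold for a similar nearby signal rises as a power of the masker contrast.

```python
def threshold_elevation(mask_contrast, slope=0.7):
    """Generic visual-masking sketch (illustrative slope, not JPEG's
    formula): below its own threshold (|C'| < 1) the masker has no
    effect; above it, the detection threshold is elevated by a power
    of the normalized masker contrast."""
    return max(1.0, abs(mask_contrast) ** slope)
```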

SLIDE 15

HVS Models in Graphics/Vision

Applications: Tone Mapping (HDR → LDR), Compression, Quality Assessment (“Rate the Quality”), Panorama Stitching

SLIDE 16

Visual Significance Pipeline

Pooling of per-pixel response differences between reference and test (Minkowski summation):

R̂ = ( (1/N) · Σ_{k=1}^{N} | R_k^{ref} − R_k^{tst} |^n )^{1/n}
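Assuming the pooling step takes the standard Minkowski form, R̂ = ((1/N) Σ_k |R_k^ref − R_k^tst|^n)^(1/n), it can be sketched as:

```python
import numpy as np

def minkowski_pool(r_ref, r_tst, n=2.0):
    """Pool per-pixel HVS-response differences between reference and test
    into a single visual significance score (assumed Minkowski form)."""
    diff = np.abs(np.asarray(r_ref, float) - np.asarray(r_tst, float))
    return float(np.mean(diff ** n) ** (1.0 / n))
```

Larger exponents n weight the largest local differences more heavily; n = 1 reduces to the mean absolute difference.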
SLIDE 17

Contrast Detection Pipeline

Log Contrast → Log Threshold Elevation → Contrast Difference → Probability of Detection

Probability summation over the N visual channels:

P̂ = 1 − Π_{n=1}^{N} (1 − P_n)
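Assuming the standard probability-summation rule over the visual channels, P̂ = 1 − Π_n (1 − P_n), a sketch:

```python
import numpy as np

def probability_summation(p_channels):
    """Combine per-channel detection probabilities: the signal counts as
    detected overall iff it is detected in at least one channel
    (channels assumed independent)."""
    p = np.asarray(p_channels, float)
    return float(1.0 - np.prod(1.0 - p))
```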

SLIDE 18

CONTRIBUTIONS: VISUAL SIGNIFICANCE

SLIDE 19

Visually Significant Edges

  • Key Idea: Use the magnitude of the HVS model’s response as the measure of edge strength, instead of gradient magnitude.
  • Result (1): Significant improvement in application results, especially for HDR images.
  • Result (2): Only minor improvements observed in LDR retargeting and panorama stitching.

[Aydın, Čadík, Myszkowski, Seidel. 2010 ACM TAP]

SLIDE 20

Calibration Procedure

  • CSF from the Visible Differences Predictor [Daly ’93]
  • JPEG’s pointwise extended masking
  • Calibration needed: the CSF is derived for sinusoidal stimuli, not for edges.
  • Perceptual experiment for measuring edge thresholds

SLIDE 21

Calibration Function

Calibration function maps R to R′, where R is the Visual Significance for a sinusoidal stimulus and R′ is the Visual Significance for edges.

[Plot legend: Subjective Measurements, Polynomial Fit, Metric Predictions, Ideal Metric Response, Calibrated Metric Response]

SLIDE 22

Image Retargeting

SLIDE 23

Visual Significance Maps

[Maps color coded: high → low]

SLIDE 24

Display Visibility under Dynamically Changing Illumination

  • Key Idea: Extend steady-state HVS models with a temporal adaptation model.
  • Result: A visibility class estimator integrated into software that simulates illumination inside an automobile.

[Aydın, Myszkowski, Seidel. 2009 EuroGraphics]

SLIDE 25

cvi for Steady-State Adaptation

  • Contrast vs. Intensity (cvi): threshold function that assumes perfect adaptation.
  • Contrast vs. Intensity and Adaptation (cvia): accounts for maladaptation.

Background luminance L, threshold luminance L + ∆L:

cvi:  L ↦ ∆C
cvia: (L, L_a) ↦ ∆C

SLIDE 26

cvia for Maladaptation

[Plot: cvia vs. cvi]

SLIDE 27

Adaptation over Time

[Visual Significance maps at t = 0, 0.2 s, 0.4 s, 0.8 s; color coded low → high]

SLIDE 28

Rendering Adaptation

[Pająk, Čadík, Aydın, Myszkowski, Seidel. 2010 Electronic Imaging] Dark Adaptation Bright Adaptation

SLIDE 29

CONTRIBUTIONS: CONTRAST DETECTION

SLIDE 30

Quality Assessment (IQA, VQA)

Subjective rating (“Rate the Quality”): + reliable, − high cost

SLIDE 31

Perceptually Uniform Space

  • Key Idea: Find a transformation from luminance to pixel values such that:

– An increment of 1 pixel value corresponds to 1 JND of luminance in both the HDR and LDR domains.
– The pixel values in the LDR domain are close to sRGB pixel values.

  • Result: Common LDR quality metrics (SSIM, PSNR) extended to HDR through the PU space transformation.

[Aydın, Mantiuk, Seidel. 2008 Electronic Imaging]

SLIDE 32

Perceptually Uniform Space

  • Derivation: build luminance levels spaced one JND apart:

for i = 2 to N:
    L_i = L_{i-1} + tvi(L_{i-1})

  • Fit the absolute values and sensitivity to sRGB within the CRT luminance range.
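The derivation loop above can be run directly; the tvi below is an illustrative stand-in, not the fitted threshold-vs-intensity function from the papers, and no sRGB fitting step is included.

```python
import numpy as np

def tvi(L):
    """Threshold-vs-intensity sketch: Weber-law region (dL proportional
    to L) with a constant floor at low luminance. Illustrative
    constants only."""
    return max(0.01 * L, 1e-4)

def pu_levels(L_min=1e-4, L_max=1e4):
    """Build luminance levels spaced 1 JND apart, following the slide's
    loop L_i = L_{i-1} + tvi(L_{i-1}). The index of a luminance in this
    table is its (approximately) perceptually uniform code value."""
    levels = [L_min]
    while levels[-1] < L_max:
        levels.append(levels[-1] + tvi(levels[-1]))
    return np.array(levels)

# usage: perceptually uniform code value of 100 cd/m^2
pu = pu_levels()
code_100 = int(np.searchsorted(pu, 100.0))
```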

SLIDE 33

Dynamic Range Independent IQA

  • Key Idea: Instead of the traditional contrast difference, use distortion measures that are agnostic to the dynamic range difference.
  • Result: An IQA metric that can meaningfully compare an LDR test image with an HDR reference image, and vice versa. Enables objective evaluation of tone mapping operators.

[Aydın, Mantiuk, Myszkowski, Seidel. 2008 SIGGRAPH]

SLIDE 34

HDR vs. LDR

[Figure: LDR vs. HDR luminance comparison (5×)]

SLIDE 35

Problem with Visible Differences

[Figure: HDR reference vs. LDR test; HDR-VDP detection probability maps (95% / 75% / 50% / 25%) for contrast loss and local Gaussian blur]

SLIDE 36

Distortion Measures

– Contrast Loss (reference vs. test)
– Contrast Amplification (reference vs. test)
– Contrast Reversal (reference vs. test)
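A sketch of how such dynamic-range-independent measures can be structured: combine per-pixel probabilities that local contrast is visible in the reference and in the test. The exact combination in the published metric differs; this shows the structure only, and `polarity_flipped` is a hypothetical input.

```python
import numpy as np

def classify_distortions(p_ref, p_tst, polarity_flipped):
    """Structural sketch of dynamic-range-independent distortion
    measures. p_ref / p_tst: probability that local contrast is visible
    in the reference / test image; polarity_flipped: 1 where the local
    contrast sign differs between the two, else 0."""
    p_ref = np.asarray(p_ref, float)
    p_tst = np.asarray(p_tst, float)
    loss = p_ref * (1.0 - p_tst)                  # visible in ref, lost in test
    amplification = (1.0 - p_ref) * p_tst         # invisible in ref, visible in test
    reversal = p_ref * p_tst * polarity_flipped   # visible in both, sign flipped
    return loss, amplification, reversal
```

Because the measures compare visibility rather than raw contrast values, an LDR test image can be scored against an HDR reference.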

SLIDE 37

Novel Applications

Tone Mapping Inverse Tone Mapping

SLIDE 38

Video Quality Assessment

DRIVQM [Aydın et al. 2010] vs. DRIVDP [Aydın et al. 2008] applied frame-by-frame, on HDR video (tone mapped for presentation)

SLIDE 39

Dynamic Range Independent VQA

  • Key Idea: Extend the Dynamic Range Independent pipeline with temporal aspects to evaluate video sequences.
  • Result: An objective VQM that evaluates rendering quality, temporal tone mapping, and HDR compression.

[Aydın, Čadík, Myszkowski, Seidel. 2010 SIGGRAPH Asia]

SLIDE 40

Contrast Sensitivity Function

  • CSF: (ω, ρ, L_a) → S

– ω: temporal frequency
– ρ: spatial frequency
– L_a: adaptation level
– S: sensitivity

SLIDE 41

Contrast Sensitivity Function: Spatio-temporal CSFT

SLIDE 42

Contrast Sensitivity Function: Steady-state CSFS

SLIDE 43

Contrast Sensitivity Function

CSF(ω, ρ, L_a) ≈ CSFT(ω, ρ) × f(ρ, L_a)

where CSFT(ω, ρ) = CSF(ω, ρ, L_a = 100 cd/m²) is the spatio-temporal CSF at a fixed adaptation level, and

f(ρ, L_a) = CSFS(ρ, L_a) / CSFS(ρ, 100 cd/m²)

is a correction factor derived from the steady-state CSF.
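Assuming the factorization CSF(ω, ρ, L_a) ≈ CSFT(ω, ρ) · CSFS(ρ, L_a) / CSFS(ρ, 100 cd/m²), the combination can be sketched with caller-supplied CSF fits:

```python
def csf_approx(omega, rho, L_a, csf_t, csf_s, L_ref=100.0):
    """Approximate the full spatio-temporal CSF at adaptation level L_a
    by rescaling the spatio-temporal CSF measured at a reference level.
    csf_t(omega, rho) and csf_s(rho, L) are caller-supplied fits; the
    functions and the reference level are assumptions of this sketch."""
    return csf_t(omega, rho) * csf_s(rho, L_a) / csf_s(rho, L_ref)
```

At L_a = L_ref the correction factor is 1 and the approximation reduces to the measured spatio-temporal CSF.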

SLIDE 44

Extended Cortex Transform

Sustained and transient temporal channels [Winkler 2005], combined with the spatial channels

SLIDE 45

Evaluation of Rendering Methods

[Figure: no temporal filtering vs. with temporal filtering [Herzog et al. 2010], with predicted distortion maps]

SLIDE 46

Evaluation of Rendering Qualities

[Figure: high-quality vs. low-quality rendering, with predicted distortion maps]

SLIDE 47

Evaluation of HDR Compression

Medium Compression High Compression

SLIDE 48

Validation Study

  • Noise, HDR video compression, tone mapping
  • “2.5D videos”
  • HDR-HDR, HDR-LDR, LDR-LDR
SLIDE 49

Psychophysical Validation

(1) Show videos side-by-side on an HDR display.
(2) Subjects mark regions where they detect differences.

[Čadík, Aydın, Myszkowski, Seidel. 2011 Electronic Imaging]

SLIDE 50

Validation Study Results

Stimulus   DRIVQM    PDM       HDRVDP    DRIVDP
1          0.765    -0.0147    0.591     0.488
2          0.883     0.686     0.673     0.859
3          0.843     0.886     0.0769    0.865
4          0.815     0.0205    0.211    -0.0654
5          0.844     0.565     0.803     0.689
6          0.761    -0.462     0.709     0.299
7          0.879     0.155     0.882     0.924
8          0.733     0.109     0.339     0.393
9          0.753     0.368     0.473     0.617
Average    0.809     0.257     0.528     0.563

SLIDE 51

Conclusion

  • Starting Intuition: Work on “perceived” visual data instead of “physical” visual data.

SLIDE 52

Limitations and Future Work

  • What about the rest of the brain?

– Visual Attention
– Prior Knowledge
– Gestalt Properties
– Free will
– …

  • User interaction?
  • Depth perception
SLIDE 53

Acknowledgements

  • Advisors

– Karol Myszkowski, Hans-Peter Seidel

  • Collaborators

– Martin Čadík, Rafał Mantiuk, Dawid Pająk, Makoto Okabe

  • AG4 Members

– Current and past

  • AG4 Staff

– Sabine Budde, Ellen Fries, Conny Liegl, Svetlana Borodina, Sonja Lienard.

  • Thesis Committee

– Philipp Slusallek, Jan Kautz, Thorsten Thormählen.

  • Family

– Süheyla and Vahit Aydın, Irem Dumlupınar

SLIDE 54

THANK YOU.

Tunç O. Aydın <tunc@mpii.de>