Human Visual System Models in Computer Graphics
Tunç O. Aydın
MPI Informatik Computer Graphics Department HDR and Visual Perception Group
Outline
– Reality vs. perception: why even bother modeling visual perception
– The human visual system: how the “wetware” affects our perception
– Visual significance of contrast: contrast detection
– Key challenges
Example: Reference (BMP, 616 KB) vs. Compressed (JPEG, 48 KB); difference image (color coded, low-to-high scale)
There is no one-to-one correspondence between visual perception and reality, so we work with “perceived visual data” instead of luminance or arbitrary pixel values.
– Micro-electrode recordings
– Radioactive markers
– Vivisection
– Psychophysical experimentation
Video Courtesy of Tobias Ritschel
– Point Spread Function in the spatial domain
– Optical Transfer Function in the Fourier domain [Deeley et al. 1991]
(Plot: modulation vs. spatial frequency [cy/deg])
(Plot: HVS response [JND] vs. luminance [cd/m²] over time, at adaptation levels of 10⁻⁴ cd/m² and 17 cd/m²)
CSF(spatial frequency, adaptation level, temporal freq., viewing dist, …)
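To illustrate just the spatial-frequency dependence of the CSF above, here is a minimal sketch using the classic Mannos and Sakrison (1974) fit. It is only a stand-in: it ignores adaptation level, temporal frequency, viewing distance, and the other arguments listed above.

```python
import math

def csf_mannos_sakrison(f):
    """Normalized spatial contrast sensitivity, Mannos & Sakrison (1974) fit.

    f: spatial frequency in cycles/degree. A rough stand-in for the full
    CSF(spatial frequency, adaptation level, temporal frequency, ...).
    """
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

# Band-pass shape: sensitivity peaks at mid frequencies and falls off
# toward both very low and very high frequencies.
peak = max(range(1, 60), key=csf_mannos_sakrison)
```

The band-pass shape is why mid-frequency contrast changes tend to be the most visible ones.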
(Plot: contrast threshold vs. spatial frequency)
November 6, 2011
Cortex Transform
Masking: loss of sensitivity to a signal in the presence of a similar-frequency signal nearby.
C’: Normalized Contrast
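Masking is often modeled with a divisive-normalization transducer applied to the normalized contrast C′. The sketch below uses hypothetical exponents and constants for illustration; it is not the exact transducer used in any particular metric.

```python
def masked_response(c_signal, c_maskers, p=2.4, q=2.0, k=1.0):
    """Toy divisive-normalization transducer (hypothetical parameters).

    c_signal: CSF-normalized contrast C' of the signal band.
    c_maskers: C' values of nearby bands/locations acting as maskers.
    The response grows with signal contrast but is suppressed by masker
    energy, reproducing the loss of sensitivity described above.
    """
    pool = sum(abs(c) ** q for c in c_maskers)
    return abs(c_signal) ** p / (k + pool)

# Same signal contrast, stronger maskers -> weaker response (masking).
r_unmasked = masked_response(2.0, [])
r_masked = masked_response(2.0, [2.0, 2.0])
```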
(Pipeline: HDR → tone mapping → LDR; compression)
Rate the Quality
Applications: quality assessment, panorama stitching
R̂ = (1/N) · Σ_{k=1..N} | R_k^ref − R_k^tst |
(Plots: log threshold elevation vs. log contrast; probability of detection vs. contrast difference)
Probability of Detection
P̂ = 1 − Π_{n=1..N} (1 − P_n)
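The per-band detection probabilities are pooled by probability summation; a minimal sketch, assuming independent bands:

```python
def pooled_detection_probability(per_band_probs):
    """Probability summation over N bands/channels:
    P = 1 - prod_{n=1..N} (1 - P_n).
    Assumes the per-band detections are statistically independent."""
    miss = 1.0  # probability that no band detects the distortion
    for p in per_band_probs:
        miss *= (1.0 - p)
    return 1.0 - miss

# Two bands at 50% each -> 75% overall detection probability.
```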
Use the magnitude of the HVS model’s response as the measure of edge strength, instead of the gradient magnitude. This improves application results, especially for HDR images; improvements were observed in LDR retargeting and panorama stitching.
[Aydın, Čadík, Myszkowski, Seidel. 2010 ACM TAP]
The predictor [Daly ’93] is calibrated for sinusoidal stimuli, not for edges. We recalibrate it by measuring edge thresholds.
Calibration function: maps R (visual significance for a sinusoidal stimulus) to R′ (visual significance for edges), via a polynomial fit of the metric predictions to subjective measurements (ideal vs. calibrated metric response).
We extend steady-state HVS models with a temporal adaptation model. The estimator is integrated into software that simulates illumination inside an automobile.
[Aydın, Myszkowski, Seidel. 2009 EuroGraphics]
Background luminance L; threshold luminance L + ∆L.
– C = cvi(L): the contrast-versus-intensity (cvi) function assumes perfect adaptation.
– C = cvia(L, La): additionally accounts for maladaptation at adaptation level La.
(Maps: visual significance at t = 0, 0.2 s, 0.4 s, 0.8 s)
[Pająk, Čadík, Aydın, Myszkowski, Seidel. 2010 Electronic Imaging]
(Examples: dark adaptation, bright adaptation)
A transformation from luminance to pixel values such that:
– an increment of 1 pixel value corresponds to 1 JND of luminance in both the HDR and LDR domains;
– the pixel values in the LDR domain are close to sRGB pixel values.
Quality metrics (SSIM, PSNR) extended to HDR through the PU space transformation
[Aydın, Mantiuk, Seidel. 2008 Electronic Imaging]
for i = 2 to N
    L_i = L_{i-1} + tvi(L_{i-1})
end for
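The loop above can be sketched in Python. The `tvi` used here is a placeholder Weber-law threshold function (1% of the background luminance), not the measured threshold-versus-intensity data a real perceptually uniform encoding would use; the luminance limits are likewise illustrative.

```python
def build_jnd_scale(l_min=1e-5, l_max=1e8, tvi=None, n_max=100000):
    """Build the luminance scale L_i where consecutive entries differ
    by one JND: L_i = L_{i-1} + tvi(L_{i-1}).

    tvi maps background luminance to the detection threshold Delta L;
    here a hypothetical Weber fraction of 1% stands in for measured data.
    """
    if tvi is None:
        tvi = lambda l: 0.01 * l  # placeholder Weber-law threshold
    levels = [l_min]
    while levels[-1] < l_max and len(levels) < n_max:
        levels.append(levels[-1] + tvi(levels[-1]))
    return levels

# A pixel value is then an index into `levels`, so an increment of
# 1 pixel value corresponds to 1 JND of luminance.
```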
The encoding is anchored to sRGB within the CRT luminance range.
Instead of the traditional contrast difference, use distortion measures that are agnostic to the dynamic range difference. This makes it possible to meaningfully compare an LDR test image with an HDR reference image, and vice versa, and enables the evaluation of tone mapping operators.
[Aydın, Mantiuk, Myszkowski, Seidel. 2008 SIGGRAPH]
(HDR reference vs. LDR test; HDR-VDP detection probability scale: 95%, 75%, 50%, 25%; example distortions: contrast loss, local Gaussian blur)
Distortion types (reference vs. test): contrast loss, contrast amplification, contrast reversal.
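A simplified sketch of how these three distortion types can be classified per pixel from visibility predictions for the reference and test images. The threshold and the polarity handling are illustrative, not the metric’s exact rules:

```python
def classify_distortion(p_ref, p_test, sign_ref=1, sign_test=1, thr=0.5):
    """Classify structural change at a pixel.

    p_ref / p_test: predicted probability that contrast is visible in
    the reference / test image; sign_ref / sign_test: contrast polarity.
    """
    visible_ref = p_ref >= thr
    visible_test = p_test >= thr
    if visible_ref and not visible_test:
        return "contrast loss"          # structure disappears in test
    if not visible_ref and visible_test:
        return "contrast amplification" # structure appears in test
    if visible_ref and visible_test and sign_ref != sign_test:
        return "contrast reversal"      # visible but flipped polarity
    return "no change"
```

Because only visibility (not raw contrast magnitude) is compared, the same rules apply whether reference and test are HDR or LDR.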
Tone Mapping Inverse Tone Mapping
DRIVQM [Aydin et al. 2010] DRIVDP [Aydin et al. 2008] (frame-by-frame) HDR Video
(tone mapped for presentation)
A dynamic range independent pipeline extended with temporal aspects to evaluate video sequences: a VQM that evaluates rendering quality, temporal tone mapping, and HDR compression.
[Aydın, Čadík, Myszkowski, Seidel. 2010 SIGGRAPH Asia]
Spatio-temporal CSF_T and steady-state CSF_S:
– ω: temporal frequency
– ρ: spatial frequency
– La: adaptation level
– S: sensitivity
CSF(ω, ρ, La = L) ≈ CSF_T(ω, ρ) · f(ρ, La)
f(ρ, La) = CSF_S(ρ, La) / CSF_S(ρ, 100 cd/m²)
CSF_T(ω, ρ) = CSF(ω, ρ, La = 100 cd/m²)
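The decomposition can be written directly in code. Here `csf_t` and `csf_s` are assumed callables standing in for the measured spatio-temporal and steady-state fits:

```python
def csf_full(omega, rho, l_a, csf_t, csf_s):
    """Approximate the full CSF as CSF_T(omega, rho) * f(rho, La),
    with f(rho, La) = CSF_S(rho, La) / CSF_S(rho, 100 cd/m^2).
    CSF_T is assumed to be measured at La = 100 cd/m^2."""
    return csf_t(omega, rho) * csf_s(rho, l_a) / csf_s(rho, 100.0)

# At La = 100 cd/m^2 the luminance correction f is exactly 1, so the
# full CSF reduces to CSF_T alone, regardless of the choice of csf_s.
```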
Sustained and transient temporal channels [Winkler 2005], followed by spatial decomposition.
No temporal filtering vs. with temporal filtering [Herzog et al. 2010]; predicted distortion map
High quality vs. low quality; predicted distortion map
Medium compression vs. high compression
(1) Show videos side-by-side
(2) Subjects mark regions where they detect differences [Čadík, Aydın, Myszkowski, Seidel. 2011 Electronic Imaging]
Stimulus   DRIVQM   PDM      HDRVDP   DRIVDP
1          0.765    –        0.591    0.488
2          0.883    0.686    0.673    0.859
3          0.843    0.886    0.0769   0.865
4          0.815    –        0.0205   0.211
5          0.844    0.565    0.803    0.689
6          0.761    –        0.709    0.299
7          0.879    0.155    0.882    0.924
8          0.733    0.109    0.339    0.393
9          0.753    0.368    0.473    0.617
Average    0.809    0.257    0.528    0.563
Model “perceived” visual data instead of “physical” visual data.
– Visual Attention – Prior Knowledge – Gestalt Properties – Free will – …
– Karol Myszkowski, Hans-Peter Seidel
– Martin Čadík, Rafał Mantiuk, Dawid Pająk, Makoto Okabe
– Current and past
– Sabine Budde, Ellen Fries, Conny Liegl, Svetlana Borodina, Sonja Lienard.
– Philipp Slusallek, Jan Kautz, Thorsten Thormählen.
– Süheyla and Vahit Aydın, Irem Dumlupınar
Tunç O. Aydın <tunc@mpii.de>