
Lecture 9 Perceptual Image Quality Assessment

Lin ZHANG, PhD School of Software Engineering Tongji University Fall 2016


Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Summary

Problem Definition

Please rank these images (a)–(d) according to their visual quality.

Ranking is a subjective process. Can we devise algorithms to measure image quality whose results are highly consistent with human judgments? That is our goal in this lecture!


Problem Definition

  • The goal of IQA research is to develop objective metrics for measuring image quality whose results are consistent with subjective judgments
  • Classification of the IQA problem
  • Full reference IQA (FR‐IQA)
  • The distortion‐free image is given. Such an image is considered to have perfect quality and is called the reference image. A set of its distorted versions is also provided. Your task is to devise an algorithm to evaluate the perceptual quality of the distorted images


Problem Definition

  • The goal of IQA research is to develop objective metrics for measuring image quality whose results are consistent with subjective judgments
  • Classification of the IQA problem
  • Reduced reference IQA (RR‐IQA)
  • The distorted image is given; the reference image is not available; however, partial information about the reference image is known



Problem Definition

  • The goal of IQA research is to develop objective metrics for measuring image quality whose results are consistent with subjective judgments
  • Classification of the IQA problem
  • No reference IQA (NR‐IQA)
  • Only the distorted image is given. More accurately, in such a case we cannot call it a "distorted" image, since we do not know the corresponding distortion‐free reference image. You need to design an algorithm to evaluate the quality of the given image


Contents

  • Problem definition
  • Full reference image quality assessment
  • Application scenarios
  • Problem of the classical FR‐IQA metric—MSE
  • Error visibility method
  • Structural Similarity (SSIM)
  • Feature Similarity (FSIM)
  • Performance metrics
  • No reference image quality assessment
  • Summary

Application Scenarios

  • Quantify the performance of de‐noising algorithms

A reference image I is corrupted by simulated noise to give I′; de‐noising algorithms algoA and algoB produce results I_A and I_B. Which algorithm is better? We need to design a metric function f with the following property:

if f(I_A, I) > f(I_B, I), then I_A has better quality than I_B; otherwise, I_B has better quality than I_A.

Such an f is our desired FR‐IQA metric


Application Scenarios

  • Quantify the performance of compression algorithms

A reference image I is compressed by algoA and algoB, giving results I_A and I_B. Which compression algorithm is better? We also need an FR‐IQA metric


Application Scenarios

  • FR‐IQA metrics can usually be used in the following applications:
  • Measuring the performance of image enhancement or restoration algorithms, such as algorithms for denoising, deblurring, dehazing, etc.
  • Measuring the performance of image compression algorithms
  • Adjusting the parameters of some image processing algorithms



Problem of the Classical FR‐IQA Metric—MSE

  • MSE (mean squared error) is a classical metric to measure the similarity between two image signals
  • MSE is a point‐to‐point based measure
  • Advantages
  • Easy to compute
  • Easy to optimize
  • Clear physical meaning: energy
  • What’s the problem?

For two images x and y, vectorized with N pixels each:

$$\mathrm{MSE}(\mathbf{x}, \mathbf{y}) = \left[\frac{1}{N}\sum_{i=1}^{N}\left(x_i - y_i\right)^2\right]^{1/2}$$
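As a quick sanity check of the formula above, here is a minimal NumPy sketch (the function name `mse` and the toy images are illustrative, not from the slides):

```python
import numpy as np

def mse(x, y):
    """Point-to-point distance [(1/N) * sum (x_i - y_i)^2]^(1/2)
    between two equal-size images, as on the slide."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    return np.sqrt(np.mean((x - y) ** 2))

# A uniform +30 shift scores 30 no matter what the image contains:
# MSE ignores where (and in which direction) the errors occur.
a = np.full((8, 8), 100.0)
print(mse(a, a + 30))  # 30.0
```

The uniform-shift behavior shown in the last lines is exactly the sign-blindness the following slides criticize.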


Problem of the Classical FR‐IQA Metric—MSE

  • MSE is point‐to‐point based and does not care about pixel ordering

I2 is obtained from I1 by reordering its pixels, and I4 from I3 in the same way. Both pairs give MSE = 1600, yet MSSIM = 0.6373 for (I1, I2) and MSSIM = 0.0420 for (I3, I4). MSE thinks that the similarity between I1 and I2 and the similarity between I3 and I4 are the same; this contradicts human intuition


Problem of the Classical FR‐IQA Metric—MSE

  • MSE does not care about the sign of the error

Adding +30 to every pixel and adding (rand sign)·30 to every pixel both yield MSE = 900, but SSIM = 0.9329 for the uniform shift versus SSIM = 0.2470 for the random‐sign distortion


Problem of the Classical FR‐IQA Metric—MSE

  • Mean squared error treats signal samples as if they were independent
  • Natural images are highly structured: their samples are highly correlated

This conflict is the root of MSE’s failure as a perceptual quality metric



Error Visibility Method: Idea

distorted signal = reference signal + error signal; quantify the error signal perceptually

  • Representative work
  • Frequency weighting [Mannos & Sakrison ’74]
  • Sarnoff model [Lubin ’93]
  • Visible difference predictor [Daly ’93]
  • Perceptual image distortion [Teo & Heeger ’94]
  • DCT‐based method [Watson ’93]
  • Wavelet‐based method [Safranek ’89, Watson et al. ’97]


Error Visibility Method: Framework

  • Goal: simulate relevant early HVS components
  • Structures motivated by physiology
  • Parameters determined by psychophysics

Error Visibility Method—HVS Properties Modeling

  • Contrast sensitivity function (CSF)

[Figure: normalized sensitivity versus spatial frequency (cycles/degree), on log–log axes]

In this image, the contrast amplitude depends only on the vertical coordinate, while the spatial frequency depends on the horizontal coordinate. Observe that at medium frequencies you need less contrast than at high or low frequencies to detect the sinusoidal fluctuation


Error Visibility Method—HVS Properties Modeling

  • Masking

[Figure: the same error signal is highly visible under weak masking but hardly visible under strong masking]


Error Visibility Method—Difficulties

  • Natural image complexity problem
  • Based on simple‐pattern psychophysics
  • Quality definition problem
  • Error visibility = quality ?


Structural Similarity (SSIM)

Purpose of vision: extract structural information. Idea: quantify structural distortion

  • Questions:
  • How to define structural/nonstructural distortions?
  • How to separate structural/nonstructural distortions?

Structural Similarity (SSIM)

  • What are structural/non‐structural distortions?

Non‐structural distortions: luminance change, contrast change, Gamma distortion, spatial shift
Structural distortions: JPEG blocking, wavelet ringing, blurring, noise contamination


Structural Similarity (SSIM)

  • What are structural/non‐structural distortions?

[Figure: the distorted image is decomposed relative to the original image into a nonstructural distortion component plus a structural distortion component]

Structural Similarity (SSIM)—Computation

For two corresponding local patches x and y in two images, a luminance comparison $l(\mathbf{x}, \mathbf{y})$, a contrast comparison $c(\mathbf{x}, \mathbf{y})$, and a structure comparison $s(\mathbf{x}, \mathbf{y})$ are computed and combined into a similarity measure.

Assume that x and y are vectorized as $\mathbf{x} = [x_1, x_2, \ldots, x_N]$ and $\mathbf{y} = [y_1, y_2, \ldots, y_N]$. Then

$$\mu_x = \frac{1}{N}\sum_{i=1}^{N} x_i$$

is the mean intensity of x ($\mu_y$ likewise for y),

$$\sigma_x = \left[\frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu_x\right)^2\right]^{1/2}$$

is the standard deviation of x ($\sigma_y$ likewise for y), and

$$\sigma_{xy} = \frac{1}{N}\sum_{i=1}^{N}\left(x_i - \mu_x\right)\left(y_i - \mu_y\right)$$

is the covariance of x and y.

Structural Similarity (SSIM)—Computation

$$l(\mathbf{x}, \mathbf{y}) = \frac{2\mu_x \mu_y + C_1}{\mu_x^2 + \mu_y^2 + C_1}, \quad c(\mathbf{x}, \mathbf{y}) = \frac{2\sigma_x \sigma_y + C_2}{\sigma_x^2 + \sigma_y^2 + C_2}, \quad s(\mathbf{x}, \mathbf{y}) = \frac{\sigma_{xy} + C_3}{\sigma_x \sigma_y + C_3}$$

Then, the structural similarity between x and y is defined as

$$\mathrm{SSIM}(\mathbf{x}, \mathbf{y}) = l(\mathbf{x}, \mathbf{y})\, c(\mathbf{x}, \mathbf{y})\, s(\mathbf{x}, \mathbf{y}) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$

$C_1, C_2, C_3$ are fixed constants, and usually one sets $C_3 = C_2/2$, which collapses the product $l \cdot c \cdot s$ to the two‐factor form above.

If the image contains M local patches (defined by a sliding window), the overall image quality is

$$\mathrm{SSIM} = \frac{1}{M}\sum_{i=1}^{M} \mathrm{SSIM}(\mathbf{x}_i, \mathbf{y}_i)$$

Structural Similarity (SSIM)—Computation

[Wang & Bovik, IEEE Signal Proc. Letters, ’02] [Wang et al., IEEE Trans. Image Proc., ’04]

The distortion/similarity measure is computed within a sliding window moved over the original and distorted images, producing a quality map; pooling the quality map gives the overall quality score:

$$\mathrm{SSIM}(\mathbf{x}, \mathbf{y}) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)}$$
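The single-formula SSIM above is easy to sketch for one pair of vectorized patches. The constants below follow the common choice $C_1 = (K_1 L)^2$, $C_2 = (K_2 L)^2$ with $K_1 = 0.01$, $K_2 = 0.03$, $L = 255$, which is an assumption, not stated on these slides:

```python
import numpy as np

def ssim_patch(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """SSIM for one pair of patches, using the collapsed two-factor
    form (C3 = C2/2 makes l*c*s reduce to this expression)."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()           # sigma_x^2, sigma_y^2
    cov = ((x - mx) * (y - my)).mean()  # sigma_xy
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

patch = np.arange(64, dtype=float)
print(ssim_patch(patch, patch))  # 1.0 for identical patches
```

A full implementation would evaluate this inside a sliding window and average the resulting map, as the slide describes.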


Structural Similarity (SSIM)—Computation

[Figure: original image, Gaussian‐noise‐corrupted image, absolute error map, and SSIM index map]


Structural Similarity (SSIM)—Computation

[Figure: original image, JPEG2000‐compressed image, SSIM index map, and absolute error map]


Structural Similarity (SSIM)—Computation

[Figure: original image, JPEG‐compressed image, SSIM index map, and absolute error map]


Comparison between MSE and SSIM

[Figure: the original image (MSE = 0, SSIM = 1) and five distorted versions, each with MSE = 309 but with SSIM values of 0.928, 0.987, 0.580, 0.641, and 0.730]

Comparison between MSE and SSIM

[Figure: starting from an initial image, iterating along the equal‐MSE contour around the reference image converges either to the best‐SSIM image or to the worst‐SSIM image]


Summary about SSIM

  • The structural similarity (SSIM) metric measures the structural distortions of images
  • In implementation, SSIM measures the similarity of two local patches from three aspects: luminance, contrast, and structure
  • The quality scores predicted by SSIM are much more consistent with human judgments than those of MSE
  • SSIM is now widely used to gauge image processing algorithms

In the next section, you will encounter an even more powerful IQA metric, FSIM


Contents

  • Problem definition
  • Full reference image quality assessment
  • Application scenarios
  • Problem of the classical FR‐IQA metric—MSE
  • Error visibility method
  • Structural Similarity (SSIM)
  • Feature Similarity (FSIM)
  • Phase congruency
  • Feature similarity index (FSIM)
  • Performance metrics
  • No reference image quality assessment
  • Summary

Phase Congruency

  • Why is phase important?

Fourier transform:

$$F(u) = \int_{-\infty}^{\infty} f(x)\, e^{-i 2\pi u x}\, dx = A(u)\, e^{i\phi(u)}$$

$\phi(u)$ is called the Fourier phase or the global phase

  • Phase is defined for a specified frequency
  • The Fourier phase indicates the relative position of the frequency components
  • Phase is a real number between $-\pi$ and $\pi$

Phase Congruency

  • Why is phase important?

[Figure: reconstruction results from Fourier amplitude alone, from Fourier phase alone, and from Fourier phase combined with Hilbert amplitude]


Phase Congruency

  • Local phase analysis

Question: What are the frequency components (and the associated phases) at a certain position in a real signal f(x) ? Fourier transforms cannot answer such questions


Phase Congruency

  • Local phase analysis

An analytic signal needs to be constructed:

$$f_A(x) = f(x) + i\, f_H(x)$$

where $f_H(x) = h(x) * f(x)$, with $h(x) = \frac{1}{\pi x}$, is called the Hilbert transform of $f(x)$

Instantaneous phase: $\phi(x) = \arctan2\left(f_H(x),\, f(x)\right)$

Instantaneous amplitude: $A(x) = \sqrt{f^2(x) + f_H^2(x)}$

$\phi(x)$ seems local, but it is not really so, since the Hilbert transform is a global transform
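The instantaneous amplitude and phase can be computed numerically by building the analytic signal in the frequency domain (suppressing the negative frequencies), a standard construction of the Hilbert transform; this NumPy sketch is illustrative:

```python
import numpy as np

def analytic_signal(f):
    """Return f_A = f + i*f_H by zeroing negative frequencies
    and doubling positive ones in the DFT."""
    N = len(f)
    F = np.fft.fft(f)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(F * h)

t = np.linspace(0, 1, 256, endpoint=False)
fa = analytic_signal(np.cos(2 * np.pi * 8 * t))
A = np.abs(fa)       # instantaneous amplitude A(x)
phi = np.angle(fa)   # instantaneous phase arctan2(f_H, f)
```

For a pure cosine the analytic signal is a complex exponential, so the instantaneous amplitude is constant and the imaginary part is the matching sine.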


Phase Congruency

  • Local phase analysis

Thus, local complex filters whose responses are analytic signals are used instead. That is, if $g(x) = g_e(x) + i\, g_o(x)$ is a complex filter and

$$g(x) * f(x) = g_e(x) * f(x) + i\, g_o(x) * f(x)$$

is an analytic signal, then the local phase (instead of the instantaneous phase) of f(x) is defined as

$$\phi(x) = \arctan2\left(g_o(x) * f(x),\; g_e(x) * f(x)\right)$$

The local amplitude is

$$A(x) = \sqrt{\left(g_e(x) * f(x)\right)^2 + \left(g_o(x) * f(x)\right)^2}$$


Phase Congruency

  • Local phase analysis

If $g(x) = g_e(x) + i\, g_o(x)$ is a complex filter and $g(x) * f(x) = g_e(x) * f(x) + i\, g_o(x) * f(x)$ is an analytic signal, then $g_e$ and $g_o$ are called a quadrature pair

What are the commonly used quadrature pair filters? See the next sections!


Phase Congruency

  • Gabor filter

$$G(x, y) = \frac{1}{2\pi \sigma_x \sigma_y} \exp\left[-\left(\frac{x'^2}{2\sigma_x^2} + \frac{y'^2}{2\sigma_y^2}\right)\right] \exp\left(i\, 2\pi f x'\right) \quad (1)$$

where

$$x' = x\cos\theta + y\sin\theta, \quad y' = -x\sin\theta + y\cos\theta$$
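Eq. (1) can be realized directly on a sampled grid. Below is a minimal sketch (the filter size and parameter values are arbitrary choices for illustration); its real and imaginary parts are the even‐ and odd‐symmetric members of the quadrature pair:

```python
import numpy as np

def gabor(size=31, f=0.1, theta=0.0, sigma_x=4.0, sigma_y=4.0):
    """Complex 2-D Gabor filter: Gaussian envelope times a complex
    sinusoid along the rotated x' axis, as in Eq. (1)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # x'
    yr = -x * np.sin(theta) + y * np.cos(theta)   # y'
    env = np.exp(-(xr ** 2 / (2 * sigma_x ** 2) +
                   yr ** 2 / (2 * sigma_y ** 2)))
    env /= 2 * np.pi * sigma_x * sigma_y
    return env * np.exp(2j * np.pi * f * xr)

g = gabor()
# g.real is even-symmetric, g.imag is odd-symmetric about the center
```

Convolving an image with `g` and taking the angle of the complex response gives the local phase defined on the previous slides.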


Phase Congruency

  • Gabor filter, Eq. (1)

[Portraits: John Daugman, University of Cambridge, UK; Dennis Gabor, 1900–1979, Nobel Prize winner]


Phase Congruency

  • Gabor filter, Eq. (1)

[Figure: Gabor‐like receptive field profiles in the primary visual cortex]


Phase Congruency

  • Gabor filter, Eq. (1)

  • J. G. Daugman, Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two‐dimensional visual cortical filters, Journal of the Optical Society of America A, 2(7): 1160–1169, 1985.


Phase Congruency

  • Log‐Gabor filter
  • It is also a quadrature pair filter; it is defined in the frequency domain:

$$G(\omega, \theta_j) = \exp\left(-\frac{\left[\log\left(\omega/\omega_0\right)\right]^2}{2\sigma_r^2}\right) \exp\left(-\frac{(\theta - \theta_j)^2}{2\sigma_\theta^2}\right)$$

where $\theta_j = j\pi/J$ is the orientation angle, $\omega_0$ is the center frequency, $\sigma_r$ controls the filter’s radial bandwidth, and $\sigma_\theta$ determines the angular bandwidth. The first factor is the radial part and the second is the angular part of the log‐Gabor filter
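A sketch of the radial part in NumPy (the values of $\omega_0$ and $\sigma_r$ are illustrative; $\sigma_r \approx 0.55$ is a common bandwidth choice, not taken from this slide):

```python
import numpy as np

def log_gabor_radial(omega, omega0=0.1, sigma_r=0.55):
    """exp(-[log(omega/omega0)]^2 / (2*sigma_r^2)); the log is
    undefined at omega = 0, so the DC response is set to zero."""
    omega = np.asarray(omega, dtype=float)
    out = np.zeros_like(omega)
    pos = omega > 0
    out[pos] = np.exp(-(np.log(omega[pos] / omega0) ** 2)
                      / (2 * sigma_r ** 2))
    return out

w = np.linspace(0, 0.5, 501)
r = log_gabor_radial(w)
# the response peaks at the center frequency omega0 and is 0 at DC
```

Unlike the Gabor filter, this transfer function has no DC component, which is one reason it is preferred for phase congruency computation.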


Phase Congruency—Motivation

  • Gradient‐based feature detectors
  • Roberts, Prewitt, Sobel, Canny, etc.
  • Find maxima in the gradient map
  • Sensitive to illumination and contrast variations
  • Poor localization, especially with scale analysis
  • Difficult to use—the threshold problem: one does not know in advance what level of edge strength corresponds to a significant feature



Phase Congruency—Motivation

[Figure: Harris corner detection results under two different parameter settings]


Phase Congruency—Motivation

  • Phase congruency is proposed to overcome those drawbacks
  • Totally based on the local phase information
  • A more general framework for feature definition
  • Invariant to contrast and illumination variation
  • Offers the promise of allowing one to specify universal feature thresholds


Phase Congruency—Definition

  • First appears in [1]
  • It is more like the human visual system
  • It postulates that features are perceived at points of maximum phase congruency (all the following discussions will be based on this observation)

[1] M.C. Morrone, J. Ross, D.C. Burr, and R. Owens, Mach bands are phase dependent, Nature, vol. 324, pp. 250‐253, 1986


Phase Congruency—Definition

  • Features from the PC view: the Fourier components are all in phase in the two cases shown


Phase Congruency—Computation

  • The now widely used method to compute phase congruency is that of [1]
  • In [1], Kovesi proposed a framework to compute PC using quadrature pair filters

[1] P. Kovesi, Image features from phase congruency, Videre: Journal of Computer Vision Research, vol. 1, pp. 1‐26, 1999


Phase Congruency—Computation

Let $M_n^e$ and $M_n^o$ denote the even‐symmetric and odd‐symmetric wavelets at scale n. Their responses to the image I are

$$e_n(x) = I(x) * M_n^e, \quad o_n(x) = I(x) * M_n^o$$

The amplitude and phase of the transform at a given wavelet scale are given by

$$A_n(x) = \sqrt{e_n^2(x) + o_n^2(x)}, \quad \phi_n(x) = \arctan\left(o_n(x) / e_n(x)\right)$$

Let $F(x) = \sum_n e_n(x)$ and $H(x) = \sum_n o_n(x)$, and define $E(x) = \sqrt{F^2(x) + H^2(x)}$, so that $E(x) = PC(x) \sum_n A_n(x)$. Phase congruency can then be estimated as

$$PC(x) = \frac{E(x)}{\sum_n A_n(x)}$$
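Once the per-scale responses $e_n(x)$, $o_n(x)$ are available, the estimate above is a few lines of code. The sketch below takes those responses as given arrays (computing them would require the quadrature filter bank) and adds a small `eps` to avoid division by zero, an implementation detail not on the slide:

```python
import numpy as np

def phase_congruency(e, o, eps=1e-4):
    """PC(x) = E(x) / (sum_n A_n(x) + eps), with F = sum_n e_n,
    H = sum_n o_n, E = sqrt(F^2 + H^2), A_n = sqrt(e_n^2 + o_n^2).
    e, o: arrays of shape (n_scales, n_positions)."""
    e = np.asarray(e, dtype=float)
    o = np.asarray(o, dtype=float)
    F = e.sum(axis=0)
    H = o.sum(axis=0)
    E = np.sqrt(F ** 2 + H ** 2)
    A = np.sqrt(e ** 2 + o ** 2).sum(axis=0)
    return E / (A + eps)

# two scales, perfectly in phase -> PC close to 1
e_ip = np.array([[3.0], [4.0]]) * np.cos(0.5)
o_ip = np.array([[3.0], [4.0]]) * np.sin(0.5)
print(phase_congruency(e_ip, o_ip))  # close to 1
```

When the scales are in phase, the vector sums F and H add coherently and PC approaches 1; when they cancel, PC drops toward 0.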


Phase Congruency—Example

[Figures: example images together with their computed phase congruency maps]



Feature Similarity Index (FSIM)

  • A state‐of‐the‐art method proposed in [1]

[1] Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang, FSIM: A feature similarity index for image quality assessment, IEEE Trans. Image Processing, vol. 20, pp. 2378‐2386, 2011


Feature Similarity Index (FSIM)

  • A state‐of‐the‐art method proposed in [1]
  • Motivations
  • Low‐level feature inspired
  • Visual information is often redundant
  • Low‐level features convey the most crucial information
  • Image degradations will lead to changes in image low‐level features

Thus, an IQA index can be devised by comparing the low‐level features of the reference image and the distorted image. What kinds of features?


Feature Similarity Index (FSIM)

  • Phase congruency
  • Supported by physiological and psychophysical evidence
  • Measures the significance of a local structure
  • Gradient magnitude
  • PC is contrast invariant; however, local contrast does affect the perceived image quality
  • Thus, we have to compensate for the contrast
  • Gradient magnitude can be used to measure the contrast similarity


Feature Similarity Index (FSIM)

  • Phase congruency—An example

Feature Similarity Index (FSIM)

  • Gradient magnitude

The Scharr operator is used to extract the gradient:

$$G_x = \frac{1}{16}\begin{bmatrix} 3 & 0 & -3 \\ 10 & 0 & -10 \\ 3 & 0 & -3 \end{bmatrix} * f(\mathbf{x}), \quad G_y = \frac{1}{16}\begin{bmatrix} 3 & 10 & 3 \\ 0 & 0 & 0 \\ -3 & -10 & -3 \end{bmatrix} * f(\mathbf{x})$$

Gradient magnitude (GM):

$$G = \sqrt{G_x^2 + G_y^2}$$
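A minimal sketch of the GM computation with the 3×3 kernels above (plain NumPy, 'valid' borders; a real implementation would pad the image or use a library convolution routine):

```python
import numpy as np

SCHARR_X = np.array([[3, 0, -3],
                     [10, 0, -10],
                     [3, 0, -3]]) / 16.0
SCHARR_Y = SCHARR_X.T

def _filter3(img, k):
    """'Valid' 3x3 correlation, written with array slicing."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + H - 2, j:j + W - 2]
    return out

def gradient_magnitude(img):
    img = np.asarray(img, dtype=float)
    gx = _filter3(img, SCHARR_X)
    gy = _filter3(img, SCHARR_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

step = np.zeros((8, 8)); step[:, 4:] = 1.0
gm = gradient_magnitude(step)  # responds only near the vertical edge
```

Note that correlation versus true convolution only flips the sign of the responses, which the magnitude discards.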


Feature Similarity Index (FSIM)

  • FSIM computation

Given two images f1 and f2, their PC maps PC1 and PC2, and their GM maps G1 and G2:

PC similarity:

$$S_{PC}(\mathbf{x}) = \frac{2\, PC_1(\mathbf{x})\, PC_2(\mathbf{x}) + T_1}{PC_1^2(\mathbf{x}) + PC_2^2(\mathbf{x}) + T_1}$$

GM similarity:

$$S_G(\mathbf{x}) = \frac{2\, G_1(\mathbf{x})\, G_2(\mathbf{x}) + T_2}{G_1^2(\mathbf{x}) + G_2^2(\mathbf{x}) + T_2}$$

$$\mathrm{FSIM} = \frac{\sum_{\mathbf{x}} S_{PC}(\mathbf{x})\, S_G(\mathbf{x})\, PC_m(\mathbf{x})}{\sum_{\mathbf{x}} PC_m(\mathbf{x})}$$

where $PC_m(\mathbf{x}) = \max\left(PC_1(\mathbf{x}), PC_2(\mathbf{x})\right)$, and $T_1$ and $T_2$ are constants
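Given precomputed PC and GM maps, the combination step is only a few lines. The values T1 = 0.85 and T2 = 160 below are the defaults reported in the FSIM paper; treat them as an assumption of this sketch:

```python
import numpy as np

def fsim_from_maps(pc1, pc2, g1, g2, T1=0.85, T2=160.0):
    """FSIM = sum(S_PC * S_G * PC_m) / sum(PC_m),
    with PC_m = max(PC1, PC2) weighting each location."""
    s_pc = (2 * pc1 * pc2 + T1) / (pc1 ** 2 + pc2 ** 2 + T1)
    s_g = (2 * g1 * g2 + T2) / (g1 ** 2 + g2 ** 2 + T2)
    pc_m = np.maximum(pc1, pc2)
    return np.sum(s_pc * s_g * pc_m) / np.sum(pc_m)

pc = np.linspace(0.1, 1.0, 16).reshape(4, 4)
g = np.linspace(0.0, 50.0, 16).reshape(4, 4)
print(fsim_from_maps(pc, pc, g, g))  # 1.0 for identical maps
```

The PC_m weighting means locations with strong phase congruency in either image dominate the pooled score.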


Feature Similarity Index (FSIM)

  • Extended to a color IQA index

Separate the chrominance from the luminance using the YIQ transform:

$$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.596 & -0.274 & -0.322 \\ 0.211 & -0.523 & 0.312 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$$

Let I1 (I2) and Q1 (Q2) be the I and Q channels of f1 (f2):

$$S_I(\mathbf{x}) = \frac{2\, I_1(\mathbf{x})\, I_2(\mathbf{x}) + T_3}{I_1^2(\mathbf{x}) + I_2^2(\mathbf{x}) + T_3}, \quad S_Q(\mathbf{x}) = \frac{2\, Q_1(\mathbf{x})\, Q_2(\mathbf{x}) + T_4}{Q_1^2(\mathbf{x}) + Q_2^2(\mathbf{x}) + T_4}$$

$$\mathrm{FSIM}_C = \frac{\sum_{\mathbf{x}} S_{PC}(\mathbf{x})\, S_G(\mathbf{x})\, \left[S_I(\mathbf{x})\, S_Q(\mathbf{x})\right]^{\lambda}\, PC_m(\mathbf{x})}{\sum_{\mathbf{x}} PC_m(\mathbf{x})}$$

where $\lambda > 0$ weights the contribution of the chrominance terms
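The RGB-to-YIQ step is a fixed linear transform; a short sketch (the helper name is illustrative):

```python
import numpy as np

RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])

def rgb_to_yiq(rgb):
    """Apply the YIQ matrix to an (..., 3) RGB array; the last axis
    of the result holds the Y, I, Q channels."""
    return np.tensordot(np.asarray(rgb, dtype=float), RGB2YIQ.T, axes=1)

# a gray pixel has (approximately) zero chrominance: Y = 1, I = Q = 0
print(rgb_to_yiq([1.0, 1.0, 1.0]))
```

Because the I and Q rows each sum to zero, any achromatic pixel maps entirely into the luminance channel.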


Feature Similarity Index (FSIM)—Schematic diagram


Summary

  • FSIM is an HVS‐driven IQA index
  • The HVS perceives an image mainly based on its low‐level features
  • PC and gradient magnitude are used
  • PC is also used to weight the contribution of each point to the overall similarity of two images
  • FSIM is extended to FSIMC, a color IQA index
  • FSIM (FSIMC) outperforms all the other state‐of‐the‐art IQA indices evaluated



Performance Metrics

  • How to evaluate the performance of IQA indices?
  • Some benchmark datasets were created
  • Reference images (free of quality distortions) are provided
  • For each reference image, a set of distorted images is created; they suffer from various kinds of quality distortions, such as Gaussian noise, JPEG compression, blur, etc.; let’s suppose there are altogether N distorted images
  • For each distorted image, there is an associated quality score given by human subjects; thus, altogether we have N subjective scores $\{s_i\}_{i=1}^{N}$
  • For the distorted images, we can also compute objective quality scores using an IQA index f, giving N objective scores $\{o_i\}_{i=1}^{N}$
  • f’s performance can be reflected by the rank order correlation coefficients between $\{s_i\}_{i=1}^{N}$ and $\{o_i\}_{i=1}^{N}$


Performance Metrics

  • How to evaluate the performance of IQA indices?

Spearman rank order correlation coefficient (SRCC):

$$\mathrm{SRCC} = 1 - \frac{6\sum_{i=1}^{N} d_i^2}{N(N^2 - 1)}$$

where $d_i$ is the difference between the ith image's ranks in the subjective and objective evaluations. Note: in Matlab, you can compute the SRCC by using srcc = corr(vect1, vect2, 'type', 'spearman')
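The same formula in a small Python sketch (it assumes there are no tied scores, as the textbook formula does; Matlab's `corr` handles ties):

```python
import numpy as np

def srcc(subjective, objective):
    """SRCC = 1 - 6*sum(d_i^2) / (N*(N^2 - 1)); d_i is the rank
    difference of item i. Assumes no tied values."""
    s = np.argsort(np.argsort(subjective))   # 0-based ranks
    o = np.argsort(np.argsort(objective))
    d = s - o
    n = len(d)
    return 1.0 - 6.0 * np.sum(d ** 2) / (n * (n ** 2 - 1))

print(srcc([1, 2, 3, 4, 5], [10, 20, 30, 40, 50]))  # 1.0 (same ranking)
```

Because only ranks enter the formula, SRCC is invariant to any monotonic mapping of the objective scores.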


Performance Metrics

  • How to evaluate the performance of IQA indices?

Kendall rank order correlation coefficient (KRCC):

$$\mathrm{KRCC} = \frac{n_c - n_d}{0.5\, N (N - 1)}$$

where $n_c$ is the number of concordant pairs and $n_d$ is the number of discordant pairs. Note: in Matlab, you can compute the KRCC by using krcc = corr(vect1, vect2, 'type', 'kendall')
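The pair-counting definition can also be written directly (pure Python; again assuming no ties, which would otherwise call for the tie-corrected tau-b variant):

```python
from itertools import combinations

def krcc(subjective, objective):
    """KRCC = (n_c - n_d) / (0.5 * N * (N - 1)): concordant minus
    discordant pairs over the total number of pairs."""
    nc = nd = 0
    pairs = list(zip(subjective, objective))
    for (s1, o1), (s2, o2) in combinations(pairs, 2):
        prod = (s1 - s2) * (o1 - o2)
        if prod > 0:
            nc += 1   # both orderings agree
        elif prod < 0:
            nd += 1   # orderings disagree
    n = len(pairs)
    return (nc - nd) / (0.5 * n * (n - 1))

print(krcc([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 -- every pair concordant
```

KRCC is typically a little smaller in magnitude than SRCC on the same data, since it counts pairwise inversions rather than squared rank displacements.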


Performance Metrics

  • Popularly used benchmark datasets for evaluating IQA indices

Database name | Reference images | Distorted images | Observer numbers | Distortion types
------------- | ---------------- | ---------------- | ---------------- | ----------------
TID2013 [1]   | 25 | 3000 | 971 | 24
TID2008 [2]   | 25 | 1700 | 838 | 17
CSIQ [3]      | 30 | 866  | 35  | 6
LIVE [4]      | 29 | 779  | 161 | 5

[1] http://www.ponomarenko.info/tid2013.htm
[2] http://www.ponomarenko.info/tid2008.htm
[3] http://vision.okstate.edu/?loc=csiq
[4] http://live.ece.utexas.edu/research/Quality/


Performance Metrics—Comparison of IQA Indices

Dataset | Metric | FSIM | FSIMC | MS‐SSIM | VIF | SSIM | IFC | VSNR | NQM
------- | ------ | ---- | ----- | ------- | --- | ---- | --- | ---- | ---
TID2013 | SRCC | 0.8015 | 0.8510 | 0.7859 | 0.6769 | 0.7417 | 0.5389 | 0.6812 | 0.6392
TID2013 | KRCC | 0.6289 | 0.6665 | 0.6047 | 0.5147 | 0.5588 | 0.3939 | 0.5084 | 0.4740
TID2008 | SRCC | 0.8805 | 0.8840 | 0.8528 | 0.7496 | 0.7749 | 0.5692 | 0.7046 | 0.6243
TID2008 | KRCC | 0.6946 | 0.6991 | 0.6543 | 0.5863 | 0.5768 | 0.4261 | 0.5340 | 0.4608
CSIQ    | SRCC | 0.9242 | 0.9310 | 0.9138 | 0.9193 | 0.8756 | 0.7482 | 0.8106 | 0.7402
CSIQ    | KRCC | 0.7567 | 0.7690 | 0.7397 | 0.7534 | 0.6907 | 0.5740 | 0.6247 | 0.5638
LIVE    | SRCC | 0.9634 | 0.9645 | 0.9445 | 0.9631 | 0.9479 | 0.9234 | 0.9274 | 0.9086
LIVE    | KRCC | 0.8337 | 0.8363 | 0.7922 | 0.8270 | 0.7963 | 0.7540 | 0.7616 | 0.7413

Note: For more details about full reference IQA, you can refer to http://sse.tongji.edu.cn/linzhang/IQA/IQA.htm


Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Summary

Background introduction—Problem definition

  • No reference image quality assessment (NR‐IQA)
  • Devise computational models to estimate the quality of a given image as perceived by human beings
  • The only information an NR‐IQA algorithm receives is the image whose quality is being assessed itself


Background introduction—Problem definition

  • No reference image quality assessment (NR‐IQA)

What do you think of the quality of these two images? Though you are not provided the ground‐truth reference images, you may still judge the quality of these two images as poor


Background introduction—Problem definition

  • No reference image quality assessment (NR‐IQA)

What do you think about the qualities of these images? Rank them. Remember that you DO NOT know the ground‐truth "high quality" reference image


Background introduction—Typical methods

  • Opinion‐aware approaches
  • These approaches require a dataset comprising distorted images and associated subjective scores
  • At the training stage, feature vectors are extracted from images and then a regression model, mapping the feature vectors to the subjective scores, is learned
  • At the testing stage, a feature vector is extracted from the test image, and its quality score is predicted by feeding the feature vector to the learned regression model


Background introduction—Typical methods

  • Opinion‐aware approaches

[Figure: a regression model is learned from the training images' feature vectors and their subjective scores]


Background introduction—Typical methods

  • Opinion‐aware approaches
  • BIQI [1]
  • BRISQUE [2]
  • BLIINDS [3]
  • BLIINDS‐II [4]
  • DIIVINE [5]
  • CORNIA [6]
  • LBIQ [7]

Proposed by Bovik’s group, Univ. Texas http://live.ece.utexas.edu/


Background introduction—Typical methods

  • Opinion‐aware approaches

[1] A. Moorthy and A. Bovik, A two‐step framework for constructing blind image quality indices, IEEE Sig. Process. Letters, 17: 513‐516, 2010
[2] A. Mittal, A.K. Moorthy, and A.C. Bovik, No‐reference image quality assessment in the spatial domain, IEEE Trans. Image Process., 21: 4695‐4708, 2012
[3] M.A. Saad, A.C. Bovik, and C. Charrier, A DCT statistics‐based blind image quality index, IEEE Sig. Process. Letters, 17: 583‐586, 2010
[4] M.A. Saad, A.C. Bovik, and C. Charrier, Blind image quality assessment: A natural scene statistics approach in the DCT domain, IEEE Trans. Image Process., 21: 3339‐3352, 2012
[5] A.K. Moorthy and A.C. Bovik, Blind image quality assessment: from natural scene statistics to perceptual quality, IEEE Trans. Image Process., 20: 3350‐3364, 2011
[6] P. Ye, J. Kumar, L. Kang, and D. Doermann, Unsupervised feature learning framework for no‐reference image quality assessment, CVPR, 2012
[7] H. Tang, N. Joshi, and A. Kapoor, Learning a blind measure of perceptual image quality, CVPR, 2011


Background introduction—Typical methods

  • Opinion‐unaware approaches
  • These approaches DO NOT require a dataset comprising distorted images and associated subjective scores
  • A typical method is NIQE [1]
  • Offline learning stage: construct a collection of quality‐aware features from pristine images and fit them to a multivariate Gaussian (MVG) model
  • Testing stage: the quality of a test image is expressed as the distance between an MVG fit of its features and the learned pristine MVG model

[1] A. Mittal et al., Making a "completely blind" image quality analyzer, IEEE Signal Process. Letters, 20(3): 209‐212, 2013


Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Motivations and our contributions
  • NIS‐induced quality‐aware features
  • Pristine model learning
  • IL‐NIQE index
  • Experimental results
  • Summary

Motivations [1]

  • Opinion‐unaware approaches seem appealing, so we want to propose an opinion‐unaware approach
  • Design rationale
  • Natural images without quality distortions possess regular statistical properties that can be measurably modified by the presence of distortions
  • Deviations from the regularity of natural statistics, when quantified appropriately, can be used to assess the perceptual quality of an image
  • NIS‐based features have been proved powerful. Any other NIS‐based features?

[1] Lin Zhang et al., A feature‐enriched completely blind image quality evaluator, IEEE Trans. Image Processing, 24(8): 2579‐2591, 2015

SLIDE 90

Contributions

  • A novel “opinion‐unaware” NR‐IQA index, IL‐NIQE (Integrated Local‐NIQE)
  • A set of prudently designed NIS‐induced quality‐aware features
  • A Bhattacharyya distance based metric to measure the quality of a local image patch
  • A visual saliency based quality score pooling scheme
  • A thorough evaluation of the performance of modern NR‐IQA indices

SLIDE 91

Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Motivations and our contributions
  • NIS‐induced quality‐aware features
  • Pristine model learning
  • IL‐NIQE index
  • Experimental results
  • Summary
SLIDE 92

  • Statistics of normalized luminance
  • The mean subtracted contrast normalized (MSCN) coefficients, denoted $I_n(x, y)$, have been observed to follow a unit normal distribution when computed from natural images without quality distortions [1]
  • This model, however, is violated when images are subjected to quality distortions; the degree of violation can be indicative of distortion severity

IL‐NIQE—NIS‐induced quality‐aware features

[1] D.L. Ruderman. The statistics of natural images. Netw. Comput. Neural Syst., 5(4):517-548, 1994.

SLIDE 93

  • Statistics of normalized luminance

IL‐NIQE—NIS‐induced quality‐aware features

$$I_n(x, y) = \frac{I(x, y) - \mu(x, y)}{\sigma(x, y) + 1}$$

where

$$\mu(x, y) = \sum_{k=-K}^{K}\sum_{l=-L}^{L} \omega_{k,l}\, I(x+k, y+l)$$

$$\sigma(x, y) = \sqrt{\sum_{k=-K}^{K}\sum_{l=-L}^{L} \omega_{k,l}\, \big[I(x+k, y+l) - \mu(x, y)\big]^2}$$

and $\omega = \{\omega_{k,l}\}$ is a 2D circularly symmetric Gaussian weighting window. For distortion‐free natural images, $I_n(x, y)$ conforms to a Gaussian distribution
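As a concrete sketch (not the authors' code), the MSCN coefficients defined above can be computed with NumPy/SciPy using a Gaussian-weighted local mean and standard deviation; the window scale `sigma=7/6` is a common choice in BRISQUE/NIQE-style implementations and is an assumption here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn(image, sigma=7.0 / 6.0):
    """Compute mean subtracted contrast normalized (MSCN) coefficients I_n(x, y).

    image: 2D float array (grayscale luminance).
    sigma: std of the Gaussian weighting window (a common choice, assumed here).
    """
    image = np.asarray(image, dtype=np.float64)
    mu = gaussian_filter(image, sigma)                     # local weighted mean
    var = gaussian_filter(image * image, sigma) - mu * mu  # local weighted variance
    sd = np.sqrt(np.maximum(var, 0.0))                     # local standard deviation
    return (image - mu) / (sd + 1.0)                       # "+1" stabilizes flat regions
```

For a pristine natural image, the histogram of `mscn(img)` is close to Gaussian, which is exactly the regularity the features below measure.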

SLIDE 94

  • Statistics of normalized luminance
  • We use a generalized Gaussian distribution (GGD) to model the distribution of $I_n(x, y)$

IL‐NIQE—NIS‐induced quality‐aware features

$$g(x; \alpha, \beta) = \frac{\alpha}{2\beta\,\Gamma(1/\alpha)} \exp\left(-\left(\frac{|x|}{\beta}\right)^{\alpha}\right)$$

Density function of the GGD. The parameters $(\alpha, \beta)$ are used as quality‐aware features; they can be estimated from $\{I_n(x, y)\}$ by MLE
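A minimal way to carry out the MLE step is `scipy.stats.gennorm`, whose density matches the GGD up to parameter naming (`beta` plays the role of the shape $\alpha$, `scale` the role of $\beta$); this is an illustrative sketch, not the authors' estimator:

```python
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(1)
# Synthetic MSCN-like samples drawn from a GGD with shape 1.5 and scale 0.8
samples = gennorm.rvs(beta=1.5, scale=0.8, size=20000, random_state=rng)

# MLE fit; fix the location at 0 since MSCN coefficients are zero-mean
shape_hat, loc_hat, scale_hat = gennorm.fit(samples, floc=0)
print(shape_hat, scale_hat)  # should be close to 1.5 and 0.8
```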

SLIDE 95

  • Statistics of MSCN products
  • The distribution of the products of pairs of adjacent MSCN coefficients, $I_n(x, y)I_n(x, y+1)$, $I_n(x, y)I_n(x+1, y)$, $I_n(x, y)I_n(x+1, y+1)$, and $I_n(x, y)I_n(x+1, y-1)$, can also capture quality distortions

IL‐NIQE—NIS‐induced quality‐aware features

SLIDE 96

  • Statistics of MSCN products
  • They can be modeled by an asymmetric generalized Gaussian distribution (AGGD),

IL‐NIQE—NIS‐induced quality‐aware features

$$g(x; \gamma, \beta_l, \beta_r) = \frac{\gamma}{(\beta_l + \beta_r)\,\Gamma(1/\gamma)} \begin{cases} \exp\left(-\left(-x/\beta_l\right)^{\gamma}\right), & x \le 0 \\ \exp\left(-\left(x/\beta_r\right)^{\gamma}\right), & x > 0 \end{cases}$$

The mean of the AGGD is

$$\eta = (\beta_r - \beta_l)\,\frac{\Gamma(2/\gamma)}{\Gamma(1/\gamma)}$$

$(\gamma, \beta_l, \beta_r, \eta)$ are used as “quality‐aware” features
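The AGGD parameters are usually estimated by moment matching rather than MLE; the sketch below follows the estimator popularized by BRISQUE-style implementations (an assumption about how one would implement this slide, not the authors' code):

```python
import numpy as np
from scipy.special import gamma as G

def fit_aggd(x):
    """Moment-matching AGGD fit. Returns (gamma, beta_l, beta_r, eta)."""
    x = np.asarray(x, dtype=np.float64)
    sigma_l = np.sqrt(np.mean(x[x < 0] ** 2))   # left-side spread
    sigma_r = np.sqrt(np.mean(x[x >= 0] ** 2))  # right-side spread
    gamma_hat = sigma_l / sigma_r
    r_hat = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    R_hat = r_hat * (gamma_hat**3 + 1) * (gamma_hat + 1) / (gamma_hat**2 + 1) ** 2

    # Solve rho(g) = Gamma(2/g)^2 / (Gamma(1/g) Gamma(3/g)) = R_hat on a shape grid
    grid = np.arange(0.2, 10.0, 0.001)
    rho = G(2 / grid) ** 2 / (G(1 / grid) * G(3 / grid))
    g = grid[np.argmin((rho - R_hat) ** 2)]

    conv = np.sqrt(G(1 / g) / G(3 / g))
    beta_l, beta_r = sigma_l * conv, sigma_r * conv
    eta = (beta_r - beta_l) * G(2 / g) / G(1 / g)   # mean of the AGGD
    return g, beta_l, beta_r, eta
```

For symmetric Gaussian data the fit recovers a shape near 2 and a mean near 0, as expected.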

SLIDE 97

  • Statistics of partial derivatives and gradient magnitudes
  • We found that when quality distortions are introduced to an image, the distributions of its partial derivatives and gradient magnitudes change

IL‐NIQE—NIS‐induced quality‐aware features

SLIDE 98

  • Statistics of partial derivatives and gradient magnitudes

IL‐NIQE—NIS‐induced quality‐aware features

[Fig. 1: example images (a)–(e)]

SLIDE 99

  • Statistics of partial derivatives and gradient magnitudes

IL‐NIQE—NIS‐induced quality‐aware features

[Figure: histograms of the normalized partial derivatives and of the normalized gradient magnitudes (Percentage (%) vs. value) for the images in Figs. 1(a)–1(e)]
SLIDE 100

  • Statistics of partial derivatives and gradient magnitudes

IL‐NIQE—NIS‐induced quality‐aware features

Partial derivatives

$$I_x = I * G_x(x, y), \quad I_y = I * G_y(x, y)$$

where

$$G_x(x, y) = -\frac{x}{2\pi\sigma^4} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right), \quad G_y(x, y) = -\frac{y}{2\pi\sigma^4} \exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$$

Gradient magnitudes

$$GM(x, y) = \sqrt{I_x^2 + I_y^2}$$

SLIDE 101

  • Statistics of partial derivatives and gradient magnitudes
  • We use a GGD to model the distributions of $I_x$ (or $I_y$) and take its parameters as features
  • We use a Weibull distribution [1] to model the distribution of the gradient magnitudes and use the parameters as features,

IL‐NIQE—NIS‐induced quality‐aware features

$$h(x; a, b) = \frac{a}{b}\left(\frac{x}{b}\right)^{a-1} \exp\left(-\left(\frac{x}{b}\right)^{a}\right), \quad x \ge 0$$

$a$ and $b$ are used as features

[1] J.M. Geusebroek and A.W.M. Smeulders. A six-stimulus theory for stochastic texture. Int. J. Comp. Vis., 62(1): 7-16, 2005.
SLIDE 102

  • Statistics of image’s responses to log‐Gabor filters
  • Motivation: neurons in the visual cortex respond selectively to a stimulus’ orientation and frequency; statistics of an image’s multi‐scale, multi‐orientation decompositions should therefore be useful for designing an NR‐IQA model

IL‐NIQE—NIS‐induced quality‐aware features

SLIDE 103

  • Statistics of image’s responses to log‐Gabor filters
  • For multi‐scale multi‐orientation filtering, we adopt the log‐Gabor filter,

IL‐NIQE—NIS‐induced quality‐aware features

$$G(\omega, \theta_j) = \underbrace{\exp\left(-\frac{\big(\log(\omega/\omega_0)\big)^2}{2\sigma_r^2}\right)}_{\text{radial part}} \cdot \underbrace{\exp\left(-\frac{(\theta - \theta_j)^2}{2\sigma_\theta^2}\right)}_{\text{angular part}}$$

where $\theta_j = j\pi/J$ is the orientation angle, $\omega_0$ is the center frequency, $\sigma_r$ controls the filter’s radial bandwidth, and $\sigma_\theta$ determines the angular bandwidth
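One way to realize this filter is directly in the frequency domain; the sketch below is illustrative, and the bandwidth values `sigma_r` and `sigma_theta` are assumptions, not the paper's settings:

```python
import numpy as np

def log_gabor(shape, omega0, theta_j, sigma_r=0.6, sigma_theta=0.4):
    """Build one log-Gabor filter G(omega, theta_j) on the FFT frequency grid."""
    rows, cols = shape
    fy = np.fft.fftfreq(rows)[:, None]
    fx = np.fft.fftfreq(cols)[None, :]
    omega = np.sqrt(fx**2 + fy**2)
    omega[0, 0] = 1.0                  # avoid log(0) at DC; DC is zeroed below
    theta = np.arctan2(fy, fx)

    radial = np.exp(-(np.log(omega / omega0) ** 2) / (2 * sigma_r**2))
    radial[0, 0] = 0.0                 # a log-Gabor filter has zero DC response
    # wrap angular differences into [-pi, pi]
    dtheta = np.arctan2(np.sin(theta - theta_j), np.cos(theta - theta_j))
    angular = np.exp(-(dtheta**2) / (2 * sigma_theta**2))
    return radial * angular

def filter_image(image, omega0=0.1, theta_j=0.0):
    """The even/odd responses e, o are the real and imaginary parts of the
    inverse FFT of the image spectrum multiplied by the filter."""
    spectrum = np.fft.fft2(image) * log_gabor(image.shape, omega0, theta_j)
    response = np.fft.ifft2(spectrum)
    return response.real, response.imag   # e_{n,j}(x), o_{n,j}(x)
```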

SLIDE 104

  • Statistics of image’s responses to log‐Gabor filters

IL‐NIQE—NIS‐induced quality‐aware features

With log‐Gabor filters having $J$ orientations and $N$ center frequencies, we can get the response maps

$$\{(e_{n,j}(\mathbf{x}),\, o_{n,j}(\mathbf{x})) : n = 0, \ldots, N-1,\; j = 0, \ldots, J-1\}$$

where $e_{n,j}(\mathbf{x})$ and $o_{n,j}(\mathbf{x})$ represent the image’s responses to the real and imaginary parts of the log‐Gabor filter

We extract the quality‐aware features as follows:

a) Use a GGD to fit the distribution of $\{e_{n,j}(\mathbf{x})\}$ (or $\{o_{n,j}(\mathbf{x})\}$) and take the model parameters $\alpha$ and $\beta$ as features.
b) Use a GGD to model the distribution of the partial derivatives of $\{e_{n,j}(\mathbf{x})\}$ (or $\{o_{n,j}(\mathbf{x})\}$) and also take the two model parameters as features.
c) Use a Weibull model to fit the distribution of the gradient magnitudes of $\{e_{n,j}(\mathbf{x})\}$ (or $\{o_{n,j}(\mathbf{x})\}$) and take the corresponding parameters $a$ and $b$ as features.

SLIDE 105

  • Statistics of colors
  • Ruderman et al. showed that in a logarithmic‐scale opponent color space, the distributions of the image data conform well to a Gaussian [1]

IL‐NIQE—NIS‐induced quality‐aware features

[1] D.L. Ruderman et al. Statistics of cone response to natural images: implications for visual coding. J. Opt. Soc. Am. A, 15(8): 2036-2045, 1998.

SLIDE 106

  • Statistics of colors

IL‐NIQE—NIS‐induced quality‐aware features

RGB to logarithmic signals with mean subtracted,

$$\mathcal{R}(x, y) = \log R(x, y) - \langle \log R(x, y) \rangle$$
$$\mathcal{G}(x, y) = \log G(x, y) - \langle \log G(x, y) \rangle$$
$$\mathcal{B}(x, y) = \log B(x, y) - \langle \log B(x, y) \rangle$$

where $\langle \log X(x, y) \rangle$ denotes the mean of $\log X(x, y)$. Then, to the opponent color space:

$$l_1(x, y) = (\mathcal{R} + \mathcal{G} + \mathcal{B})/\sqrt{3}$$
$$l_2(x, y) = (\mathcal{R} + \mathcal{G} - 2\mathcal{B})/\sqrt{6}$$
$$l_3(x, y) = (\mathcal{R} - \mathcal{G})/\sqrt{2}$$

For natural images, $l_1$, $l_2$, and $l_3$ conform well to Gaussian
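The transform above is a few lines of NumPy; the small `eps` that guards against `log(0)` is an implementation detail assumed here, not something from the slides:

```python
import numpy as np

def log_opponent(rgb, eps=1e-6):
    """Map an H x W x 3 RGB image to the mean-subtracted log opponent
    channels l1, l2, l3 defined above."""
    rgb = np.asarray(rgb, dtype=np.float64) + eps
    logc = np.log(rgb)
    logc -= logc.mean(axis=(0, 1), keepdims=True)   # subtract per-channel mean
    R, G, B = logc[..., 0], logc[..., 1], logc[..., 2]
    l1 = (R + G + B) / np.sqrt(3)
    l2 = (R + G - 2 * B) / np.sqrt(6)
    l3 = (R - G) / np.sqrt(2)
    return l1, l2, l3
```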

SLIDE 107

  • Statistics of colors

IL‐NIQE—NIS‐induced quality‐aware features

[Figure: histograms of the $l_1$, $l_2$, and $l_3$ coefficients (Percentage (%) vs. coefficient value) for the images in Figs. 3(a)–3(c)]

SLIDE 108

  • Statistics of colors

IL‐NIQE—NIS‐induced quality‐aware features

We use a Gaussian to fit the distributions of $l_1$, $l_2$, and $l_3$,

$$f(x; \zeta, \rho^2) = \frac{1}{\sqrt{2\pi}\,\rho} \exp\left(-\frac{(x - \zeta)^2}{2\rho^2}\right)$$

For each of the $l_1$, $l_2$, and $l_3$ channels, we estimate the two parameters $\zeta$ and $\rho^2$ and take them as quality‐aware features

SLIDE 109

Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Motivations and our contributions
  • NIS‐induced quality‐aware features
  • Pristine model learning
  • IL‐NIQE index
  • Experimental results
  • Summary
SLIDE 110

  • The pristine model acts as a “standard” representing the characteristics of high quality images
  • It is learned from a pristine image set collected by us, which contains 92 high quality images

Pristine model learning

Sample high quality images

SLIDE 111

  • Step 1: each pristine image is partitioned into patches
  • Step 2: high‐contrast patches are selected based on the local variance field
  • Step 3: from each selected patch, the quality‐aware features are extracted. Thus, we get a feature vector set,

Pristine model learning

$$\{\mathbf{x}_i \in \mathbb{R}^{d \times 1} : i = 1, \ldots, M\},$$

where $M$ is the number of patches and $d$ is the feature dimension. $d$ is very large, so we need a further dimension reduction operation

SLIDE 112

  • Step 4: dimension reduction by PCA

Pristine model learning

Suppose $\Psi \in \mathbb{R}^{d \times m}$, $m < d$, is the dimension reduction matrix. After the dimension reduction,

$$\mathbf{x}_i \in \mathbb{R}^{d \times 1} \;\rightarrow\; \mathbf{x}'_i = \Psi^{T}\mathbf{x}_i \in \mathbb{R}^{m \times 1}$$

  • Step 5: feed $\{\mathbf{x}'_i\}_{i=1}^{M}$ into a MVG model and regard it as the pristine model

$$f(\mathbf{x}) = \frac{1}{(2\pi)^{m/2}\,|\Sigma|^{1/2}} \exp\left(-\frac{1}{2}(\mathbf{x} - \mathbf{v})^{T}\Sigma^{-1}(\mathbf{x} - \mathbf{v})\right)$$

where $\mathbf{v}$ is the mean vector and $\Sigma$ is the covariance matrix. The mean vector and the covariance matrix of the pristine model are denoted by $\mathbf{v}_1$ and $\Sigma_1$
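Step 4 reduces to an eigendecomposition of the feature covariance matrix; a minimal sketch (assuming features are stacked as rows of an M-by-d matrix `X`):

```python
import numpy as np

def pca_projection(X, m):
    """Learn the d x m PCA projection matrix Psi from feature vectors X (M x d):
    keep the m eigenvectors of the covariance matrix with largest eigenvalues."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    psi = eigvecs[:, ::-1][:, :m]            # top-m principal directions (d x m)
    return psi

# Reduced features x'_i = Psi^T x_i, i.e. X @ psi for row-stacked features
```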

SLIDE 113

Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Motivations and our contributions
  • NIS‐induced quality‐aware features
  • Pristine model learning
  • IL‐NIQE index
  • Experimental results
  • Summary
SLIDE 114

  • Step 1: partition the test image into patches
  • Step 2: from each patch, we extract a feature vector; thus, we get a feature vector set,

IL‐NIQE index

$$\{\mathbf{y}_i \in \mathbb{R}^{d \times 1} : i = 1, \ldots, M_t\},$$

where $M_t$ denotes the number of patches extracted from the test image

  • Step 3: reduce the dimension of $\mathbf{y}_i$ as

$$\mathbf{y}'_i = \Psi^{T}\mathbf{y}_i, \quad \mathbf{y}'_i \in \mathbb{R}^{m \times 1}$$

  • Step 4: fit a MVG from $\{\mathbf{y}'_i\}_{i=1}^{M_t}$ and denote its covariance matrix as $\Sigma_2$

SLIDE 115

  • Step 5: the quality $q_i$ of patch $i$ is measured as

IL‐NIQE index

$$q_i = \sqrt{(\mathbf{v}_1 - \mathbf{y}'_i)^{T}\left(\frac{\Sigma_1 + \Sigma_2}{2}\right)^{-1}(\mathbf{v}_1 - \mathbf{y}'_i)}$$

Such a metric is inspired by the Bhattacharyya distance

  • Step 6: visual saliency guided quality pooling
  • Highly salient patches are given high weights
  • The patch saliency $s_i$ is computed as the sum of the saliency values covered by patch $i$
  • For saliency computation, we use the Spectral Residual approach [1]

$$q = \sum_{i=1}^{M_t} q_i s_i \Big/ \sum_{i=1}^{M_t} s_i$$

[1] X. Hou and L. Zhang. Saliency detection: a spectral residual approach. CVPR'07, 1-8, 2007.
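Steps 4–6 can be sketched in a few lines; this is an illustrative reimplementation under the stated formulas, not the authors' code (a pseudo-inverse is used for numerical safety, which is an assumption):

```python
import numpy as np

def ilniqe_score(Y, v1, Sigma1, s=None):
    """Per-patch quality via the Bhattacharyya-inspired distance, then
    saliency-weighted pooling.

    Y:      M_t x m array of reduced patch features y'_i (rows).
    v1:     pristine mean vector (length m).
    Sigma1: pristine covariance matrix (m x m).
    s:      patch saliencies s_i (uniform weights if None).
    """
    Sigma2 = np.cov(Y, rowvar=False)          # MVG fit to the test image's patches
    pinv = np.linalg.pinv((Sigma1 + Sigma2) / 2)
    diffs = v1 - Y                            # (v1 - y'_i) for every patch
    q = np.sqrt(np.einsum('ij,jk,ik->i', diffs, pinv, diffs))  # per-patch q_i
    if s is None:
        s = np.ones(len(q))
    return float(np.sum(q * s) / np.sum(s))   # saliency-weighted pooling
```

Patches whose feature distribution drifts away from the pristine model receive larger distances, so the pooled score grows with distortion severity.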

SLIDE 116

Offline pristine model learning

pristine images → patch extraction → n high‐contrast patches → feature extraction → n feature vectors → MVG fitting → MVG parameters $\mathbf{v}_1$ and $\Sigma_1$

Online quality evaluation of a test image

test image → patch extraction → k image patches → feature extraction → k feature vectors → quality score computation for each patch → $q_1, q_2, \ldots, q_k$ → final quality score pooling → $q = \sum_{i=1}^{k} q_i / k$

SLIDE 117

Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Background introduction
  • Our proposed method: IL‐NIQE
  • Motivations and our contributions
  • NIS‐induced quality‐aware features
  • Pristine model learning
  • IL‐NIQE index
  • Experimental results
  • Summary
SLIDE 118

Protocol

  • Protocol for experiments
  • Experiments are conducted on TID2013, CSIQ, LIVE, and the LIVE Multiply‐Distortion datasets (MD1 and MD2)
  • Spearman rank order correlation coefficient (SRCC) and Pearson linear correlation coefficient (PLCC) are adopted as evaluation criteria

Dataset    No. Distorted Images   No. Distortion Types   Contains multiple distortions?
TID2013    3000                   24                     YES
CSIQ       866                    6                      NO
LIVE       799                    5                      NO
LIVE MD1   225                    1                      YES
LIVE MD2   225                    1                      YES

Benchmark image datasets used

SLIDE 119

Protocol

  • IL‐NIQE was compared with
  • “opinion‐aware” approaches
  • BIQI, BRISQUE, BLIINDS2, DIIVINE, and CORNIA
  • “opinion‐unaware” approaches
  • NIQE and QAC
SLIDE 120

Cross‐datasets evaluation

  • Drawback of the single‐database evaluation strategy
  • It cannot faithfully measure the prediction performance of NR‐IQA indices since it cannot reflect their “blindness”
  • At the training stage, the “opinion‐aware” approaches have already seen all the possible distortion types that will appear at the testing stage
  • Consequently, we train the “opinion‐aware” approaches on one dataset and test their performance on the other datasets

SLIDE 121

Cross‐datasets evaluation—Training on LIVE

          TID2013        CSIQ           MD1            MD2
          SRCC   PLCC    SRCC   PLCC    SRCC   PLCC    SRCC   PLCC
BIQI      0.394  0.468   0.619  0.695   0.654  0.774   0.490  0.766
BRISQUE   0.367  0.475   0.557  0.742   0.791  0.866   0.299  0.459
BLIINDS2  0.393  0.470   0.577  0.724   0.665  0.710   0.015  0.302
DIIVINE   0.355  0.545   0.596  0.697   0.708  0.767   0.602  0.702
CORNIA    0.429  0.575   0.663  0.764   0.839  0.871   0.841  0.864
NIQE      0.311  0.398   0.627  0.716   0.871  0.909   0.795  0.848
QAC       0.372  0.437   0.490  0.708   0.396  0.538   0.471  0.672
IL‐NIQE   0.493  0.586   0.813  0.852   0.891  0.902   0.882  0.895

Evaluation results when being trained on LIVE

SLIDE 122

Cross‐datasets evaluation—Training on LIVE

       BIQI   BRISQUE  BLIINDS2  DIIVINE  CORNIA  NIQE   QAC    IL‐NIQE
SRCC   0.458  0.424    0.424     0.435    0.519   0.429  0.402  0.598
PLCC   0.545  0.548    0.525     0.595    0.643   0.512  0.509  0.672

Weighted‐average performance derived from the last table

SLIDE 123

Cross‐datasets evaluation—Training on TID2013

Evaluation results when being trained on TID2013

          LIVE           CSIQ           MD1            MD2
          SRCC   PLCC    SRCC   PLCC    SRCC   PLCC    SRCC   PLCC
BIQI      0.047  0.311   0.010  0.181   0.156  0.175   0.332  0.380
BRISQUE   0.088  0.108   0.639  0.728   0.625  0.807   0.184  0.591
BLIINDS2  0.076  0.089   0.456  0.527   0.507  0.690   0.032  0.222
DIIVINE   0.042  0.093   0.146  0.255   0.639  0.669   0.252  0.367
CORNIA    0.097  0.132   0.656  0.750   0.772  0.847   0.655  0.719
NIQE      0.906  0.904   0.627  0.716   0.871  0.909   0.795  0.848
QAC       0.868  0.863   0.490  0.708   0.396  0.538   0.471  0.672
IL‐NIQE   0.898  0.903   0.813  0.852   0.891  0.902   0.882  0.895

SLIDE 124

Cross‐datasets evaluation—Training on TID2013

Weighted‐average performance derived from the last table

       BIQI   BRISQUE  BLIINDS2  DIIVINE  CORNIA  NIQE   QAC    IL‐NIQE
SRCC   0.074  0.384    0.275     0.172    0.461   0.775  0.618  0.860
PLCC   0.250  0.491    0.349     0.251    0.527   0.821  0.744  0.881

SLIDE 125

  • We have the following findings
  • “Opinion‐aware” indices depend heavily on the training dataset; these approaches perform better when trained on LIVE than when trained on TID2013
  • The proposed method IL‐NIQE achieves the best results in nearly all cases
  • The prominent performance of IL‐NIQE indicates that, if designed properly, an “opinion‐unaware” approach can obtain much better prediction performance than its “opinion‐aware” counterparts

Cross‐datasets evaluation

SLIDE 126

Contents

  • Problem definition
  • Full reference image quality assessment
  • No reference image quality assessment
  • Summary
SLIDE 127

  • The research in IQA aims to propose computational models that compute image quality in a manner consistent with subjective judgments
  • IQA problems can be classified as FR‐IQA, RR‐IQA, and NR‐IQA problems according to the availability of the reference information
  • Quality scores predicted by modern FR‐IQA methods can be highly consistent with subjective ratings
  • There is still large room for the development of NR‐IQA methods

Summary

SLIDE 128

Thanks for your attention