Measuring Image Quality


  1. Measuring Image Quality
  Heath Nielson


  2. Measuring Quality: What
  • Identify and measure attributes of an image that can be used to determine whether the perceived quality meets the expectations of the organization.


  3. Measuring Quality: Why
  • Should be part of any document processing system
  • Guarantees consistency
  • Useful for identifying upstream process problems
    – Manual
    – Automated


  4. Measuring Quality: How
  • Subjective
    – Easy to do
    – Not always predictable or consistent
  • Objective
    – Predictable and consistent
    – Hard to measure


  5. Audit: Brute Force


  6. Audit: Exception Based


  7. Contributing Factors Affecting Quality
  • State of original document
  • Digitization
    – Resolution
    – Lighting
    – Exposure
    – Focus
  • Post-processing
    – Rotation
    – Cropping
    – Contrast enhancement
    – Lossy compression


  8. Quality Standards: DIRT (Digital Image Research Team)
  • Composed of operational and development personnel
  • Identify image attributes affecting quality
  • Provide, where possible, metrics to measure those attributes
  • Determine acceptable ranges for attributes
  • Provide tools and training to facilitate consistent quality


  9. Quality Standards: DIRT Specification
  • Defines image attributes and desirable values for each:
    – Tonal Range
    – Tonal Resolution
    – Even Exposure
    – Spatial Resolution
    – Contrast
    – Colorspace
    – Focus
    – Blur
    – File format
    – File name
    – Dimensions
    – Size
    – Complete Capture
    – Orientation
    – Skew
    – Fixity


  10. Subjective Evaluation
  • Direct Numerical Category Scaling
    – Subjects classify images into a number of categories
    – Usually uses a numerical scale, e.g. 1=Bad, 5=Good
    – Subjects tend to use separate internal scales
      • Different “types” of images
      • Different types of distortion
  • Functional Measurement Theory
    – Compares image qualities
    – Subjects indicate which image is preferred
    – More evaluations required: each sampled image must be compared with every other sampled image, so n sampled images need n(n−1)/2 pairwise comparisons


  11. Subjective Evaluation: JPEG Compression
  • Sample images
    – Randomly selected
    – Include images from both scanned microfilm and camera capture
    – Each image compressed at several predetermined settings
    – The original, uncompressed image is also included
  • Images were presented randomly
  • About 10% of the time a previously evaluated image is presented for re-evaluation
  • Each image was evaluated by 3 different subjects


  12. Subjective Evaluation: JPEG Compression
  • Direct category scaling method
    – Asked to classify images on a scale of 1–5
  • Zoom image 1–100%
  • Pan around
  • No time limit
  • No calibration of monitors or ambient light


  13. Subjective Evaluation: JPEG Compression


  14. Subjective Evaluation: Consistency
  [Chart: for each of 14 users, the percentage of re-evaluated images whose second score differed from the first by −4 to +4; y-axis: Percent, 0–80]


  15. Subjective Evaluation: Raw scores
  [Chart: number of images given each quality rating (1=Worst, 5=Best), grouped by JPEG setting 5, 20, 35, 80, and Orig; y-axis: Images, 0–1400]


  16. Objective Measures
  • No-Reference
    – No reference image available
    – “Blind” reference
  • Reduced-Reference
    – A set of features extracted from the reference image is used
  • Full-Reference
    – A complete reference image is available


  17. MSE
  • Measures how much something changed, but not how important that change is
  • Ranges from 0 (exactly the same) to infinity
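
  For an N-pixel reference image I and distorted version K, MSE = (1/N) · Σ (I_i − K_i)², the average squared pixel difference. A minimal NumPy sketch, assuming equally sized 8-bit grayscale arrays (the function name and array conventions are illustrative, not from the slides):

      import numpy as np

      def mse(reference, distorted):
          # Average squared per-pixel difference: 0 means identical,
          # larger values mean more change, regardless of how visible
          # or important that change is.
          diff = reference.astype(np.float64) - distorted.astype(np.float64)
          return float(np.mean(diff ** 2))

  Being a full-reference measure, this needs the uncompressed original alongside the compressed result.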


  18. MSE
  JPEG Quality | MSE
  80           | 4.6
  35           | 15.1
  5            | 111.2


  19. MSE
  [Chart: distribution of MSE values, x-axis 0–240, for JPEG quality settings 5, 20, 35, 80; y-axis 0–20,000]


  20. MSE vs. User Evaluation
  [Chart: number of images vs. MSE, x-axis 0–240, grouped by user rating 1–5; y-axis: Images, 0–6000]


  21. PSNR
  • Ratio between the maximum possible signal power and the power of the corrupting noise introduced by compression
  • Measured using the logarithmic decibel scale
  • Higher values mean better quality
  • Typical values: 30–50 dB
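
  PSNR follows directly from MSE: PSNR = 10 · log10(MAX² / MSE), where MAX is the largest possible pixel value (255 for 8-bit images). A minimal sketch under the same assumptions as the MSE example above:

      import numpy as np

      def psnr(reference, distorted, max_value=255.0):
          # Peak Signal-to-Noise Ratio in decibels; higher is better.
          err = np.mean((reference.astype(np.float64)
                         - distorted.astype(np.float64)) ** 2)
          if err == 0:
              return float('inf')  # identical images
          return 10.0 * np.log10(max_value ** 2 / err)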


  22. PSNR
  JPEG Quality | PSNR (dB)
  80           | 41.5
  35           | 36.4
  5            | 27.7


  23. PSNR
  [Chart: distribution of PSNR values, x-axis 0–60 dB, for JPEG quality settings 5, 20, 35, 80; y-axis 0–12,000]


  24. PSNR vs. User Evaluation
  [Chart: number of images vs. PSNR, x-axis 0–60 dB, grouped by user rating 1–5; y-axis: Images, 0–1800]


  25. Universal Quality Index
  • Proposed by Wang and Bovik (2002)
  • Attempts to measure:
    – Loss of correlation
    – Luminance distortion
    – Contrast distortion
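
  For images x and y, the index combines the three distortions into one score: Q = 4 · cov(x,y) · mean(x) · mean(y) / ((var(x) + var(y)) · (mean(x)² + mean(y)²)), ranging over [−1, 1] with 1 meaning identical. Wang and Bovik compute Q over small sliding windows and average the results; the sketch below computes it globally for brevity (the function name is an illustrative assumption):

      import numpy as np

      def uqi(x, y):
          # Universal Quality Index (Wang & Bovik, 2002), computed over
          # the whole image rather than the sliding windows used in the
          # published metric.
          x = x.astype(np.float64).ravel()
          y = y.astype(np.float64).ravel()
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = np.mean((x - mx) * (y - my))
          # Single score folding in correlation loss, luminance
          # distortion, and contrast distortion.
          return 4 * cov * mx * my / ((vx + vy) * (mx ** 2 + my ** 2))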


  26. Universal Quality Index
  JPEG Quality | UQI
  80           | 0.943
  35           | 0.821
  5            | 0.428


  27. Universal Quality Index vs. JPEG
  [Chart: number of images vs. Universal Quality Index, x-axis 0.00–1.00, for JPEG quality settings 5, 20, 35, 80; y-axis: Images, 0–6000]


  28. UQI vs. User Evaluation
  [Chart: number of images vs. Universal Quality Index, x-axis 0.00–1.00, grouped by user rating 1–5; y-axis: Images, 0–800]


  29. Conclusion
  • Refine subjective results
  • Correlate subjective results to objective measures
    – Evaluate other published quality metrics
    – Define our own

