HDR images acquisition: artifacts removal - dr. Francesco Banterle - PowerPoint PPT Presentation



SLIDE 1

HDR images acquisition: artifacts removal

  • dr. Francesco Banterle

francesco.banterle@isti.cnr.it

SLIDE 2

things can go wrong…

SLIDE 3

Things can move

  • What happens if…
  • the camera moves: unstable ground, handheld photography (no tripod), etc.
  • especially bad for long-exposure images!
  • the scene is not static: moving objects, background, etc.

SLIDE 4

a moving camera…

SLIDE 5

Moving camera

  • When the camera moves (even small movements) and the scene is static, the final HDR image will be blurry

SLIDE 6

Moving camera

  • What to do?
  • Before merging, the LDR images need to be aligned to a reference
  • How to select the reference?
  • Typically, the image with the highest number of well-exposed pixels
  • Typically, images are processed in groups of three; hierarchical
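The automatic reference choice above can be sketched in a few lines; the under-/over-exposure thresholds below are illustrative assumptions, not values from the slides:

```python
import numpy as np

def select_reference(images, low=0.05, high=0.95):
    """Pick the exposure with the most well-exposed pixels.

    `images` holds the LDR exposures as float arrays in [0, 1]; the
    under-/over-exposure thresholds `low`/`high` are illustrative
    assumptions, not values from the slides.
    """
    counts = [np.sum((img > low) & (img < high)) for img in images]
    return int(np.argmax(counts))
```

The darkest and brightest exposures rarely win, since most of their pixels fall outside the well-exposed band.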

SLIDE 7

Moving camera

  • Edges can vary at different exposure times:
SLIDE 8

Median Threshold Bitmap (MTB) Alignment

  • MTB, a feature descriptor, is a binary mask:
  • compute the median luminance value, M
  • Then MTB is defined as:

MTB(x) = { 1 if L(x) > M; 0 otherwise }

  • It is exposure-time invariant!
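A minimal sketch of the descriptor, assuming RGB float images and Rec. 601 luma weights for L (an assumed, common choice):

```python
import numpy as np

def mtb(image):
    """Median Threshold Bitmap: 1 where luminance is above the median.

    `image` is an RGB float array; Rec. 601 luma weights are an assumed
    (common) choice for the luminance L. Since the median scales with
    exposure, the resulting mask is exposure-time invariant.
    """
    L = 0.299 * image[..., 0] + 0.587 * image[..., 1] + 0.114 * image[..., 2]
    M = np.median(L)
    return (L > M).astype(np.uint8)
```

Scaling the image by a constant (a different exposure, under a linear camera response) scales the median by the same factor, so the mask does not change.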
SLIDE 9

MTB Alignment

SLIDE 10

MTB Alignment

  • Hierarchical registration setup:
  • image pyramid
  • max displacement is 2^depth

SLIDE 11

MTB Alignment

  • Hierarchical registration:
  • At level n, test translations of (-1, 0, +1) in X and Y
  • Check the match with XOR
  • Repeat at level n+1, up to level depth
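The level-by-level search can be sketched as follows; subsampling by striding and the omission of the usual exclusion bitmap (pixels too close to the median) are simplifying assumptions:

```python
import numpy as np

def xor_error(a, b, dx, dy):
    """Number of disagreeing MTB bits after shifting b by (dx, dy)."""
    shifted = np.roll(np.roll(b, dy, axis=0), dx, axis=1)
    return np.count_nonzero(a ^ shifted)

def mtb_shift(ref_mtb, test_mtb, depth=4):
    """Hierarchical +/-1 translation search minimizing the XOR error.

    At each pyramid level (coarse to fine) only the nine shifts around
    the doubled coarser offset are tested, so the total displacement
    handled is on the order of 2**depth pixels. Striding is used as a
    crude downsampling, and the exclusion bitmap is omitted for brevity.
    """
    best = (0, 0)
    for level in range(depth, -1, -1):
        s = 2 ** level
        a, b = ref_mtb[::s, ::s], test_mtb[::s, ::s]
        best = min(((best[0] + dx, best[1] + dy)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)),
                   key=lambda c: xor_error(a, b, c[0], c[1]))
        if level > 0:
            best = (best[0] * 2, best[1] * 2)
    return best  # (dx, dy) to apply to test_mtb
```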

SLIDE 12

MTB Alignment: handling camera rotations

  • The basic method does not handle rotation, only image translations
  • Brute-force approach:
  • Run MTB alignment
  • Rotate the testing mask at different angles and do the XOR test; a GPU implementation is required to achieve fast results
  • Refinement: reapply MTB alignment
SLIDE 13

MTB Alignment: handling camera rotations

SLIDE 14

Local Features Alignment

  • Detect salient points in an image, i.e. corners or key-points:
  • DoG pyramid method
  • Harris corner detector
  • SUSAN corner detector
  • etc.
SLIDE 15

Local Features Alignment

  • For each key-point:
  • Compute a local descriptor of the image around it

SLIDE 16

Local Features Alignment

SLIDE 17

Local Features Alignment

  • After matching —> find a transformation H
  • H needs to map 2D coordinates between image0 and image1:
  • H has to be a homography

[x0, y0, 1]^T = H [x1, y1, 1]^T

SLIDE 18

Local Features Alignment

  • A homography is defined as:

H = [ h00 h01 h02 ; h10 h11 h12 ; h20 h21 1 ]

  • So 8 equations, i.e. at least 4 point matches, are required to estimate H:
  • better to use more points, to reduce the effect of noise
  • better to use RANSAC to reject outliers
  • estimating H requires solving a linear system + a non-linear optimization
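A minimal Direct Linear Transform sketch of the linear-system part (coordinate normalisation, RANSAC, and the final non-linear refinement are omitted):

```python
import numpy as np

def estimate_homography(p0, p1):
    """Direct Linear Transform: H mapping points of image1 onto image0.

    p0 and p1 are (N, 2) arrays of matched coordinates, N >= 4 (each
    match gives two equations). Normalisation, RANSAC, and the final
    non-linear refinement are omitted from this sketch.
    """
    A = []
    for (x0, y0), (x1, y1) in zip(p0, p1):
        A.append([x1, y1, 1, 0, 0, 0, -x0 * x1, -x0 * y1, -x0])
        A.append([0, 0, 0, x1, y1, 1, -y0 * x1, -y0 * y1, -y0])
    # The null vector of A (smallest singular value) is H up to scale.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]
```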

SLIDE 19

Local Features Alignment

  • Once H is computed, the pixels of image1 need to be warped to align with image0:

for i = 0 to height
    for j = 0 to width
        (u, v) = H [i, j, 1]^T
        image1'(i, j) = image1(u, v)
    end
end
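A nearest-neighbour version of the loop; treating (x, y) as (column, row) in the matrix product is an assumed convention:

```python
import numpy as np

def warp_to_reference(image1, H, out_shape):
    """Warp image1 toward the reference using nearest-neighbour sampling.

    For each output pixel (i, j) the source position is H [j, i, 1]^T
    (here x = column j, y = row i, an assumed convention); perspective
    division by the third coordinate handles general homographies.
    """
    out = np.zeros(out_shape, dtype=image1.dtype)
    h, w = image1.shape[:2]
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            u, v, s = H @ np.array([j, i, 1.0])
            u, v = int(round(u / s)), int(round(v / s))
            if 0 <= v < h and 0 <= u < w:
                out[i, j] = image1[v, u]
    return out
```

In practice bilinear sampling is preferred over nearest-neighbour to avoid aliasing in the warped image.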

SLIDE 20

Local Features Alignment

SLIDE 21

Local Features Alignment: failure cases

  • Homography —> planar scene
  • all objects cannot be aligned when they have different depths —> the parallax problem!

SLIDE 22

Local Features Alignment: failure cases

(figure: scene layers, Layer 0 and Layer 1)

SLIDE 23

Local Features Alignment: failure cases

SLIDE 24

a moving scene…

SLIDE 25

Ghosts

(figure: a bracketed exposure stack at 0.1305, 0.9384, 6.746, 48.5, and 348.7 lux, combined by the HDR merge)

SLIDE 26

Ghosts

SLIDE 27

Deghosting: reference-based

  • Idea: choose one LDR image as the reference, and detect ghosts against it
  • How to select it?
  • Manual: select an image which has a good (from an artistic point of view) scene composition
  • Automatic: select the image that maximizes the number of well-exposed pixels

SLIDE 28

Deghosting: reference-based

  • Now that we have a reference…
  • Weight the other exposures against the selected reference —> weights to be used in the merging:

w = a(r)^2 / ( a(r)^2 + ((p - r) / r)^2 )

a(x) = { 0.058 + 0.68(x - 0.85) if x ≤ 0.85; 0.04 + 0.12(1 - x) otherwise }

  • where r is the reference pixel value and p the corresponding pixel in the other exposure
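The weight can be sketched directly from the two formulas; `p` and `r` are assumed to be in (0, 1] with the exposures already mapped into the reference's domain:

```python
def deghost_weight(p, r):
    """Ghost weight of a pixel p against the reference value r.

    Both values are assumed in (0, 1], with the exposures already mapped
    into the reference's domain (r must be nonzero). a(r) follows the
    piecewise definition on the slide.
    """
    a = 0.058 + 0.68 * (r - 0.85) if r <= 0.85 else 0.04 + 0.12 * (1.0 - r)
    return a**2 / (a**2 + ((p - r) / r) ** 2)
```

A pixel that agrees with the reference gets weight 1; the weight falls off quickly as the relative deviation (p - r)/r grows beyond the tolerance a(r).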
SLIDE 29

Deghosting: reference-based

(plot: the weight function w for a = 0.058)

SLIDE 30

Deghosting: reference-based

(comparison: without deghosting vs. with deghosting)

SLIDE 31

Deghosting: MTB-based

  • Idea: the MTB descriptor is exposure-invariant
  • How to select the reference?
  • Manual: select an image which has a good (from an artistic point of view) scene composition
  • Automatic: select the image that maximizes the number of well-exposed pixels

SLIDE 32

Deghosting: MTB-based

SLIDE 33

Deghosting: MTB-based

ghost(i, j) = { 1 if 0 < M(i, j) < N; 0 otherwise }

  • where M(i, j) is the sum of the N exposures' MTB masks at pixel (i, j)
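A sketch of the ghost mask, assuming M is the per-pixel sum of the N binary MTB masks:

```python
import numpy as np

def ghost_map(mtbs):
    """Mark pixels whose MTB bit disagrees across the exposures.

    `mtbs` is the list of N binary MTB masks; M is their per-pixel sum,
    so 0 < M < N means the exposures do not all agree there.
    """
    M = np.sum(mtbs, axis=0)
    N = len(mtbs)
    return ((M > 0) & (M < N)).astype(np.uint8)
```

Since MTB is exposure-invariant, any disagreement between the masks is attributed to scene motion rather than to the different exposure times.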
SLIDE 34

Deghosting: MTB-based

SLIDE 35

Deghosting: MTB a glimpse

  • Give higher weights to better-exposed blocks

(comparison: without deghosting vs. with deghosting)

SLIDE 36

Deghosting: other approaches

  • Other approaches to deghosting:
  • Background extraction: many exposure images are needed to achieve good-quality results
  • Optical Flow
SLIDE 37

What to do?

  • When everything moves, there is a typical strategy:
  • First step: global alignment estimation (MTB, Local Features, etc.)
  • Second step: removing ghosts with a ghost-removal technique
  • This approach may be suboptimal, not solving the whole problem

SLIDE 38

lens flare…

SLIDE 39

Veiling Glare

  • Camera optics (lenses) are generally designed for:
  • 2-3 orders of magnitude
  • 24-bit sensors or 35mm film
SLIDE 40

Veiling Glare

(diagram: scene, lens, and image sensor)

SLIDE 41

Veiling Glare

(diagram: scene, lens, and image sensor, with stray light paths)

SLIDE 42

Veiling Glare

  • OK, we have more light than there should be… what is the real problem?
  • It reduces the dynamic range of the scene!
SLIDE 43

Veiling Glare

SLIDE 44

Veiling Glare: A Capturing Approach

  • Characterization of the glare of a particular camera
  • Special glare capturing
  • Glare removal
SLIDE 45

Veiling Glare: Characterization

  • Measuring the glare of a camera at a given aperture:
  • dark room
  • point light source; e.g. LED
  • capturing an HDR image
SLIDE 46

Veiling Glare: Characterization

(plot: PSF vs. pixel distance)

SLIDE 47

Veiling Glare: Acquisition

  • Place a glare-blocking mask in front of the camera, e.g. a 30x30 mask
  • Move the mask in the X and Y planes
  • 6x6 HDR captures —> a lot of data!
SLIDE 48

Veiling Glare: capturing approach

SLIDE 49

Veiling Glare: capturing approach

SLIDE 50

Veiling Glare: capturing approach

SLIDE 51

Veiling Glare: capturing approach

SLIDE 52

Veiling Glare: capturing approach

SLIDE 53

Veiling Glare: capturing approach

SLIDE 54

Veiling Glare: glare removal

(diagram: Scene —> Mask —> PSF —> Recorded Image)

For removing glare, this process has to be inverted!

SLIDE 55

Veiling Glare: results

from the paper “Veiling Glare in High Dynamic Range Imaging”, Eino-Ville Talvala, Andrew Adams, Mark Horowitz, Marc Levoy. ACM SIGGRAPH 2007 Papers Program.

SLIDE 56

Veiling Glare: a post-processing approach

  • The previous method produces high quality results!
  • There are some disadvantages:
  • Many pictures to take
  • The scene has to be static
  • Characterization of the PSF of the camera
SLIDE 57

Veiling Glare: a post-processing approach

  • Main steps:
  • Estimate the PSF
  • Generate the glare image
  • Remove the glare image
SLIDE 58

Veiling Glare: PSF Estimation

  • Compute image luminance, L
  • Threshold L to identify:
  • hot pixels (bright ones): the sources of glare
  • dark pixels: the “veiled” ones
SLIDE 59

Veiling Glare: PSF Estimation

SLIDE 60

Veiling Glare: PSF Estimation

  • Each dark (veiled) pixel Pi is modeled as a sum over the hot pixels Pj:

Pi = Σj Pj ( C0 + C1 rij + C2 rij^2 + C3 rij^3 )

  • which is linear in the coefficients:

Pi = C0 Σj Pj + C1 Σj Pj rij + C2 Σj Pj rij^2 + C3 Σj Pj rij^3

  • where rij is the distance between the hot pixel Pj and the dark pixel Pi
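Stacking one such equation per dark pixel gives an overdetermined linear system in (C0, C1, C2, C3), solvable by least squares; a sketch with a hypothetical array-based interface:

```python
import numpy as np

def fit_psf_coeffs(hot_vals, hot_pos, dark_vals, dark_pos):
    """Least-squares fit of the cubic PSF coefficients C0..C3.

    hot_vals/hot_pos: values and (y, x) positions of the hot pixels Pj;
    dark_vals/dark_pos: the same for the veiled pixels Pi. Each dark
    pixel contributes one row [sum Pj, sum Pj*r, sum Pj*r^2, sum Pj*r^3].
    (The array-based interface is an assumption, not from the slides.)
    """
    A = []
    for pos in dark_pos:
        r = np.linalg.norm(hot_pos - pos, axis=1)  # distances r_ij
        A.append([np.sum(hot_vals * r**k) for k in range(4)])
    C, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(dark_vals), rcond=None)
    return C
```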

SLIDE 61

Veiling Glare: PSF Estimation

(plot: estimated PSF vs. pixel distance)

SLIDE 62

Veiling Glare: Removing Glare

  • Input: Icr (the image with glare), the PSF
  • Output: Iout (the glare-free image)
  • Algorithm:
  • Create a black image, Fcr
  • For each hot pixel in Icr, multiply it by the PSF and add the contribution to Fcr
  • Iout = Icr - Fcr
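A sketch of the removal step; the hot-pixel threshold and the handling of the PSF's own central peak are simplifying assumptions:

```python
import numpy as np

def remove_glare(I, psf, hot_thr=0.95):
    """Glare removal: splat each hot pixel with the PSF into F, subtract.

    `psf(r)` returns the glare falloff at pixel distance r; the hot-pixel
    threshold and the treatment of the PSF's own central peak are
    simplifying assumptions for this sketch.
    """
    F = np.zeros_like(I)
    yy, xx = np.indices(I.shape)
    for y, x in zip(*np.nonzero(I > hot_thr)):
        r = np.hypot(yy - y, xx - x)
        F += I[y, x] * psf(r)         # contribution of this hot pixel
    return np.clip(I - F, 0.0, None)  # Iout = Icr - Fcr, clamped at 0
```

Clamping at zero avoids negative radiance where the cubic PSF model overestimates the glare.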
SLIDE 63

Veiling Glare: Glare Image

SLIDE 64

Veiling Glare: Removing Glare

SLIDE 65

Questions?