

SLIDE 1

Image Registration Using Mutual Information

CMPUT 615 Nilanjan Ray

slides prepared from: http://www.cse.msu.edu/~cse902/

SLIDE 2

Shannon’s Entropy

  • Let $p_i$ be the probability of occurrence of the $i$-th event
  • Information content of event $i$: $I(p_i) = \log(1/p_i)$
  • Shannon’s entropy: $H = \sum_i p_i \log(1/p_i)$

How do you interpret these formulas?
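As a concrete illustration, here is a minimal Python sketch of both formulas (NumPy assumed; the function name shannon_entropy is mine, not from the slides):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon's entropy H = sum_i p_i * log(1/p_i) for a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # by convention, events with p = 0 contribute nothing
    return np.sum(p * np.log(1.0 / p))

# Information content log(1/p_i) of a rare vs. a common event:
print(np.log(1.0 / 0.01))   # ~4.61 nats: rare event, high information
print(np.log(1.0 / 0.5))    # ~0.69 nats: common event, low information
```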

SLIDE 3

Interpretations

  • An infrequently occurring event provides more information than a frequently occurring event
  • Entropy measures the overall uncertainty of the occurrence of events in a system
  • If a histogram (probability density function) is highly peaked, entropy is low; the higher the dispersion in the histogram, the larger the entropy
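A quick numeric check of the peaked-versus-dispersed claim, reusing the shannon_entropy sketch from above (repeated here so the snippet stands alone):

```python
import numpy as np

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.sum(p * np.log(1.0 / p))

peaked  = [0.97, 0.01, 0.01, 0.01]   # highly peaked histogram
uniform = [0.25, 0.25, 0.25, 0.25]   # maximally dispersed histogram

print(shannon_entropy(peaked))    # ~0.17 nats: low uncertainty
print(shannon_entropy(uniform))   # ~1.39 nats = log(4), the maximum for 4 bins
```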

SLIDE 4

Image Registration with Shannon’s Entropy

  • Generate a 2-D joint histogram p(i, j) for the two images
  • If the two images are well registered, p(i, j) will be less dispersed, and its entropy will be low
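One plausible way to build that joint histogram in Python (np.histogram2d on the paired intensities; the bin count of 32 and the helper name joint_pdf are my choices):

```python
import numpy as np

def joint_pdf(img_a, img_b, bins=32):
    """Estimate p(i, j): normalize the 2-D histogram of corresponding
    intensity pairs taken from two images of equal size."""
    hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    return hist / hist.sum()

# Identical (well-registered) images pile mass on the diagonal;
# unrelated images spread it across the whole histogram.
rng = np.random.default_rng(0)
a = rng.random((64, 64))
p_aligned   = joint_pdf(a, a)
p_unrelated = joint_pdf(a, rng.random((64, 64)))
```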

SLIDE 5

Entropy for Image Registration

  • Using joint entropy for registration

    – Define the joint entropy to be: $H(A,B) = -\sum_{i,j} p(i,j)\,\log[p(i,j)]$
    – Images are registered when one is transformed relative to the other so as to minimize the joint entropy
    – The dispersion in the joint histogram is thus minimized
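The joint entropy itself is then a direct transcription of the formula above (a sketch; joint_pdf is the hypothetical helper from the previous snippet):

```python
import numpy as np

def joint_entropy(p_joint):
    """H(A,B) = -sum_{i,j} p(i,j) * log p(i,j) over a normalized 2-D histogram."""
    p = p_joint[p_joint > 0]        # empty bins contribute nothing
    return -np.sum(p * np.log(p))

# Registration would then search for the transformation of one image
# that minimizes joint_entropy(joint_pdf(fixed, transformed)).
```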

SLIDE 6

Definitions of Mutual Information

  • Three commonly used definitions:

    – 1) I(A,B) = H(B) - H(B|A) = H(A) - H(A|B)

      • Mutual information is the amount by which the uncertainty in B (or A) is reduced when A (or B) is known.

    – 2) I(A,B) = H(A) + H(B) - H(A,B)

      • Maximizing the mutual information is equivalent to minimizing the joint entropy (the last term)
      • The advantage of mutual information over joint entropy is that it includes the entropies of the individual inputs
      • It works better than joint entropy alone in low-contrast background regions: the joint entropy there is low, but so are the individual entropies, so the overall mutual information is also low (see the sketch of definition 2 below)
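A sketch of definition 2 computed from a joint histogram, with the marginals obtained by summing p(i, j) along each axis (the helper names are mine):

```python
import numpy as np

def entropy(p):
    """H = -sum p log p over the positive entries of any histogram array."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def mutual_information(p_joint):
    """I(A,B) = H(A) + H(B) - H(A,B) from a normalized joint histogram."""
    p_a = p_joint.sum(axis=1)       # marginal distribution of image A
    p_b = p_joint.sum(axis=0)       # marginal distribution of image B
    return entropy(p_a) + entropy(p_b) - entropy(p_joint)
```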

SLIDE 7

Definitions of Mutual Information II

    – 3) $I(A,B) = \sum_{a,b} p(a,b)\,\log \frac{p(a,b)}{p(a)\,p(b)}$

      • This definition is related to the Kullback-Leibler distance between two distributions
      • It measures the dependence of the two distributions
      • In image registration, I(A,B) will be maximized when the images are aligned
      • In feature selection, choose the features that minimize I(A,B) to ensure they are not related.
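Definition 3 transcribed directly; up to floating-point error it returns the same value as the H(A) + H(B) - H(A,B) form above (a sketch under the same joint-histogram assumptions):

```python
import numpy as np

def mutual_information_kl(p_joint):
    """I(A,B) = sum_{a,b} p(a,b) * log[ p(a,b) / (p(a) p(b)) ]."""
    p_a = p_joint.sum(axis=1, keepdims=True)   # p(a) as a column vector
    p_b = p_joint.sum(axis=0, keepdims=True)   # p(b) as a row vector
    mask = p_joint > 0                         # wherever p(a,b) > 0, p(a) and p(b) > 0 too
    return np.sum(p_joint[mask] * np.log(p_joint[mask] / (p_a * p_b)[mask]))
```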

SLIDE 8

Additional Definitions of Mutual Information

  • Two definitions exist for normalizing mutual information:

    – Normalized Mutual Information: $NMI(A,B) = \dfrac{H(A) + H(B)}{H(A,B)}$
    – Entropy Correlation Coefficient: $ECC(A,B) = 2 - \dfrac{2}{NMI(A,B)}$
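Both normalizations as one-line extensions of the earlier helpers (a sketch; entropy is as defined above):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nmi(p_joint):
    """Normalized Mutual Information: [H(A) + H(B)] / H(A,B)."""
    return (entropy(p_joint.sum(axis=1)) + entropy(p_joint.sum(axis=0))) / entropy(p_joint)

def ecc(p_joint):
    """Entropy Correlation Coefficient: 2 - 2 / NMI(A,B); 0 (independent) to 1 (identical)."""
    return 2.0 - 2.0 / nmi(p_joint)
```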

SLIDE 9

Derivation of M. I. Definitions

$$
\begin{aligned}
H(A,B) &= -\sum_{a,b} p(a,b)\,\log p(a,b), \quad \text{where } p(a,b) = p(b|a)\,p(a)\\
       &= -\sum_{a,b} p(b|a)\,p(a)\,\log[\,p(b|a)\,p(a)\,]\\
       &= -\sum_{a,b} p(a,b)\,\{\log[p(b|a)] + \log[p(a)]\}\\
       &= -\sum_{a,b} p(a,b)\,\log[p(b|a)] \;-\; \sum_{a,b} p(a,b)\,\log[p(a)]\\
       &= H(B|A) - \sum_{a} p(a)\,\log[p(a)] \qquad \Big(\textstyle\sum_{b} p(a,b) = p(a)\Big)\\
       &= H(B|A) + H(A)
\end{aligned}
$$

therefore $I(A,B) = H(A) + H(B) - H(A,B) = H(B) - H(B|A)$

SLIDE 10

Properties of Mutual Information

  • MI is symmetric: I(A,B) = I(B,A)
  • I(A,A) = H(A)
  • I(A,B) <= H(A), I(A,B) <= H(B)

– info each image contains about the other cannot be greater than the info they themselves contain

  • I(A,B) >= 0

– Cannot increase uncertainty in A by knowing B

  • If A, B are independent then I(A,B) = 0
  • If A, B are jointly Gaussian with correlation coefficient $\rho$, then:

    $I(A,B) = -\tfrac{1}{2}\log(1 - \rho^2)$
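The Gaussian case as a quick numeric check (values in nats):

```python
import numpy as np

def gaussian_mi(rho):
    """I(A,B) = -1/2 * log(1 - rho^2) for jointly Gaussian A, B with correlation rho."""
    return -0.5 * np.log(1.0 - rho**2)

print(gaussian_mi(0.0))   # 0.0: independent Gaussians share no information
print(gaussian_mi(0.9))   # ~0.83 nats; grows without bound as |rho| -> 1
```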

SLIDE 11

M.I. for Image Registration

SLIDE 12

M.I. for Image Registration

SLIDE 13

M.I. for Image Registration

SLIDE 14

M.I. Processing Flow for Image Registration

Input Images → Pre-processing → Image Transformation → Probability Density Estimation → M.I. Estimation → Optimization Scheme (which updates the transformation and loops until convergence) → Output Image

SLIDE 15

Pseudo Code

  • Initialize the transformation parameters a; I is the template image, J is the image to be registered to I.
  • Transform J to J(a)
  • Repeat
    – Step 1: Compute MI(a) = mutual information between J(a) and I
    – Step 2: Find Δa by optimization so that MI(a + Δa) > MI(a)
    – Step 3: Update the transformation parameters: a = a + Δa
    – Step 4: Transform J to J(a)

  • Until convergence

What type of optimization can be applied here?
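One common answer in practice is a gradient-free method such as Powell's direction set or simplex search, since histogram-based MI is not smooth in the parameters. Below is a runnable toy version of the pseudocode under strong simplifying assumptions: the transformation is a pure integer translation and the "optimization" is a greedy one-pixel hill climb. All names are mine, not from the slides; real registrations add interpolation, multi-resolution schemes, and better optimizers.

```python
import numpy as np

def mi(img_i, img_j, bins=32):
    """MI(I, J) = H(I) + H(J) - H(I, J) estimated from a joint histogram."""
    p, _, _ = np.histogram2d(img_i.ravel(), img_j.ravel(), bins=bins)
    p /= p.sum()
    h = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return h(p.sum(axis=1)) + h(p.sum(axis=0)) - h(p)

def register_translation(I, J, max_iter=100):
    """Greedy search for the integer shift a maximizing MI(J(a), I)."""
    a = np.array([0, 0])
    for _ in range(max_iter):
        best_da, best_mi = None, mi(I, np.roll(J, a, axis=(0, 1)))   # Step 1: MI(a)
        for da in ([1, 0], [-1, 0], [0, 1], [0, -1]):                # candidate updates Δa
            m = mi(I, np.roll(J, a + da, axis=(0, 1)))
            if m > best_mi:                                          # Step 2: MI(a + Δa) > MI(a)
                best_da, best_mi = np.array(da), m
        if best_da is None:                                          # convergence: no improving move
            break
        a = a + best_da                                              # Steps 3-4: update, re-transform
    return a

# Toy template: smoothed noise, so MI decays gradually with misalignment
# and the greedy climb has a basin to follow.
rng = np.random.default_rng(1)
base = rng.random((64, 64))
I = sum(np.roll(base, (dy, dx), axis=(0, 1))
        for dy in range(-3, 4) for dx in range(-3, 4))               # 7x7 periodic box blur
J = np.roll(I, (-3, 2), axis=(0, 1))                                 # J is a shifted copy of I
print(register_translation(I, J))                                    # typically recovers [ 3 -2 ]
```

The greedy climb can stall in a local optimum of the MI surface, which is exactly why gradient-free global-ish optimizers are the usual choice here.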