

SLIDE 1

Mutual Information

SLIDE 2

Example - SSD

[Figure: reference R and image I, with the SSD score surface over displacements (color scale 1–6 ×10⁵)]

SLIDE 3

Mutual Information Summary

  • Statistical tool for measuring the dependence of two variables.
  • Used as a tool for scoring similarity between data sets.
SLIDE 4

Sum of Squared Differences (SSD)

  • SSD is optimal in the maximum-likelihood (ML) sense when:
  • 1. The constant brightness assumption holds
  • 2. The noise is additive Gaussian

$$\mathrm{SSD}(u,v)=\sum_{(x,y)\in A}\bigl[I(x+u,\,y+v)-R(x,y)\bigr]^2$$
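A minimal NumPy sketch of this score, assuming a grayscale reference R, a larger search image I, and an exhaustive search over integer displacements (the function names and window logic are illustrative, not from the slides):

```python
import numpy as np

def ssd(I, R, u, v):
    """SSD between reference R and the patch of I shifted by (u, v)."""
    h, w = R.shape
    patch = I[v:v + h, u:u + w].astype(np.float64)
    return np.sum((patch - R.astype(np.float64)) ** 2)

def ssd_surface(I, R):
    """SSD score for every valid integer displacement (u, v).
    The best alignment is the argmin of the returned surface."""
    H, W = I.shape
    h, w = R.shape
    S = np.empty((H - h + 1, W - w + 1))
    for v in range(S.shape[0]):
        for u in range(S.shape[1]):
            S[v, u] = ssd(I, R, u, v)
    return S
```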

SLIDE 5

SSD Optimality

For each pixel:

$$I(x+u,\,y+v)\;\sim\;\mathcal{N}\bigl(R(x,y),\,\sigma_n^2\bigr)$$

$$P\bigl(I(x+u,y+v)\mid R(x,y),u,v\bigr)=\frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\left(-\frac{\bigl[I(x+u,y+v)-R(x,y)\bigr]^2}{2\sigma_n^2}\right)$$

SLIDE 6

SSD Optimality – cont.

For all pixels in area A (assuming independent noise per pixel):

$$P(I\mid R,u,v)=\prod_{(x,y)\in A}P\bigl(I(x+u,y+v)\mid R(x,y),u,v\bigr)=\prod_{(x,y)\in A}\frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\left(-\frac{\bigl[I(x+u,y+v)-R(x,y)\bigr]^2}{2\sigma_n^2}\right)$$

$$\log P(I\mid R,u,v)=\mathrm{const}-\frac{1}{2\sigma_n^2}\sum_{(x,y)\in A}\bigl[I(x+u,y+v)-R(x,y)\bigr]^2$$

$$\max_{u,v}\,\log P(I\mid R,u,v)\;\Longleftrightarrow\;\min_{u,v}\sum_{(x,y)\in A}\bigl[I(x+u,y+v)-R(x,y)\bigr]^2$$
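A quick numeric check of this equivalence (a sketch with made-up toy arrays and noise level sigma_n; since the log-likelihood is a constant minus SSD/(2σ²), lower SSD always pairs with higher log-likelihood):

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.uniform(0, 255, (8, 8))
sigma_n = 5.0

def log_likelihood(I_shifted, R, sigma_n):
    """Gaussian log-likelihood of an aligned image patch given R."""
    resid = I_shifted - R
    const = -resid.size * np.log(np.sqrt(2 * np.pi) * sigma_n)
    return const - np.sum(resid ** 2) / (2 * sigma_n ** 2)

# A near-correct alignment vs. a wrong one: the SSD ordering and
# the log-likelihood ordering come out opposite.
I_good = R + rng.normal(0, sigma_n, R.shape)
I_bad = rng.uniform(0, 255, (8, 8))
for I_s in (I_good, I_bad):
    print(np.sum((I_s - R) ** 2), log_likelihood(I_s, R, sigma_n))
```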

SLIDE 7

Example - SSD

[Figure: the SSD example again; reference R, image I, and the SSD score surface (color scale 1–6 ×10⁵)]

SLIDE 8

Normalized Cross-Correlation

  • NCC is optimal in the maximum-likelihood (ML) sense when:
  • 1. There is a linear relationship between the images
  • 2. The noise is additive Gaussian

$$\mathrm{NCC}(u,v)=\frac{\sum_{(x,y)\in A}\tilde I(x+u,y+v)\,\tilde R(x,y)}{\sqrt{\sum_{(x,y)\in A}\tilde I(x+u,y+v)^2\,\sum_{(x,y)\in A}\tilde R(x,y)^2}}$$

where $\tilde I=I-\bar I$ and $\tilde R=R-\bar R$ are the mean-subtracted images.
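A minimal sketch of this score for one candidate patch, assuming the patch has already been cropped to the size of R (names are illustrative):

```python
import numpy as np

def ncc(I_patch, R):
    """Normalized cross-correlation using the mean-subtracted
    images I~ and R~ defined above."""
    It = I_patch - I_patch.mean()
    Rt = R - R.mean()
    return np.sum(It * Rt) / np.sqrt(np.sum(It ** 2) * np.sum(Rt ** 2))
```

Because the means are subtracted and the result is normalized by the energies, any patch of the form aR + b with a > 0 scores exactly 1.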

SLIDE 9

Example - NCC

[Figure: matching R against I’ = 0.5·I + 30; the SSD score surface (scale ×10⁵) and the NCC score surface (values 0.2–0.8), with the true location marked on the NCC peak]
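The effect in the figure can be sketched numerically, assuming perfectly aligned toy images (the ncc helper is the one sketched on the previous slide):

```python
import numpy as np

rng = np.random.default_rng(1)
R = rng.uniform(0, 255, (16, 16))
I2 = 0.5 * R + 30  # the linear intensity change from the slide

# SSD is large even though the images are perfectly aligned,
# while NCC is exactly 1 (up to rounding).
print(np.sum((I2 - R) ** 2))
print(ncc(I2, R))
```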

SLIDE 10

The Joint Histogram

Axes: intensity of the reference (x) against intensity of the transformed target (y). The SSD optimum is the identity line y = x; the NCC optimum is any line y = ax + b.
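A minimal sketch of building this histogram, assuming two grayscale images of the same size (the bin count is an arbitrary choice):

```python
import numpy as np

def joint_histogram(A, B, bins=32):
    """2D histogram of reference intensities against transformed-target
    intensities, normalized to a joint pmf p_AB."""
    hist, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    return hist / hist.sum()
```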

SLIDE 11

Classic Use of Mutual Information in Registration of Data Sets

MRI, CT, ANGIO, PET, ATLAS, EEG

SLIDE 12

Cross Modality Registration

[Diagram: volumes Vo and Uo related by a transformation T, scored by a quality measure Q(Vo, Uo, T) and refined iteratively: T = iter(T, Q)]

How well can Vo determine Uo? Do they share common information?
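A minimal sketch of the T = iter(T, Q) loop, assuming a translation-only transform and a generic quality callback Q (all names here are illustrative; real registration uses richer transforms and an actual optimizer):

```python
import numpy as np

def register(Vo, Uo, Q, n_iter=100, step=1):
    """Greedy hill-climbing on a 2D integer translation T,
    scored by the quality measure Q(Vo, Uo, T)."""
    T = np.array([0, 0])
    for _ in range(n_iter):
        moves = ([0, 0], [step, 0], [-step, 0], [0, step], [0, -step])
        T = max((T + np.array(d) for d in moves),
                key=lambda t: Q(Vo, Uo, t))
    return T
```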

SLIDE 13

Information Theory - Entropy

Shannon, “A Mathematical Theory of Communication”, 1948

[Figure: two intensity histograms; a wide distribution with H = 7.4635 and a concentrated one with H = 0]

$$H(A)=-\sum_{a\in A}p_A(a)\,\log p_A(a)$$

Wide distribution → high entropy → uncertainty
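A minimal sketch of this formula on an image, using log base 2 so a flat 8-bit histogram tops out at 8 bits (the H = 7.4635 on the slide is consistent with base 2):

```python
import numpy as np

def entropy(A, bins=256):
    """Shannon entropy H(A) in bits, from the intensity histogram."""
    counts, _ = np.histogram(A.ravel(), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # by convention 0 * log(0) = 0
    return -np.sum(p * np.log2(p))
```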

SLIDE 14

Joint Entropy

$$H(A,B)=-\sum_{a\in A}\sum_{b\in B}p_{AB}(a,b)\,\log p_{AB}(a,b)$$

[Figure: two joint histograms p_AB, with joint entropies H(A, B) = 7.399 and 11.731]

Higher entropy → more uncertainty → lower mutual information value
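The same sketch extended to the joint pmf p_AB (the bin count is again an arbitrary choice):

```python
import numpy as np

def joint_entropy(A, B, bins=256):
    """Joint Shannon entropy H(A, B) in bits."""
    counts, _, _ = np.histogram2d(A.ravel(), B.ravel(), bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```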

SLIDE 15

Information Theory – Mutual Information

$$H(A,B)=-\sum_{a\in A}\sum_{b\in B}p_{AB}(a,b)\,\log p_{AB}(a,b)$$

$$I(A,B)=H(A)+H(B)-H(A,B)=H(A)-H(A\mid B)=H(B)-H(B\mid A)$$

[Venn diagram: circles H(A) and H(B) overlapping in I(A, B)]

For two variables we define the joint entropy in a similar way. High mutual information → shared information → how much entropy we lose because the variables are coupled.

SLIDE 16

[Venn diagrams: A and B overlapping (matching) vs. A and B disjoint (non-matching)]

$$I(A,B)=H(A)+H(B)-H(A,B)$$

The closer the relationship between A and B is to 1:1, the higher the MI.
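Putting the three terms together, reusing the entropy and joint_entropy sketches from the earlier slides:

```python
def mutual_information(A, B, bins=256):
    """I(A, B) = H(A) + H(B) - H(A, B)."""
    return entropy(A, bins) + entropy(B, bins) - joint_entropy(A, B, bins)
```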

SLIDE 17

[Figure: joint histograms under transformations T and T′; poor match vs. good match]

SLIDE 18

Comparing an image to itself results in the entropy of the image → MI = 5.53. It cannot get any higher than that!
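This upper bound is easy to verify with the mutual_information sketch above: comparing any image with itself gives a diagonal joint histogram, so H(A, A) = H(A) and therefore I(A, A) = H(A):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64))
print(mutual_information(img, img))  # equals...
print(entropy(img))                  # ...the image's own entropy
```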

SLIDE 19

SLIDE 20
SLIDE 21

MRI, CT, ANGIO, PET: MI is most useful in cross-modality registration, where basic features may not correspond in their actual intensity values.

SLIDE 22

Similar Regions / Symmetry Lines

SLIDE 23

Mutual Information Summary

  • Statistical tool for measuring the dependence of two variables.
  • Used as a tool for scoring similarity between data sets.