SLIDE 1
Spatially varying metrics in diffeomorphic image registration
Laurent Risser¹, François-Xavier Vialard²
¹ CNRS, Institut de Mathématiques de Toulouse; ² Université Paris Dauphine, CEREMADE
February 2015, Vienna
SLIDE 2
Introduction – brief overview of the LDDMM formalism
Inputs:
- Template (or source) image T
- Target image In
- A metric V which parameterizes how to compare the images
Output:
- Velocity field v(t), t ∈ [0,1], which encodes the optimal mapping between T and In
Minimized energy: [Beg et al, IJCV 2005]
Registration algorithm: [Beg et al, IJCV 2005]
- Initialize v(t) to 0
- Perform a gradient descent with the gradient …
- … and the momentum
where Tt and It are T and In transported to time t through Φ, respectively.
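For reference, the standard expressions from [Beg et al, IJCV 2005], rewritten in the notation of this slide (σ weights the matching term; the bracketed quantity plays the role of the momentum mentioned above):

```latex
% Standard LDDMM energy and gradient from [Beg et al, IJCV 2005], in the slide's notation
% (T_t and I_t are T and I_n transported to time t, sigma weights the matching term):
E(v) = \int_0^1 \| v_t \|_V^2 \, dt
     + \frac{1}{\sigma^2}\, \| T \circ \Phi_{1,0} - I_n \|_{L^2}^2 ,
\qquad
(\nabla_v E)_t = 2\, v_t
     - K \star \underbrace{\Big( \tfrac{2}{\sigma^2}\, |D\Phi_{t,1}| \,(T_t - I_t)\, \nabla T_t \Big)}_{\text{momentum}} .
```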
SLIDE 3
Introduction – Motivations
Influence of the kernel K [Risser et al, TMI 2011]: grey matter at 36 weeks (T) registered to grey matter at 43 weeks (In); deformations obtained using different kernels K.
“Best” K? Joint work with F.X. Vialard and other collaborators:
- [Risser et al, TMI 2011], [Bruveris et al, SIAM MMS 2012]: Sum of Gaussian kernels
- [Risser et al, MedIA 2012]: Sliding motion
- [Schmah et al, MICCAI 2013]: Mathematical interpretation of LDDMM with spatially-varying metrics
- [Vialard and Risser, MICCAI 2014]: A general framework for learning spatially-varying metrics
- [Ongoing work]: Efficient schemes to learn spatially-varying metrics on real 3D data
SLIDE 4
Introduction – Talk overview
Talk overview:
1) From LDDMM to LIDM – interpreting spatially varying metrics in LDDMM
2) A general framework for spatially varying metrics
   a) Motivations
   b) General framework
3) Learning spatially varying metrics
   a) A very first strategy
   b) Strategy of [Vialard & Risser, MICCAI 2014]
   c) A new, faster strategy
   d) Reducing the problem’s dimension
4) Results
SLIDE 5
1) From LDDMM to LIDM - Interpreting spatially varying metrics in LDDMM
Energy minimized in LDDMM and LIDM:
- LDDMM: spatial (Eulerian) velocity
- LIDM: convective velocity
Consider a point x at time 0 (in image I):
- LDDMM: motion of x at time t → driven by v(t) at point Φt(x)
- LIDM: motion of x at time t → driven by v(t) at point x
→ Diffeomorphic registration formalism adapted to spatially varying metrics
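A sketch of the distinction, using the usual definitions of the spatial and convective velocities (these flow equations are assumed standard background, not formulas taken from the slide):

```latex
% LDDMM: the flow is advected by the spatial (Eulerian) velocity;
% LIDM: the flow is driven by the convective velocity, evaluated at the reference point x.
\text{LDDMM:}\quad \partial_t \Phi_t(x) = v_t\big(\Phi_t(x)\big),
\qquad
\text{LIDM:}\quad \partial_t \Phi_t(x) = D\Phi_t(x)\, v_t(x),
\qquad \Phi_0 = \mathrm{Id}.
```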
SLIDE 6
1) From LDDMM to LIDM - Interpreting spatially varying metrics in LDDMM
As shown in [Schmah et al, MICCAI 2013], for a given metric:
- The intermediate diffeomorphisms (the deformation path) differ between LIDM and LDDMM
- The final deformation is the same for LIDM and LDDMM
(Deformation snapshots at t = 0, 0.25, 0.5, 0.75, 1)
→ Final deformations of LDDMM registration with spatially varying metrics can be interpreted as what would be obtained using LIDM with the same metric (although the deformation path doesn’t make sense in LDDMM)
Motion of x at time t → smoothed at point x and not at Φt(x):
- Clearly not adapted for very large deformations
- OK for small to reasonably large deformations
SLIDE 7
1) From LDDMM to LIDM - Interpreting spatially varying metrics in LDDMM
Results obtained with LIDM in [Schmah et al, MICCAI 2013], using very intuitive parameters … let’s now define metrics which make sense for real images.
C++ code can be freely downloaded at: http://sourceforge.net/projects/utilzreg/ (with multi-kernel LDDMM, Geodesic shooting, …)
SLIDE 8
2) A general framework for spatially varying metrics - motivation
Typical application: Registration of 3D MR brain images
SLIDE 9
2) A general framework for spatially varying metrics - motivation
Spatially homogeneous smoothing kernels:
- Typically, smoothing using ad hoc Gaussian kernels (or sums of Gaussian kernels)
- Pros: a limited number of kernels to tune, relatively intuitive parameter tuning.
- Cons: Not necessarily adapted to the different brain structures
SLIDE 10
2) A general framework for spatially varying metrics - motivation
As discussed e.g. in [Simpson et al, MICCAI 2013], it is interesting to favour deformations more or less strongly depending on their location.
SLIDE 11
2) A general framework for spatially varying metrics - motivation
Smoothing with preferential directions has also been well explored in classic registration algorithms (e.g. optical flow)
SLIDE 12
2) A general framework for spatially varying metrics - motivation
Strategy we will explore in this talk:
- Learning relations between the motion of pairs of points
- Using these relations to smooth the deformations
Local relations
SLIDE 13
2) A general framework for spatially varying metrics - motivation
Figure annotations: slightly related motion / no relation; local relations / long distance relations
SLIDE 14
2) A general framework for spatially varying metrics - motivation
Figure annotations: motion in opposite directions / slightly related motion / no relation; local relations / long distance relations / direction-based relations
SLIDE 15
2) A general framework for spatially varying metrics – general overview
Mathematical property the smoothing kernel must ensure: the Hilbert space of vt has to be embedded in the Banach space of C1 vector fields.
Instead of a spatially-homogeneous K, we propose to use K̂ = K M K, where:
- K is a spatially-homogeneous kernel (typically Gaussian)
- M is a positive-semidefinite operator
Same registration algorithm as in [Beg et al, IJCV 2005], except the smoothing part:
- Initialize v(t) to 0
- Perform a gradient descent with the gradient …
- … and the momentum
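A minimal argument for why this construction is admissible, assuming K is symmetric and positive (e.g. Gaussian) and M positive semidefinite, as stated above:

```latex
% Sketch: K M K is positive semidefinite whenever M is (K symmetric):
\langle p, \hat{K} p \rangle
  = \langle p, K M K \, p \rangle
  = \langle K p, \; M (K p) \rangle \;\ge\; 0,
\qquad\text{with } v_t = \hat{K}\, p_t .
```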
SLIDE 16
2) A general framework for spatially varying metrics – general overview
How to smooth the momentum Pt with K̂?
Smoothing Pt with K̂:
- Convolution with K
- Smoothing with M
- Convolution with K again
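A minimal numerical sketch of this three-step smoothing, assuming K is an isotropic Gaussian and M acts on the vectorized field; function and variable names are illustrative and not taken from the released code:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_momentum(P, M, sigma):
    """Apply K_hat = K M K to a momentum field P (three-step smoothing).

    P     : array (3, nx, ny, nz), one 3D scalar map per vector component
    M     : dense (3*nvox, 3*nvox) positive-semidefinite matrix (illustrative;
            far too large in practice, see the control-point slides later on)
    sigma : standard deviation (in voxels) of the homogeneous Gaussian K
    """
    # 1) convolution with K (component-wise Gaussian smoothing)
    KP = np.stack([gaussian_filter(P[c], sigma) for c in range(3)])
    # 2) smoothing with M: vectorize, multiply, recompose
    MKP = (M @ KP.reshape(-1)).reshape(KP.shape)
    # 3) convolution with K again
    return np.stack([gaussian_filter(MKP[c], sigma) for c in range(3)])
```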
SLIDE 17
2) A general framework for spatially varying metrics – general overview
Smoothing Pt with K̂: convolution with K, smoothing with M, convolution with K again.
Illustration: same properties everywhere in the image domain; same properties in directions x, y, z.
SLIDE 18
2) A general framework for spatially varying metrics – general overview
Smoothing Pt with K̂: convolution with K, smoothing with M, convolution with K again.
Illustration: different properties in the image domain; different properties in directions x, y, z (figure legend: negative relation / positive relation).
→ Local deformations in a given direction may be related to deformations in other directions and at other locations.
SLIDE 20
2) A general framework for spatially varying metrics – general overview
Consider a pair of voxels pi = (xi, yi, zi) and pj = (xj, yj, zj).
Matrix M: a 3N×3N matrix whose rows and columns index the x components (1 … N), y components (N+1 … 2N) and z components (2N+1 … 3N) of the field at the N voxels; e.g. the entry M(ux(pj), uy(pi)) couples the x component of the field at pj with the y component at pi.
Smoothing a 3D vector field Γt with M: vectorize Γt → matrix/vector multiplication → recompose the vector field.
Important note: ideally, M should be normalized.
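A small sketch of this indexing convention (0-based indices, illustrative names): component c ∈ {x = 0, y = 1, z = 2} of voxel i maps to row/column c·N + i of M, so the coupling between ux(pj) and uy(pi) sits at (0·N + j, 1·N + i):

```python
import numpy as np

def flat_index(component, voxel, N):
    """Row/column of M for field component (0 = x, 1 = y, 2 = z) at a given voxel."""
    return component * N + voxel

N = 4                                   # toy number of voxels
M = np.eye(3 * N)                       # start from the identity coupling
i, j, weight = 1, 2, 0.5                # illustrative voxel pair and coupling weight

# couple u_x(p_j) with u_y(p_i), keeping M symmetric
M[flat_index(0, j, N), flat_index(1, i, N)] = weight
M[flat_index(1, i, N), flat_index(0, j, N)] = weight

# smoothing a vector field Gamma_t (shape (3, N)): vectorize -> multiply -> recompose
Gamma = np.random.randn(3, N)
Gamma_smoothed = (M @ Gamma.reshape(-1)).reshape(3, N)
```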
SLIDE 21
3) Learning matrix M – A first strategy
Consider a template image T defined on a (discrete) domain Ω and N reference images In.
First strategy:
1. Register T to the N images In with M equal to the identity (equivalent to [Beg et al, IJCV 2005])
2. Average each estimated velocity field vn(t,x) over the temporal axis → vn(x)
3. Compute the matrix M from the N time-averaged velocity fields vn(x)
Pros:
- Relatively simple to implement
- Relations between the motions of pairs of points are taken into account
Cons:
- No regularisation, and therefore no prior, on M
- Particularly dependent on K
- At some locations, there may be no information
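A hedged sketch of this first strategy. The slide does not specify how M is built from the time-averaged fields vn(x), so the empirical covariance below is only one plausible choice, and register_lddmm is a hypothetical stand-in for a standard LDDMM registration with M = Id:

```python
import numpy as np

def learn_M_first_strategy(T, images, register_lddmm):
    """First strategy: build M from velocity fields estimated with M = Id.

    register_lddmm(T, I) is a hypothetical stand-in returning a velocity field
    v of shape (n_timesteps, 3, nvox) for the registration of T to I.
    """
    fields = []
    for I in images:
        v = register_lddmm(T, I)          # 1) registration with M = Id
        v_bar = v.mean(axis=0)            # 2) average over the temporal axis
        fields.append(v_bar.reshape(-1))  # vectorize to length 3*nvox
    V = np.stack(fields)                  # shape (N, 3*nvox)
    # 3) one plausible (assumed) choice: empirical covariance of the fields
    return np.cov(V, rowvar=False)
```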
SLIDE 22
3) Learning matrix M – strategy of [Vialard & Risser, MICCAI 2014]
Energy used to register T and In in [Beg et al, IJCV 2005]: here we use K̂ = K M K instead of K. Considering the template T and the N reference images In, we learn M by minimising the sum of these registration energies over the N image pairs, together with a regularisation term on M (detailed on the next slide).
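A hedged reconstruction of the general shape of this learning problem (β is an assumed trade-off weight, consistent with the weights β mentioned in the results section; the exact form may differ from the paper):

```latex
% General shape of the learning functional (reconstruction, not the exact slide formula):
\min_{M \succeq 0}\;
  \sum_{n=1}^{N} \min_{v^n}
   \left(
     \int_0^1 \| v^n_t \|_{\hat{K}}^2 \, dt
     + \frac{1}{\sigma^2}\, \| T \circ \Phi^{v^n}_{1,0} - I_n \|_{L^2}^2
   \right)
  \;+\; \beta\, \| \log(M) \|^2 ,
\qquad \hat{K} = K M K .
```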
SLIDE 23
3) Learning matrix M – strategy of [Vialard & Risser, MICCAI 2014]
Learning algorithm:
1. Initialize M to the identity
2. While M has not converged, repeat:
   a. Register T to the N images In with the current M
   b. Compute the gradient of the functional with respect to M
   c. Update M
   d. Normalize M
Remark: the term || log(M) ||² comes from a Riemannian metric denoted by g [Arsigny et al, SIAM MAA]; we consider S⁺⁺ endowed, at a point S, with the inner product tr(S⁻¹ dS S⁻¹ dS). We can show an explicit expression for the corresponding gradient, but its evaluation becomes a bottleneck when M gets large!
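For context, a standard fact that connects the regulariser to this metric (not stated explicitly on the slide):

```latex
% Squared geodesic distance from the identity on S^{++} for the metric tr(S^{-1}dS\,S^{-1}dS):
d(\mathrm{Id}, M)^2 \;=\; \| \log(M) \|_F^2 .
```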
SLIDE 24
3) Learning matrix M – a new strategy
Instead of working on the space of symmetric positive-semidefinite matrices, we work on the larger space Mn(ℝ) of square matrices.
The map Π : Mn(ℝ) → Sn⁺ defined by Π(M) = MMᵗ is a Riemannian submersion between:
- Mn(ℝ) endowed with the L2 metric
- Sn⁺ endowed with the Wasserstein metric (this is what defines the metric)
The optimum is not changed, since the functional is invariant with respect to right multiplication by On(ℝ).
Using this new regularising term, the gradient is computed with respect to N, and the corresponding kernel is M = NᵀN.
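For reference, the metric induced on Sn⁺ by this submersion is the Bures-Wasserstein distance (standard identity, quoted here as context rather than slide content):

```latex
% Bures-Wasserstein distance on S_n^+, induced by \Pi(M) = M M^t from the L^2 metric on M_n(R):
d_W(S_1, S_2)^2
  \;=\; \operatorname{tr}(S_1) + \operatorname{tr}(S_2)
   \;-\; 2\, \operatorname{tr}\!\Big( \big( S_1^{1/2}\, S_2\, S_1^{1/2} \big)^{1/2} \Big).
```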
SLIDE 25
3) Learning matrix M – a new strategy
New learning algorithm:
1. Initialize N to the identity
2. While N has not converged, repeat:
   a. M = NᵀN
   b. Register T to the N images In with the current M
   c. Compute the gradient of the functional with respect to N
   d. Update N
   e. Normalize N
Note:
- The Wasserstein metric can degenerate, which is not the case for the Log-Euclidean metric.
- In our tests, this has not been observed so far, provided reasonably small ε are chosen.
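A hedged sketch of this outer loop. register_with_kernel and grad_wrt_N are hypothetical stand-ins (the gradient expression is not reproduced here), and the Frobenius normalisation of N is only an assumed choice:

```python
import numpy as np

def learn_N(T, images, register_with_kernel, grad_wrt_N, dim, eps=1e-2, n_iter=20):
    """Gradient descent on N; the learned kernel matrix is M = N^T N.

    register_with_kernel(T, I, M) and grad_wrt_N(registrations, N) are
    hypothetical stand-ins for the registration step and for the gradient
    of the learning functional with respect to N.
    """
    N_mat = np.eye(dim)                                         # 1) initialize N to the identity
    for _ in range(n_iter):                                     # 2) iterate until convergence
        M = N_mat.T @ N_mat                                     # a) current kernel matrix
        regs = [register_with_kernel(T, I, M) for I in images]  # b) register T to each In
        G = grad_wrt_N(regs, N_mat)                             # c) gradient w.r.t. N
        N_mat = N_mat - eps * G                                 # d) update N
        N_mat /= np.linalg.norm(N_mat)                          # e) normalize (assumed: Frobenius)
    return N_mat
```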
SLIDE 26
3) Learning matrix M – Keeping the problem dimension reasonable
For 3D medical images, M or N can be a huge matrix!!!
→ A solution:
- Define M on control points homogeneously sampled on the image domain
- Project K ★ Pn(t) on a basis
Reconstruction of the velocity field:
What is used to learn M or N:
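A minimal sketch of this dimension reduction, assuming the projection simply evaluates the K-smoothed field on a coarse, homogeneously sampled control-point grid; the actual basis used in the method may differ:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def project_on_control_points(P, sigma, step):
    """Smooth a momentum field with the Gaussian K, then keep only control points.

    P     : (3, nx, ny, nz) momentum field
    sigma : standard deviation of K, in voxels
    step  : grid step (in voxels) between homogeneously sampled control points
    """
    KP = np.stack([gaussian_filter(P[c], sigma) for c in range(3)])
    # assumed projection: plain sub-sampling of K * P on the control-point grid
    return KP[:, ::step, ::step, ::step]
```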
SLIDE 27
3) Learning matrix M – Keeping the problem dimension reasonable
Learning M or N at a scale constrained by the grid step size … and registering the images at a lower scale: instead of using the kernel K̂ directly, we use a modified kernel built with a projection operator. Information is lost at scales smaller than the grid step size.
SLIDE 28
4) Results – 3D brain images
Learning step:
- Subject 5 of the LPBA40 dataset registered to subjects 1 to 30
- M learned on a grid with a step size of 20mm
- K is a Gaussian kernel with σ = 10 mm
- Different weights β for each strategy
SLIDE 29
4) Results – 3D brain images
Registration:
- Subject 5 of the LPBA40 dataset registered to subjects 31 to 40
- Comparison with other diffeomorphic registration strategies
- Target overlap of segmented regions / Determinant of Jacobians
Compared methods:
- Affine alignment
- ANTS-SyN, parameters of [Klein et al NI 2009]
- LDDMM, sum of Gaussian kernels [Risser et al TMI 2011]
- LDDMM, Gaussian kernels [Beg et al IJCV 2005]
- Proposed method with different M
SLIDE 30
To summarize:
- A new approach to learn spatially-varying metrics in the LDDMM framework
- A learning strategy based on gradient descent
- Encouraging results
Future work:
- Testing other priors on N
- Application of the new strategy to 3D brain images at high resolution
- Dimensionality reduction techniques on N
- Statistical analysis of the metric parameters
This work was supported by:
- Chaire Havas-Dauphine Economie des nouvelles données
- AO1 grant MALAC3D from Université Paul Sabatier
- ANR DEMOS grant (S. Gadat, J. Bigot, J.M. Loubes)
THANK YOU!
SLIDE 31
References:
[Beg et al, IJCV 2005]: Beg M.F., Miller M.I., Trouvé A., Younes L.: Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. IJCV 61(2), 2005.
[Risser et al, TMI 2011]: Risser L., Vialard F.X., Wolz R., Murgasova M., Holm D.D., Rueckert D., ADNI: Simultaneous Multiscale Registration using Large Deformation Diffeomorphic Metric Mapping. IEEE TMI, 2011.
[Bruveris et al, SIAM MMS 2012]: Bruveris M., Risser L., Vialard F.X.: Mixture of Kernels and Iterated Semidirect Product of Diffeomorphisms Groups. SIAM Multiscale Modeling and Simulation 10(4), 2012.
[Risser et al, MedIA 2012]: Risser L., Vialard F.X., Baluwala H.Y., Schnabel J.A.: Piecewise-Diffeomorphic Image Registration: Application to the Motion Estimation between 3D CT Lung Images with Sliding Conditions. Medical Image Analysis, 2012.
[Schmah et al, MICCAI 2013]: Schmah T., Risser L., Vialard F.X.: Left-invariant metrics for diffeomorphic image registration with spatially-varying regularisation. MICCAI'13 - LNCS, 2013.
[Vialard and Risser, MICCAI 2014]: Vialard F.X., Risser L.: Spatially-varying metric learning for diffeomorphic image registration: a variational framework. MICCAI'14 - LNCS, 2014.
[Simpson et al, MICCAI 2013]: Simpson I.J.A., Woolrich M.W., Cardoso M.J., Cash D.M., Modat M., Schnabel J.A., Ourselin S.: A Bayesian approach for spatially adaptive regularisation in non-rigid registration. MICCAI'13 - LNCS, 2013.
[Arsigny et al, SIAM MAA]: Arsigny V., Fillard P., Pennec X., Ayache N.: Geometric Means in a Novel Vector Space Structure on Symmetric Positive-Definite Matrices. SIAM Journal on Matrix Analysis and Applications 29(1), 2007.
[Klein et al NI 2009]: Klein A. et al.: Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage 46(3), 2009.
ANTS SyN: http://picsl.upenn.edu/software/ants/