Inverse Problems and Regularized Solutions – Lecture 9 Part A – Yvon JARNY (PowerPoint PPT Presentation)


SLIDE 1

Inverse Problems and Regularized Solutions

Lecture 9 Part A - Yvon JARNY

Laboratoire de Thermocinétique de Nantes (LTN), UMR CNRS 6607 – Université de Nantes, Polytech'Nantes – 44306 Nantes cedex 3

Metti5 – Roscoff, June 13-18, 2011

SLIDE 2

Introduction (1)

The resolution of inverse (heat transfer) problems follows a specific methodology based on the development of an interactive computational-experimental process:

[Diagram: Experiments – Inverse data processing – Control problems – Experimental design – Mathematical modelling]


SLIDE 3

Introduction (2)

  • Inverse (heat transfer) problems are ill-posed:
  • Solutions (theoretical and numerical)
    – might not exist for all data,
    – might not be unique,
    – might be unstable under data perturbations.
  • Specific regularization methods are required.

SLIDE 4

Introduction (3)

  • This lecture presents and illustrates the need for regularization when solving inverse problems,
  • and how to apply it in practice.

It will be organized in three parts.

SLIDE 5

Content

  • Introduction
  • Inverse Problems – 3 basic examples of linear ill-posed problems
    – An initial state problem
    – An input heat source problem (semi-infinite body)
    – A 2-D stationary heat source problem
  • Mathematical analysis of the linear inverse problem
    – The Singular Value Decomposition (SVD) approach
    – Quasi-solutions: existence and uniqueness conditions
    – Stability condition and regularized solutions
  • Regularization processes and stability condition
    – Regularization by truncation
    – Regularization by parametrization
    – Regularization by penalization
    – Iterative regularization
  • Conclusions

SLIDE 6

First part: Ill-posed problems

  • Example 1: initial state problem

State equations:

$$\frac{\partial T}{\partial t}(x,t)=\frac{\partial^2 T}{\partial x^2}(x,t),\quad 0<x<1,\; 0<t\le t_f$$

$$T(0,t)=T(1,t)=0,\quad 0<t\le t_f$$

$$T(x,0)=U(x),\quad 0\le x\le 1$$

Output equation:

$$Y(x)=T(x,t_f),\quad 0\le x\le 1$$

Direct problem (linear mapping): $U\in L^2(0,1)\ \mapsto\ Y=AU$

Inverse problem: given $Y$, find $U$ such that $AU=Y$, i.e. $U=A^{-1}Y$.

SLIDE 7

Example 1: initial state problem – Numerical results of the direct problem

$$Y(x)=T(x,\,t_f=0.05), \qquad U(x)=x^2\,(1-x)$$

[Figure: initial state U(x) and computed output Y(x) = T(x, 0.05)]

SLIDE 8
  • Example 1: initial state problem

The solution of the direct problem is

$$T(x,t)=\sum_{n=1}^{\infty} c_n\, e^{-n^2\pi^2 t}\,\phi_n(x), \quad \text{with } \phi_n(x)=\sqrt{2}\,\sin(n\pi x)$$

= a set of orthogonal functions, so

$$Y(x)=\sum_{n=1}^{\infty} c_n\, e^{-n^2\pi^2 t_f}\,\phi_n(x) \quad \text{and} \quad U(x)=\sum_{n=1}^{\infty} c_n\,\phi_n(x)$$

Because $\langle \phi_n,\phi_m\rangle=\delta_{nm}$,

$$Y(x)=(AU)(x)=\sum_{n=1}^{\infty} \langle U,\phi_n\rangle\, e^{-n^2\pi^2 t_f}\,\phi_n(x)$$

The solution of the inverse problem is

$$c_n = e^{\,n^2\pi^2 t_f}\,\langle Y,\phi_n\rangle, \qquad U(x)=(A^{-1}Y)(x)=\sum_{n=1}^{\infty} \langle Y,\phi_n\rangle\, e^{\,n^2\pi^2 t_f}\,\phi_n(x)$$

SLIDE 9
  • Example 1: initial state problem

The operator $A$ is linear, so for any output error $\delta Y$ the resulting variation of the solution is

$$\delta U = \sum_{n=1}^{\infty} \langle \delta Y,\phi_n\rangle\, e^{\,n^2\pi^2 t_f}\,\phi_n$$

Suppose (for simplicity) $\delta Y(x)=\varepsilon\,\sqrt{2}\sin(N\pi x)$, so that $\|\delta Y\|=\varepsilon$. Then

$$\delta U(x)=\varepsilon\, e^{\,N^2\pi^2 t_f}\,\sqrt{2}\sin(N\pi x) \quad \text{and} \quad \|\delta U\| = e^{\,N^2\pi^2 t_f}\,\|\delta Y\|$$

Hence an arbitrarily small output error may induce an arbitrarily large variation of the solution (take $N$ large enough). The stability condition is violated: this inverse initial state heat conduction problem is ill-posed.
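
The size of this amplification factor is easy to check numerically. A minimal MATLAB sketch (not from the slides; it only evaluates the factor $e^{N^2\pi^2 t_f}$ with the value $t_f = 0.05$ of the numerical example):

% Amplification of a data error carried by mode N, Example 1 (tf = 0.05):
% ||dU|| = exp(N^2*pi^2*tf) * ||dY||
tf    = 0.05;
noise = 1e-6;                          % output error ||dY||
for N = 1:5
    amp = exp(N^2 * pi^2 * tf);        % amplification factor of mode N
    fprintf('N = %d : ||dU|| = %.3e * ||dY|| = %.3e\n', N, amp, noise*amp);
end
% Already for N = 5 the factor exp(25*pi^2*0.05) is about 2.3e5.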

SLIDE 10

Example 2: input heat source problem – Modeling equations

State equations:

$$\frac{\partial T}{\partial t}(x,t)=\frac{\partial^2 T}{\partial x^2}(x,t),\quad x>0,\; 0<t\le t_f$$

$$-\frac{\partial T}{\partial x}(0,t)=U(t),\quad 0<t\le t_f; \qquad T(x,0)=0,\quad x>0$$

Direct problem – linear mapping (time convolution): $U\in L^2(0,t_f)\ \mapsto\ Y=AU$

Output equation:

$$Y(t)=T(x_c,t),\quad 0<t\le t_f$$

SLIDE 11

Example 2: heat source problem (semi-infinite body)

$$Y(t)=(AU)(t)=\int_0^t f_s(t-\tau)\,U(\tau)\,d\tau, \quad 0<t\le t_f$$

The impulse response (at the sensor depth $x_s$, in dimensionless form):

$$f_s(t)=\frac{1}{k\sqrt{\pi t}}\,\exp\!\left(-\frac{x_s^2}{4t}\right)$$

[Figure: semi-infinite body – temperature response T and input heat flux q(t) versus dimensionless time t* = t/tau, IE model]

SLIDE 12

Example 3: heat source problem (2-D steady state)

State equations:

$$\Delta T = 0 \quad \text{in } \Omega$$

$$T = T_w \quad \text{on } \Gamma_1; \qquad -\frac{\partial T}{\partial n} = q(y) = q_0\,(\sin(\pi y)+1) \quad \text{on } \Gamma_2,\ 0\le y\le 1; \qquad \frac{\partial T}{\partial n} = 0 \quad \text{on } \Gamma_4$$

Output equation:

$$Y = T \quad \text{on } \Gamma_3$$

SLIDE 13

Example 3: heat source problem (2-D steady state)

After standard discretization of the spatial variable, the resulting state and output equations take the form

$$[A]\,T = [B_1]\,T_w + [B_4]\,q^*, \qquad Y = [C]\,T$$

with $q=(q(y_i))_{i=1,\dots,n}\in\mathbb{R}^n$ and $Y=(Y(x_j))_{j=1,\dots,m}\in\mathbb{R}^m$, so that

$$Y = [C][A]^{-1}\left([B_1]\,T_w + [B_4]\,q^*\right)$$

The numerical solution of the inverse problem will require the inversion of the sensitivity matrix $[X]$, defined by

$$Y = [C][A]^{-1}[B_4]\,q = [X]\,q$$

SLIDE 14

Second Part: Linear Inverse Problem Analysis

  • The three conditions of Hadamard (existence, uniqueness, stability) are investigated for the linear inverse problem in the finite-dimensional case.
  • The mathematical analysis will show that the concept of quasi-solution settles the question of existence, but the uniqueness and stability conditions remain unsatisfied.
  • Then some regularization is needed to build a unique and stable quasi-solution.

SLIDE 15

Singular Value Decomposition (SVD) approach

The linear inverse problem (finite-dimensional case) consists in finding the solution $U\in\mathbb{R}^n$ of the matrix equation

$$A\,U = Y$$

where $Y\in\mathbb{R}^m$ and the $(m\times n)$ matrix $A$ are given. When the matrix is rectangular, $m\ne n$, the analysis of Hadamard's conditions is required. It is based on the singular value decomposition (SVD) of the matrix $A$:

$$A = W\,S\,V^t$$

dim(W) = m × m; dim(V) = n × n; dim(S) = m × n; $V$ and $W$ are orthogonal;

$$r = \mathrm{rank}(A A^t) = \mathrm{rank}(A^t A) \le \inf(m,n)$$

SLIDE 16

Numerical example: m = 3, n = 2

$$S_{ij} = \begin{cases} \sigma_i & \text{if } i=j,\ i=1,\dots,r \\ 0 & \text{otherwise} \end{cases}$$

[W, S, V] = svd(A), which gives for the example matrix ($A = W\,S\,V^t$):

$$A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad W = \begin{pmatrix} 0.9129 & 0 & 0.4082 \\ 0.3651 & -0.4472 & -0.8165 \\ 0.1826 & 0.8944 & -0.4082 \end{pmatrix}, \quad S = \begin{pmatrix} 2.4495 & 0 \\ 0 & 1.0000 \\ 0 & 0 \end{pmatrix}, \quad V = \begin{pmatrix} 0.4472 & 0.8944 \\ 0.8944 & -0.4472 \end{pmatrix}$$

Find $U$: $Y = W S V^t U$, so $W^t Y = W^t W S V^t U = S V^t U$.

Let us introduce the new variables $Z = W^t Y \in \mathbb{R}^m$ and $X = V^t U \in \mathbb{R}^n$.

The linear inverse problem becomes: find $X$ such that $Z = S X$.

SLIDE 17

Consequently, there are m algebraic equations to determine the n components of the vector solution X:

$$\sigma_i\,X_i = Z_i, \quad i=1,\dots,r; \qquad 0 = Z_i, \quad i=r+1,\dots,m$$

(the components $X_j$, $j=r+1,\dots,n$, do not appear). The condition of existence is clearly

$$Z_i = 0, \quad i=r+1,\dots,m, \qquad \text{with } Z = W^t Y, \quad Z_i = \langle W_i^*, Y\rangle$$

where the columns $\{W_i^* \in \mathbb{R}^m,\ i=1,\dots,r\}$ span $\mathrm{Im}(A)$. Then the existence condition for the solution of the inverse problem is characterized by the orthogonality equations:

$$Y \in \mathrm{Im}(A) \iff \langle W_i^*, Y\rangle = 0, \quad i=r+1,\dots,m$$

The uniqueness condition is clearly $r = n$, which is possible only if $m \ge n$.

SLIDE 18

Quasi-solutions

To satisfy the existence condition, the least squares criterion is introduced:

$$J(U) = \|A U - Y\|^2_{\mathbb{R}^m}$$

and the quasi-solutions are determined by

$$\bar U = \arg\min_U J(U)$$

which gives the normal equations:

$$A^t A\,\bar U = A^t Y \iff S^t S\,X = S^t Z$$

Consequently, there are now n algebraic equations to determine the n components of the vector solution X:

$$\sigma_i^2\,X_i = \sigma_i\,Z_i, \quad i=1,\dots,r; \qquad 0\cdot X_i = 0, \quad i=r+1,\dots,n$$

The existence condition is always satisfied!

SLIDE 19

Numerical example: m = 3, n = 2, r = 2

With the same factors $A = W S V^t$ as above and exact data

$$Y = A\,U = \begin{pmatrix} 2 \\ 1 \\ 0 \end{pmatrix}, \qquad Z = W^t Y = S X = \begin{pmatrix} 2.1909 \\ -0.4472 \\ 0 \end{pmatrix}, \qquad X = \begin{pmatrix} 0.8944 \\ -0.4472 \end{pmatrix}, \qquad U = V X = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$$

With MATLAB: U = A\Y, or U = pinv(A)*Y (equivalently, solve the normal equations $S^t S X = S^t Z$ and set $U = V X$).
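
The same computation can be scripted end to end. A minimal MATLAB sketch (matrix and data as in the example; the three routes are equivalent here because r = n):

% Quasi-solution of the example of slides 16-19, three equivalent ways.
A = [1 2; 0 1; 1 0];
Y = [2; 1; 0];                         % exact data, Y = A*[0; 1]
[W, S, V] = svd(A);                    % A = W*S*V'
U1 = A \ Y;                            % least squares (backslash)
U2 = pinv(A) * Y;                      % Moore-Penrose pseudo-inverse
Z  = W' * Y;                           % rotated data, Z = S*X
r  = rank(A);
X  = Z(1:r) ./ diag(S(1:r, 1:r));      % sigma_i * X_i = Z_i, i = 1,...,r
U3 = V(:, 1:r) * X;                    % back to U = V*X
disp([U1 U2 U3])                       % all three columns equal [0; 1]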

SLIDE 20

But if r < n, there is an infinite set of quasi-solutions!

$$\bar U = \sum_{i=1}^{r} \frac{Z_i}{\sigma_i}\,V_i + \sum_{i=r+1}^{n} c_i\,V_i$$

where $\{c_i,\ i=r+1,\dots,n\}$ is a set of arbitrary constant values. One way to satisfy the uniqueness condition consists
  • in introducing some a priori estimate $X_{est}$,
  • in determining the $c_i$, $i=r+1,\dots,n$, in order to get the closest solution $X^*$, i.e.
  • in minimizing the distance $\|X^* - X_{est}\|$.

So they satisfy the orthogonality properties $\langle X^* - X_{est}, V_i\rangle = 0$, $i=r+1,\dots,n$, and the unique quasi-solution is therefore

$$X^* = \sum_{i=1}^{r} \frac{Z_i}{\sigma_i}\,V_i + \sum_{i=r+1}^{n} \langle X_{est}, V_i\rangle\,V_i$$

SLIDE 21

Stability condition and regularized solutions – square matrix A

  • The matrix A is square and non-singular, m = n.
  • The data are corrupted by an additive noise: $Y = Y_{ex} + \delta Y$, $\delta Y \in \mathbb{R}^m$.
  • The matrix $W^t$ is orthogonal, then $\|\delta Z\| = \|\delta Y\|$.
  • Suppose the error corrupts only the component k: $\delta Z = W^t\,\delta Y = \varepsilon\,e_k$.

The quasi-solution is

$$\bar U = \sum_{i=1}^{n} \frac{Z_i}{\sigma_i}\,V_i$$

and the error generated is

$$\delta U = \frac{\varepsilon}{\sigma_k}\,V_k$$

The relative error variation is then

$$\frac{\|\delta U\|_{\mathbb{R}^n}}{\|U\|_{\mathbb{R}^n}} \le \frac{\sigma_1}{\sigma_k}\,\frac{\|\delta Y\|_{\mathbb{R}^m}}{\|Y\|_{\mathbb{R}^m}}$$

SLIDE 22

Stability condition and regularized solutions – square matrix A

This result means that an error on the component $Z_n$ of the data creates a perturbation of the solution which is $\sigma_1/\sigma_n$ times greater than the one created by the same error on the component $Z_1$.

This ratio is called cond(A), the condition number of the matrix A. In practice, a large value of this ratio means
  • that the solution will be very sensitive to the possible data errors,
  • and that some regularization process has to be performed.

SLIDE 23

Stability condition and regularized solutions – rectangular matrix A

The sensitivity of the quasi-solutions to output data errors is characterized by the condition number of the matrix equation

$$[S^t S]\,X = S^t Z$$

When $m \ge n$ and $r = n$,

$$\mathrm{cond}(S^t S) = \frac{\sigma_1^2}{\sigma_n^2}$$

SLIDE 24

Third Part: Regularization processes and Stability condition

General ideas: there are several ways to regularize the inversion process, i.e. to make the quasi-solution less sensitive to the data errors and to satisfy the stability condition. All of them consist in adding some a priori information on the solution to be determined. Two main approaches are briefly presented and illustrated:

  • One approach consists in searching for a quasi-solution which satisfies some a priori constraints.
  • Another approach is based on the "penalization" of the L-S criterion.

SLIDE 25

Different possibilities are available for defining these constraints; the most usual are

  • the truncation of the basis $\{V_i^* \in \mathbb{R}^n,\ i=1,\dots,r\}$ given by the SVD of the matrix A:

$$\bar U = \sum_{i=1}^{n} \frac{Z_i}{\sigma_i}\,V_i, \qquad Z_i = \langle W_i^*, Y\rangle$$

  • the parametrization of the solution:

$$U = \sum_{i=1}^{p} U_i\,\varphi_i$$

where the set of basis vectors $\{\varphi_i,\ i=1,\dots,p\}$ is given a priori.

SLIDE 26

Regularization by truncation

The idea is to constrain the quasi-solution to belong to the sub-space spanned by $\{V_i \in \mathbb{R}^n,\ i=1,\dots,p\}$, $p \le n$.

The "regularizing parameter" is then the order p of the truncation, chosen so that the condition number becomes "acceptable". In practice, this truncation means that the components of the data Y corresponding to the vectors $\{W_i \in \mathbb{R}^m,\ i=p+1,\dots,r\}$ are "eliminated"! And the regularized solution is

$$\bar U_p = \sum_{i=1}^{p} \frac{Z_i}{\sigma_i}\,V_i, \qquad Z_i = \langle W_i, Y\rangle$$

Consequently, the inversion of the modified data instead of the original one will introduce a bias, i.e. a systematic error on the computed solution.

SLIDE 27

Regularization by truncation – Numerical example

$$A = \begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix}, \qquad \mathrm{cond}(A) = \frac{30.2887}{0.0102} = 2.9841\,\mathrm{e}{+3}$$

$$Y^t = (32,\ 23,\ 33,\ 31), \qquad \delta Y^t = (0.1,\ -0.1,\ 0.1,\ -0.1)$$

$$U = A^{-1} Y = (1,\ 1,\ 1,\ 1)^t, \qquad \delta U = A^{-1}\,\delta Y = (8.2,\ -13.6,\ 3.5,\ -2.1)^t$$

$$\frac{\|\delta U\|\,/\,\|U\|}{\|\delta Y\|\,/\,\|Y\|} = 2460.6$$

SLIDE 28

The regularized solutions built by truncation

Data (exact Y, noisy Y, and reconstructed Y_p for n − p = 1, 2, 3):

Y exact      : 32.0000  23.0000  33.0000  31.0000
Y noisy      : 32.1000  22.9000  33.1000  31.1000   ||Y - Y_exact||   = 0.2000
Y_p (n-p = 1): 32.0302  23.0156  33.0710  31.1172   ||Y_p - Y_exact|| = 0.1412
Y_p (n-p = 2): 32.0201  23.0187  33.0964  31.0982   ||Y_p - Y_exact|| = 0.1403
Y_p (n-p = 3): 31.7910  22.8711  33.1976  31.3313   ||Y_p - Y_exact|| = 0.4573

Solutions:

U exact      :  1.0000   1.0000   1.0000   1.0000
U (no trunc.):  8.0000 -10.6000   3.9000  -0.7000   ||U - U_exact||   = 13.9592
U_p (n-p = 1):  1.1209   0.7897   1.0397   0.9965   ||U_p - U_exact|| = 0.2459
U_p (n-p = 2):  1.1090   0.7934   1.0698   0.9740   ||U_p - U_exact|| = 0.2452
U_p (n-p = 3):  1.0496   0.7551   1.0960   1.0344   ||U_p - U_exact|| = 0.2699

Regularized solutions computed for n − p = 1, 2 are quite acceptable. (In practice this approach is not available: the exact solution is unknown.)
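
The table can be regenerated with a few MATLAB lines. A sketch (the noisy data vector is the one read off the table above; 4 − p counts the discarded singular values):

% Regularization by truncation (TSVD) on the example of slides 27-28.
A  = [10 7 8 7; 7 5 6 5; 8 6 10 9; 7 5 9 10];
Yd = [32.1; 22.9; 33.1; 31.1];         % noisy data of the table above
[W, S, V] = svd(A);
s = diag(S);
for p = 4:-1:1                         % keep the p largest singular values
    Up  = V(:, 1:p) * ((W(:, 1:p)' * Yd) ./ s(1:p));
    err = norm(Up - ones(4, 1));       % distance to the exact solution
    fprintf('n-p = %d : err = %.4f\n', 4 - p, err);
end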

SLIDE 29

Regularization by parametrization

The quasi-solution is constrained to belong to a subspace which is given a priori.

Application to the 2-D inverse heat source problem:

$$Y = [C][A]^{-1}\left([B_1]\,T_w + [B_4]\,q^*\right), \qquad q = (q(y_i)) \in \mathbb{R}^n$$

$$q(y) = \sum_{j=1}^{p} U_j\,\sigma_j(y)$$

where $\{\sigma_j,\ j=1,\dots,p\}$ is a set of p piecewise linear functions ("hat functions") on the boundary $[0,1]$, such that $\sigma_j(y_k) = \delta_{jk}$. Then

$$q = M\,U, \qquad M_{ij} = \sigma_j(y_i), \quad i=1,\dots,n;\ j=1,\dots,p$$

SLIDE 30

Now the ill-conditioning of the linear inverse problem is characterized by the condition number of the new sensitivity equation:

$$Y = [C][A]^{-1}[B_4]\,M\,U = [X]\,M\,U$$

The previous condition number $\mathrm{cond}(X^t X)$ becomes $\mathrm{cond}(M^t X^t X M)$.

The temperature field, solution of the direct problem, is computed on an m × m spatial grid: size(Y) = m, size([X]) = m × n, size(q) = n, and n = m − 1 (m nodes and n = m − 1 intervals).

SLIDE 31

Condition number before parametrization – influence of the grid size m × m:

m          : 5            8            11           16           21           26
cond(X'*X) : 9.2987e+004  5.0208e+009  2.3942e+014  1.1284e+019  4.7040e+020  8.1567e+020

Influence of p on the new condition number cond(M'X'XM) after parametrization:

m                                     : 16           21           26
p = 5, yp = [0; 0.25; 0.5; 0.75; 1.0] : 7.5950e+008  6.5797e+008  5.8852e+008
p = 4, yp = [0; 0.3; 0.6; 1.0]        : 1.6898e+006  1.6087e+006  1.5596e+006

After parametrization the condition number becomes almost independent of the mesh size: some desired accuracy of the numerical solution of the direct problem can be reached without increasing the instability of the inverse problem solution.

SLIDE 32

Regularization by penalization

The idea of the regularization process by "penalization" consists in considering a new L-S criterion which includes the a priori estimate $X_{est}$ and a positive parameter $\mu \in (0,1)$, the so-called regularization parameter:

$$J_\mu(X) = (1-\mu)\,\|S X - Z\|^2_{\mathbb{R}^m} + \mu\,\|X - X_{est}\|^2_{\mathbb{R}^n}$$

The "regularized" quasi-solution and its components are

$$X^*_\mu = \left[(1-\mu)\,S^t S + \mu I\right]^{-1}\left[(1-\mu)\,S^t Z + \mu X_{est}\right]$$

$$X^*_{\mu,i} = \frac{(1-\mu)\,\sigma_i Z_i + \mu X_{est,i}}{(1-\mu)\,\sigma_i^2 + \mu}, \quad i=1,\dots,n$$
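
These closed-form expressions translate directly into code. A minimal MATLAB sketch (the function name is mine; S, Z, Xest, mu as defined above):

% Penalized ("regularized") quasi-solution of slide 32.
function Xmu = penalized_solution(S, Z, Xest, mu)
    n   = size(S, 2);
    Xmu = ((1-mu)*(S'*S) + mu*eye(n)) \ ((1-mu)*(S'*Z) + mu*Xest);
end

For mu close to 0 this returns the plain quasi-solution; for mu close to 1 it returns the a priori estimate Xest, as the component formula above shows.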

SLIDE 33

Regularization by penalization – Numerical example

$$A = \begin{pmatrix} 10 & 7 & 8 & 7 \\ 7 & 5 & 6 & 5 \\ 8 & 6 & 10 & 9 \\ 7 & 5 & 9 & 10 \end{pmatrix}, \qquad \mathrm{cond}(A) = \frac{30.2887}{0.0102} = 2.9841\,\mathrm{e}{+3}$$

The new condition number

$$\mathrm{cond}\left[(1-\mu)\,S^t S + \mu I\right]$$

is a decreasing function of the regularizing parameter $\mu$.

[Figure: influence of the regularizing parameter mu (0.1 to 1) on the condition number, log scale]

SLIDE 34

Optimal value of the regularizing parameter – Numerical example

$$J_\mu(U) = (1-\mu)\,\|Y - AU\|^2 + \mu\,\|U - U_{est}\|^2$$

$$Y^t = (32,\ 23,\ 33,\ 31), \qquad \delta Y^t = (0.1,\ -0.1,\ 0.1,\ -0.1), \qquad \|\delta Y\| = 0.2$$

[Figure: norm of the error estimate ||dU|| versus the regularization parameter mu]

The best value lies in the interval $\hat\mu \in [0.50,\ 0.75]$.

SLIDE 35

Optimal value of the regularizing parameter – Numerical example: the L-curve method (Hansen, 1992)

Plot $\|A U - Y\|^2$ versus $\|U\|^2$ in log-log scale for a sweep of the regularizing parameter:

parameter  ||AU-Y||^2  ||U||^2
 0.0010     0.0159     5.5950   (under-regularized region)
 0.0016     0.0171     4.6548
 0.0025     0.0179     4.2460
 0.0040     0.0184     4.0737
 0.0063     0.0188     4.0025
 0.0100     0.0190     3.9735
 0.0158     0.0191     3.9617
 0.0251     0.0192     3.9569
 0.0398     0.0193     3.9548
 0.0631     0.0193     3.9537
 0.1000     0.0194     3.9529   (the corner region)
 0.1585     0.0195     3.9521
 0.2512     0.0197     3.9509
 0.3981     0.0203     3.9493
 0.6310     0.0216     3.9468
 1.0000     0.0246     3.9431
 1.5849     0.0320     3.9374   (over-regularized region)
 2.5119     0.0499     3.9286
 3.9811     0.0939     3.9150
 6.3096     0.2014     3.8941
10.0000     0.4634     3.8619

The L-curve corner, where the curvature of the log-log plot is maximized, is readily visible. It does not correspond to one specific value of the regularizing parameter but to some interval.
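
Such L-curve data can be generated by sweeping the parameter over a log-spaced grid, as in the first column above. A MATLAB sketch on the running 4 × 4 example (regularized solution written through the SVD; the print-out follows the column layout of the table):

% L-curve data: squared residual and solution norms versus the parameter.
A  = [10 7 8 7; 7 5 6 5; 8 6 10 9; 7 5 9 10];
Yd = [32.1; 22.9; 33.1; 30.9];          % noisy data, dY = (.1,-.1,.1,-.1)
[W, S, V] = svd(A);  s = diag(S);  Z = W' * Yd;
for rho = logspace(-3, 1, 21)           % 0.0010 ... 10.0000, 21 values
    U = V * (s .* Z ./ (s.^2 + rho));   % regularized solution
    fprintf('%8.4f  %8.4f  %8.4f\n', rho, norm(A*U - Yd)^2, norm(U)^2);
end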

SLIDE 36

Optimal value of the regularizing parameter – Numerical example: the discrepancy principle (Morozov, 1984; Alifanov, 1994)

$$X^*_\rho = \arg\min_X \left(\|Z - S X\|^2 + \rho\,\|X\|^2\right)$$

The components of the regularized solutions are

$$X^*_{\rho,i} = \frac{\sigma_i\,Z_i}{\sigma_i^2 + \rho}$$

Let us choose the parameter $\rho$ such that the LS-residual is equal to the noise level:

$$\|Z - S X^*_\rho\|^2 = \delta^2$$

Then $\rho$ is the solution of the algebraic equation

$$\sum_i \left(\frac{\rho\,Z_i}{\sigma_i^2 + \rho}\right)^2 = \delta^2$$

With MATLAB, we get: zero found in the interval [0.54745, 1.4525]; roopt = 1.4065; Uopt = [1.1096 0.8013 1.0849 0.9391].
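
The discrepancy equation is a scalar root-finding problem in the parameter. A MATLAB sketch (data of the running example; the bracketing interval is the one reported above):

% Discrepancy principle: choose rho so the LS-residual equals delta^2.
A  = [10 7 8 7; 7 5 6 5; 8 6 10 9; 7 5 9 10];
Yd = [32.1; 22.9; 33.1; 30.9];            % noisy data, dY = (.1,-.1,.1,-.1)
delta = 0.2;                              % noise level ||dY||
[W, S, V] = svd(A);  s = diag(S);  Z = W' * Yd;
res = @(rho) sum((rho * Z ./ (s.^2 + rho)).^2) - delta^2;
rho_opt = fzero(res, [0.54745, 1.4525]);  % interval reported on the slide
U_opt   = V * (s .* Z ./ (s.^2 + rho_opt));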

SLIDE 37

Iterative Regularization – the discrepancy principle and the conjugate gradient algorithm

For large-scale linear inverse problems, or nonlinear problems, direct computation of the SVD can become inefficient or impracticable. The method is iterative; at each iteration:

1. Compute the gradient:

$$J(U^n) = \|Y - A U^n\|^2, \qquad \nabla J(U^n) = -2\,A^t (Y - A U^n), \quad n = 0, 1, \dots$$

2. Determine a new approximation of the solution:

$$U^{n+1} = U^n + \gamma^n d^n, \qquad \gamma^n = \arg\min_\gamma J(U^n + \gamma\,d^n)$$

$$d^0 = -\nabla J(U^0); \qquad d^n = -\nabla J(U^n) + \beta^n d^{n-1}, \quad \beta^n = \frac{\|\nabla J(U^n)\|^2}{\|\nabla J(U^{n-1})\|^2}$$

3. Use the discrepancy principle as a stopping rule; the last iteration index nf is the regularizing parameter:

$$J(U^{nf}) = \|Y - A U^{nf}\|^2 \le \delta^2$$
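
A compact MATLAB transcription of these three steps (a sketch; Fletcher-Reeves form of the beta coefficient, as in the formula above):

% Conjugate gradient on J(U) = ||Y - A*U||^2 with discrepancy stopping.
function U = cg_discrepancy(A, Y, delta2, nmax)
    U = zeros(size(A, 2), 1);             % initial guess U^0 = 0
    g = -2 * A' * (Y - A*U);              % gradient of the LS-criterion
    d = -g;                               % first descent direction
    for n = 1:nmax
        if norm(Y - A*U)^2 <= delta2      % discrepancy stopping rule:
            break                         % the last index nf regularizes
        end
        Ad    = A * d;
        gamma = -(g' * d) / (2 * (Ad' * Ad));  % exact line search
        U     = U + gamma * d;
        gnew  = -2 * A' * (Y - A*U);
        beta  = (gnew' * gnew) / (g' * g);     % Fletcher-Reeves coefficient
        d     = -gnew + beta * d;
        g     = gnew;
    end
end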

SLIDE 38

Conjugate gradient algorithm – Numerical example, exact data

Iterative solution (conjugate gradient algorithm):

iteration: 0     1       2       3       4       5       6
U1       : 0   1.0486  1.1179  1.1224  1.0755  1.0000  1.0000
U2       : 0   0.7543  0.7987  0.7974  0.8749  1.0000  1.0000
U3       : 0   1.0933  1.0622  1.0509  1.0314  1.0000  1.0000
U4       : 0   1.0312  0.9613  0.9698  0.9814  1.0000  1.0000

[Figure: components of the solution and quadratic LS-criterion versus iteration number, data without noise]

SLIDE 39

Conjugate gradient algorithm – Numerical example: iterative regularization, noisy data, $\|\delta Y\|^2 = 4\cdot 10^{-2}$

LS-criterion versus iteration number:

iteration: 1            2            3            4            5            6
J        : 1.8026e+003  1.3651e-001  1.6343e-002  1.3436e-002  1.6907e-005  3.2102e-018

The discrepancy principle stops the iterations at nf = 3, the first index for which $J \le \|\delta Y\|^2$. Regularized solution at nf = 3:

$$U^{(3)} = (1.0997,\ 0.8117,\ 1.1319,\ 0.8977)^t$$

[Figure: quadratic LS-criterion and components of the solution versus iteration number, with an additive noise on the data]

SLIDE 40

Conjugate gradient algorithm – Numerical example: regularization by penalization, noisy data

The penalized criterion, with $\rho = 1.4065$ (the figure uses muopt = 1.4):

$$J_\rho(U) = \|Y - S U\|^2 + \rho\,\|U\|^2$$

Iterates of the CG algorithm:

iteration: 0     1       2       3       4       5       6
U1       : 0   0.9584  1.0018  1.0054  1.0382  1.0485  1.0496
U2       : 0   0.6894  0.7206  0.7231  0.7449  0.7515  0.7523
U3       : 0   0.9991  1.0429  1.0448  1.0348  1.0265  1.0268
U4       : 0   0.9422  0.9827  0.9836  0.9531  0.9361  0.9358

[Figure: quadratic regularized LS-criterion (penalization term ro = 1.4) and components of the solution versus iteration number]

SLIDE 41

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data

The model equation (time convolution), with the impulse response $f_s$ of slide 11:

$$Y(t) = (AU)(t) = \int_0^t f_s(t-\tau)\,U(\tau)\,d\tau, \qquad f_s(t) = \frac{K}{\sqrt{t}}\,\exp\!\left(-\frac{x_s^2}{4t}\right)$$

The LS-criterion:

$$J(U) = \frac{1}{2}\int_0^{t_f} \left(Y(t;U) - \hat Y(t)\right)^2 dt = \frac{1}{2}\,\|Y(U) - \hat Y\|^2_{L^2}$$

The gradient is computed with an adjoint variable:

$$\nabla J(t;U) = \int_t^{t_f} \psi(x)\,f_s(x-t)\,dx, \quad 0 \le t \le t_f, \qquad \psi(x) = Y(x;U) - \hat Y(x)$$

SLIDE 42

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data – Numerical setup

The solution is computed over the time interval $[0, t_f]$, $t_f = 5$, on a uniform grid $t_k = (k-1)\,\Delta t$, $\Delta t = 0.1$, $k = 1,\dots,nf$, $nf = 51$, with $U = (U_k)_{k=1,\dots,nf}$; the convolution is discretized as

$$Y_k = Y(t_k) = \Delta t \sum_{i=1}^{k} f_{k-i}\,U_i$$

A normally distributed noise is added to get the output noisy data $\hat Y$. Two numerical experiments are performed with different noise levels. The initial guess is $U_k^{(0)} = 0$, $k = 1,\dots,nf$.
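
With this discretization the convolution is a lower-triangular Toeplitz system, so the CG algorithm only needs matrix-vector products. A sketch of the assembly (impulse_response is a hypothetical helper returning the sampled kernel f_s of slide 41; t = 0 may need special handling depending on the kernel):

% Discrete convolution operator: Y = dt * tril(toeplitz(fs)) * U.
dt = 0.1;  tf = 5;  nf = 51;
t  = (0 : nf-1)' * dt;                 % time grid t_k
fs = impulse_response(t);              % hypothetical helper, f_s(t_k)
A  = dt * tril(toeplitz(fs));          % A(k,i) = dt * f_{k-i}, i <= k
% U = cg_discrepancy(A, Yd, delta2, 20);  % reuse the CG sketch above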

SLIDE 43

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data, $\|\delta Y\|^2 = 2.12$

Numerical results: too many iterations give unstable solutions!

[Figure: computed (unstable) solution at iteration nf = 20, and LS-criterion versus iteration number]

SLIDE 44

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data – numerical results, $\|\delta Y\|^2 = 2.12$

[Figure: inverse input problem (integral equation model) – exact input and computed solution after 5 iterations; output data and computed output after 5 iterations]

SLIDE 45

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data, $\|\delta Y\|^2 = 10.39$

Numerical results: too many iterations give unstable solutions!

[Figure: computed (unstable) solution at iteration nf = 20, and LS-criterion versus iteration number]

SLIDE 46

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data

Numerical results: the LS-criterion versus the iteration number, for the two noise levels $\|\delta Y\|^2 = 10.39$ and $\|\delta Y\|^2 = 2.12$.

[Figure: conjugate gradient algorithm, heat source problem – LS-criterion versus iteration number for both noise levels]

SLIDE 47

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data – numerical results, $\|\delta Y\|^2 = 10.39$

[Figure: inverse input problem (integral equation model) – exact input and computed solution after 4 iterations; output data and computed output after 4 iterations]

With this larger noise level, the discrepancy principle stops the iterations earlier (4 iterations instead of 5).

SLIDE 48

Conjugate gradient algorithm – The heat source problem: iterative regularization, noisy data

The numerical resolution of this linear inverse problem

  • shows how the computation of the gradient of the LS-criterion allows regularized solutions of an inverse input problem (formulated here with an integral equation) to be constructed,
  • shows how the discrepancy principle is an efficient way to avoid the amplification of the data errors, and that it is easy to implement with the CG algorithm,
  • confirms that the last iteration index is the regularizing parameter, and that its value depends on the noise level of the data to be inverted.

SLIDE 49

Conclusions

1. A key feature of inverse problems is their ill-posedness. The construction of quasi-solutions by minimizing an output least squares criterion is a general approach for solving them.
2. Basic algebra results lead to efficient algorithms for the computation of regularized solutions. They are based on the SVD analysis of the linear operator.
3. To make the quasi-solutions less sensitive to the data errors and to satisfy the stability condition, some additional a priori information on the solution to be determined has to be considered.
4. Two basic approaches were briefly presented and illustrated: a) quasi-solutions which satisfy some a priori constraints, or b) approaches based on the "penalization" of the L-S criterion.
5. The conjugate gradient algorithm, known to be among the most effective methods to compute regularized solutions according to the discrepancy principle, was illustrated.

SLIDE 50

References

[1] J.V. Beck, B. Blackwell, C.R. St. Clair, Inverse Heat Conduction: Ill-posed Problems, Wiley Interscience, New York, 1985.
[2] O.M. Alifanov, Inverse Heat Transfer Problems, Springer Verlag, Berlin, 1994.
[3] D.A. Murio, The Mollification Method and the Numerical Solution of Ill-posed Problems, John Wiley, New York, 1993.
[4] O.M. Alifanov, E.A. Artyukhin, S.V. Rumyantsev, Extreme Methods for Solving Ill-posed Problems with Applications to Inverse Heat Transfer Problems, Begell House Inc., New York, 1995.
[5] A.N. Tikhonov, V.Ya. Arsenin, Solutions of Ill-posed Problems, V.H. Winston and Sons, Washington, D.C., 1977.
[6] V.A. Morozov, Methods for Solving Incorrectly Posed Problems, Springer Verlag, New York, 1984.
[7] K.A. Woodbury (Editor), Inverse Engineering Handbook, CRC Press, Boca Raton, 2003.
[8] Y. Jarny, Inverse Heat Transfer Problems and Thermal Characterization of Materials, Proc. of 4th ICIPE Conf., 1, 23-48, ISBN 85-87922-43-2, 2002.
[9] P.J. Mc Carthy, Direct Analytic Model of the L-curve for Tikhonov Regularization Parameter Selection, Inverse Problems 19 (2003) 643-663.

SLIDE 51

[Diagram: Experiments – Inverse data processing – Control – Experimental design – Mathematical modelling]