

  1. Nonlinear Least Squares. Lectures for the PhD course on Numerical Optimization. Enrico Bertolazzi, DIMS – Università di Trento. November 21 – December 14, 2011.

  2. The Nonlinear Least Squares Problem: Outline.
     1. The Nonlinear Least Squares Problem
     2. The Levenberg–Marquardt step
     3. The Dog-Leg step

  3. The Nonlinear Least Squares Problem: Introduction. An important class of minimization problems, where $f : \mathbb{R}^n \to \mathbb{R}$, is nonlinear least squares, which takes the form

     $$ f(x) = \frac{1}{2} \sum_{i=1}^{m} F_i(x)^2, \qquad m \ge n. $$

     When $n = m$, finding the minimum coincides with finding the solution of the nonlinear system $F(x) = 0$, where

     $$ F(x) = \big( F_1(x), F_2(x), \ldots, F_n(x) \big)^T. $$

     Thus, special methods developed for the solution of nonlinear least squares can be used for the solution of nonlinear systems, but not conversely when $m > n$.

  4. The Nonlinear Least Squares Problem: Introduction. Example: consider the following fitting model

     $$ M(x, t) = x_3 \exp(x_1 t) + x_4 \exp(x_2 t), $$

     which can be used to fit some data. The model depends on the parameters $x = (x_1, x_2, x_3, x_4)^T$. Given a number of points $(t_k, y_k)^T$, $k = 1, 2, \ldots, m$, we want to find the parameters $x$ such that $\frac{1}{2} \sum_{k=1}^{m} (M(x, t_k) - y_k)^2$ is minimum. Defining

     $$ F_k(x) = M(x, t_k) - y_k, \qquad k = 1, 2, \ldots, m, $$

     this can be viewed as a nonlinear least squares problem.
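As a concrete illustration of how this example might be set up in code, here is a minimal Python/NumPy sketch; the data arrays t and y and the 0-based indexing of the parameters are my own choices, not from the slides:

```python
import numpy as np

def model(x, t):
    # M(x, t) = x3*exp(x1*t) + x4*exp(x2*t), with 0-based indexing:
    # x[0] = x1, x[1] = x2, x[2] = x3, x[3] = x4
    return x[2] * np.exp(x[0] * t) + x[3] * np.exp(x[1] * t)

def residuals(x, t, y):
    # F_k(x) = M(x, t_k) - y_k, k = 1, ..., m
    return model(x, t) - y

def objective(x, t, y):
    # f(x) = (1/2) * sum_k F_k(x)^2
    F = residuals(x, t, y)
    return 0.5 * F @ F
```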

  5. The Nonlinear Least Squares Problem: Introduction. To solve a nonlinear least squares problem we can use any of the previously discussed methods, for example BFGS or Newton's method with globalization techniques. If, for example, we use Newton's method, we need to compute

     $$ \nabla^2 f(x) = \nabla^2 \frac{1}{2} \sum_{i=1}^{m} F_i(x)^2 = \frac{1}{2} \sum_{i=1}^{m} \nabla \big( 2 F_i(x) \nabla F_i(x) \big)^T = \sum_{i=1}^{m} \nabla F_i(x)^T \nabla F_i(x) + \sum_{i=1}^{m} F_i(x) \nabla^2 F_i(x). $$
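The structure of this Hessian can be checked numerically. The sketch below is my own illustration, not from the slides: it compares the first-order term against a central-difference Hessian of f, and the gap it reports is exactly the second sum, the terms $F_i(x) \nabla^2 F_i(x)$:

```python
import numpy as np

def gn_hessian_gap(F, J, x, h=1e-5):
    # Central-difference Hessian of f(x) = 0.5*||F(x)||^2, compared with
    # the Gauss-Newton term J^T J; the difference is sum_i F_i * hess(F_i).
    n = x.size
    f = lambda z: 0.5 * F(z) @ F(z)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei = h * np.eye(n)[i]
            ej = h * np.eye(n)[j]
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * h * h)
    Jx = J(x)
    return np.linalg.norm(H - Jx.T @ Jx)
```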

  6. The Nonlinear Least Squares Problem: Introduction. If we define

     $$ J(x) = \begin{pmatrix} \nabla F_1(x) \\ \nabla F_2(x) \\ \vdots \\ \nabla F_m(x) \end{pmatrix}, $$

     then we can write

     $$ \nabla^2 f(x) = J(x)^T J(x) + \sum_{i=1}^{m} F_i(x) \nabla^2 F_i(x). $$

     However, in practical problems $J(x)$ is normally known, while $\nabla^2 F_i(x)$ is not known or is impractical to compute.
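Since $J(x)$ is the quantity one actually works with, it is worth noting how it can be approximated when even analytic first derivatives are unavailable. A minimal forward-difference sketch (the step size h is a conventional heuristic, not from the slides):

```python
import numpy as np

def jacobian_fd(F, x, h=1e-7):
    # Forward-difference approximation of J(x); row i approximates grad F_i(x).
    F0 = F(x)
    J = np.empty((F0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - F0) / h
    return J
```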

  7. The Nonlinear Least Squares Problem: Introduction. A common approximation is obtained by neglecting the terms $\nabla^2 F_i(x)$:

     $$ \nabla^2 f(x) \approx J(x)^T J(x). $$

     This choice can be appropriate near the solution when $n = m$, i.e. when solving a nonlinear system: near the solution we have $F_i(x) \approx 0$, so the contribution of the neglected term is small. The choice is not good when the residual near the minimum is large (i.e. $\|F(x)\|$ is large), because then the contribution of $\nabla^2 F_i(x)$ cannot be neglected.

  8. The Nonlinear Least Squares Problem: Introduction. From the previous considerations, applying Newton's method to $\nabla f(x)^T = 0$ we have

     $$ x_{k+1} = x_k - \nabla^2 f(x_k)^{-1} \nabla f(x_k)^T, $$

     and, when $f(x) = \frac{1}{2} \|F(x)\|^2$:

     $$ \nabla f(x)^T = J(x)^T F(x), \qquad \nabla^2 f(x) = J(x)^T J(x) + \sum_{i=1}^{m} F_i(x) \nabla^2 F_i(x) \approx J(x)^T J(x). $$

     Using the last approximation we obtain the Gauss-Newton algorithm.
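One practical remark, which is standard numerical-linear-algebra advice rather than something stated on the slide: forming $J^T J$ explicitly squares the condition number of $J$, so the Gauss-Newton step is usually computed by solving the linear least-squares problem $\min_d \|J d + F\|$ directly, e.g. via QR:

```python
import numpy as np

def gauss_newton_step(J, F):
    # Solves min_d ||J d + F||_2, i.e. the normal equations (J^T J) d = -J^T F,
    # without forming J^T J explicitly.
    d, *_ = np.linalg.lstsq(J, -F, rcond=None)
    return d
```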

  9. The Nonlinear Least Squares Problem: Introduction. Notice that the approximate Newton direction

     $$ d = -\big( J(x)^T J(x) \big)^{-1} J(x)^T F(x) \approx -\nabla^2 f(x)^{-1} \nabla f(x)^T $$

     is a descent direction; in fact

     $$ \nabla f(x)\, d = -\nabla f(x) \big( J(x)^T J(x) \big)^{-1} \nabla f(x)^T < 0 $$

     when $J(x)$ is full rank.

  10. The Nonlinear Least Squares Problem: Introduction.
      Algorithm (Gauss-Newton algorithm)
        x assigned; f ← F(x); J ← ∇F(x);
        while ‖Jᵀ f‖ > ε do
          d ← −(Jᵀ J)⁻¹ Jᵀ f;                     -- compute search direction
          approximate argmin_{α>0} f(x + α d) by line search;
          x ← x + α d;                            -- perform step
          f ← F(x); J ← ∇F(x);
        end while
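A minimal runnable sketch of the algorithm above, assuming the user supplies callables F and J; the backtracking (Armijo) line search and its constants are conventional placeholders for the unspecified "line search" step:

```python
import numpy as np

def gauss_newton(F, J, x, eps=1e-8, max_iter=100):
    # Gauss-Newton iteration on f(x) = 0.5*||F(x)||^2.
    f, Jx = F(x), J(x)
    for _ in range(max_iter):
        if np.linalg.norm(Jx.T @ f) <= eps:
            break
        # search direction d = -(J^T J)^{-1} J^T f, computed via least squares
        d, *_ = np.linalg.lstsq(Jx, -f, rcond=None)
        # backtracking line search on the Armijo condition
        phi0, slope, alpha = 0.5 * f @ f, (Jx.T @ f) @ d, 1.0
        while alpha > 1e-12 and \
                0.5 * np.sum(F(x + alpha * d) ** 2) > phi0 + 1e-4 * alpha * slope:
            alpha *= 0.5
        x = x + alpha * d
        f, Jx = F(x), J(x)
    return x
```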

  11. The Levenberg–Marquardt step: Outline.
      1. The Nonlinear Least Squares Problem
      2. The Levenberg–Marquardt step
      3. The Dog-Leg step

  12. The Levenberg–Marquardt step: The Levenberg–Marquardt Method. Levenberg (1944) and later Marquardt (1963) suggested the use of a damped Gauss-Newton method:

      $$ d = -\big( J(x)^T J(x) + \mu I \big)^{-1} \nabla f(x)^T, \qquad \nabla f(x)^T = J(x)^T F(x). $$

      1. For all $\mu \ge 0$, $d$ is a descent direction; in fact
         $$ \nabla f(x)\, d = -\nabla f(x) \big( J(x)^T J(x) + \mu I \big)^{-1} \nabla f(x)^T < 0. $$
      2. For large $\mu$ we have $d \approx -\frac{1}{\mu} \nabla f(x)^T$, the gradient direction.
      3. For small $\mu$ we have $d \approx -\big( J(x)^T J(x) \big)^{-1} \nabla f(x)^T$, the Gauss-Newton direction.
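The damped step admits the same numerically friendlier formulation as the Gauss-Newton step: it solves an augmented least-squares problem. This reformulation is standard, though not spelled out on the slide:

```python
import numpy as np

def lm_step(J, F, mu):
    # (J^T J + mu*I) d = -J^T F  is equivalent to
    # min_d || [J; sqrt(mu)*I] d + [F; 0] ||_2
    n = J.shape[1]
    A = np.vstack([J, np.sqrt(mu) * np.eye(n)])
    b = -np.concatenate([F, np.zeros(n)])
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d
```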

  13. The Levenberg–Marquardt step.
      1. The choice of the parameter $\mu$ affects both the size and the direction of the step.
      2. Levenberg–Marquardt thus becomes a method without line search.
      3. As for trust region methods, each step (approximately) solves the minimization of the model problem
         $$ \min_s\; m(x + s) = f(x) + \nabla f(x)\, s + \frac{1}{2} s^T H(x) s, \qquad H(x) = J(x)^T J(x) + \mu I, $$
         where $H(x)$ is symmetric and positive definite (SPD).
      4. Since $H(x)$ is SPD, the minimizer is $s = -H(x)^{-1} g(x)$, with $g(x) = \nabla f(x)^T$.

  14. The Levenberg–Marquardt step.
      Algorithm (Generic LM algorithm)
        x, μ assigned; η₁ = 0.25; η₂ = 0.75; γ₁ = 2; γ₂ = 1/3;
        f ← F(x); J ← ∇F(x);
        while ‖f‖ > ε do
          s ← argmin m(x + s) = ½‖f‖² + (Jᵀf)ᵀ s + ½ sᵀ(Jᵀ J + μ I) s;
          pred ← m(x + s) − m(x);
          ared ← ½‖F(x + s)‖² − ½‖f‖²;
          r ← ared / pred;
          if r < η₁ then
            μ ← γ₁ μ;                             -- reject step, enlarge μ
          else
            x ← x + s; f ← F(x); J ← ∇F(x);       -- accept step
            if r > η₂ then
              μ ← γ₂ μ;                           -- reduce μ
            end if
          end if
        end while
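A minimal Python sketch of the generic LM iteration above, with the same constants η₁, η₂, γ₁, γ₂; the initial μ, the tolerance, and the iteration cap are placeholder choices:

```python
import numpy as np

def levenberg_marquardt(F, J, x, mu=1e-2, eps=1e-8, max_iter=200):
    eta1, eta2, gamma1, gamma2 = 0.25, 0.75, 2.0, 1.0 / 3.0
    f, Jx = F(x), J(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) <= eps:
            break
        H = Jx.T @ Jx + mu * np.eye(x.size)
        g = Jx.T @ f
        s = np.linalg.solve(H, -g)            # minimizer of the model m(x+s)
        pred = g @ s + 0.5 * s @ H @ s        # m(x+s) - m(x), negative
        ared = 0.5 * np.sum(F(x + s) ** 2) - 0.5 * f @ f
        r = ared / pred
        if r < eta1:
            mu *= gamma1                      # reject step, enlarge mu
        else:
            x = x + s                         # accept step
            f, Jx = F(x), J(x)
            if r > eta2:
                mu *= gamma2                  # reduce mu
    return x
```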

  15. The Levenberg–Marquardt step. Let r be the ratio of the actual to the predicted reduction of a step. A faster strategy for the μ update is the following.
      Algorithm (μ update)
        if r > 0 then
          μ ← μ · max{1/3, 1 − (2r − 1)³}; ν ← 2;
        else
          μ ← μ ν; ν ← 2 ν;
        end if
      H.B. Nielsen, Damping Parameter in Marquardt's Method, IMM, DTU, Report IMM-REP-1999-05, 1999. http://www.imm.dtu.dk/~hbn/publ/TR9905.ps
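The update rule translates directly into code; a small sketch, where ν is the auxiliary growth factor from the algorithm (initialized to 2) and r is the gain ratio ared/pred of the last step:

```python
def nielsen_update(mu, nu, r):
    # H.B. Nielsen's damping update (IMM-REP-1999-05).
    if r > 0:                                  # step accepted
        mu *= max(1.0 / 3.0, 1.0 - (2.0 * r - 1.0) ** 3)
        nu = 2.0
    else:                                      # step rejected
        mu *= nu
        nu *= 2.0
    return mu, nu
```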

  16. The Dog-Leg step: Outline.
      1. The Nonlinear Least Squares Problem
      2. The Levenberg–Marquardt step
      3. The Dog-Leg step

  17. The Dog-Leg step. As for the trust region method, we have two search directions. One is the Gauss-Newton direction (when $\mu = 0$)

      $$ d_{GN} = -\big( J(x)^T J(x) \big)^{-1} \nabla f(x)^T, \qquad \nabla f(x)^T = J(x)^T F(x), $$

      and the other is the gradient (steepest descent) direction (when $\mu = \infty$)

      $$ d_{SD} = -\nabla f(x)^T = -J(x)^T F(x). $$

      To be finished!
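The slide stops before combining the two directions. For completeness, here is a hedged sketch of the classical Powell dog-leg combination within a trust region of radius Δ; this construction is the standard one and is my addition, not taken from the slides:

```python
import numpy as np

def dogleg_step(J, F, Delta):
    g = J.T @ F                                      # grad f(x)^T
    d_gn, *_ = np.linalg.lstsq(J, -F, rcond=None)    # Gauss-Newton direction
    if np.linalg.norm(d_gn) <= Delta:
        return d_gn                          # full GN step fits in the region
    # Cauchy point: model minimizer along the steepest-descent direction -g.
    d_c = -((g @ g) / np.sum((J @ g) ** 2)) * g
    if np.linalg.norm(d_c) >= Delta:
        return (Delta / np.linalg.norm(d_c)) * d_c   # truncated steepest descent
    # Walk from the Cauchy point toward the GN point until ||s|| = Delta.
    p = d_gn - d_c
    a, b, c = p @ p, 2.0 * (d_c @ p), d_c @ d_c - Delta ** 2
    beta = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return d_c + beta * p
```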

  18. The Dog-Leg step: References.
      J. Stoer and R. Bulirsch, Introduction to Numerical Analysis, Springer-Verlag, Texts in Applied Mathematics 12, 2002.
      J. E. Dennis, Jr. and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, Classics in Applied Mathematics 16, 1996.
