Extension of the adjoint method




  1. Extension of the adjoint method. Stanislas Larnier, Institut de Mathématiques de Toulouse, Université Paul Sabatier, France. PICOF'12, April 4, 2012.

  2. Outline: 1 Introduction, 2 Adjoint method, 3 Theoretical part, 4 Numerical results, 5 Conclusions and perspectives.

  3. Section 1: Introduction.

  4. Topology optimization formulates a design problem as an optimal material distribution problem. Searching for an optimal domain is equivalent to finding its characteristic function, so it is a 0-1 optimization problem. Several approaches make this problem differentiable: relaxation, homogenization, level sets, and topological derivatives.

  5. To present the basic idea, let Ω be a domain of R^d, d ∈ N \ {0}, and let j(Ω) = J(u_Ω) be a cost function to be minimized, where u_Ω is a solution to a given partial differential equation defined in Ω. Let x be a point in Ω and ω_1 a smooth open bounded subset of R^d containing the origin. For a small parameter ρ > 0, let Ω \ ω_ρ be the perturbed domain obtained by making a perforation ω_ρ = ρ ω_1 around the point x. The topological asymptotic expansion of j(Ω \ ω_ρ) when ρ tends to zero is the following: j(Ω \ ω_ρ) = j(Ω) + f(ρ) g(x) + o(f(ρ)), where f(ρ) denotes an explicit positive function going to zero with ρ and g(x) is called the topological gradient or topological derivative. It is usually simple to compute and is obtained using the solutions of direct and adjoint problems defined on the initial domain.
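To make the role of g(x) concrete, here is a toy sketch of how the expansion is used in practice: with the topological gradient evaluated at candidate points, a hole is most profitable where g is most negative. The grid, the values of g and the scaling f(ρ) = πρ² below are illustrative assumptions, not data from the talk.

```python
import numpy as np

# Toy use of the expansion j(Omega \ omega_rho) ~ j(Omega) + f(rho) * g(x):
# pick the candidate point where the topological gradient g is most negative.
xs = np.linspace(0.0, 1.0, 11)          # candidate hole centers (illustrative)
g = np.cos(4.0 * np.pi * xs) - 0.2      # pretend topological gradient values
j_omega = 3.7                           # pretend current cost j(Omega)
rho = 0.05
f_rho = np.pi * rho**2                  # illustrative scaling f(rho)

best = np.argmin(g)                     # a hole only helps where g < 0
if g[best] < 0.0:
    predicted = j_omega + f_rho * g[best]   # first-order prediction of the new cost
    print(f"create a hole near x = {xs[best]:.2f}, predicted cost {predicted:.4f}")
```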

  6. In topology optimization, topological derivative approaches have some drawbacks: the asymptotic topological expansion is not easy to obtain for complex problems; it needs to be adapted for many particular cases, such as the creation of a hole on the boundary of an existing one or on the original boundary of the domain; it is difficult to determine the variation of a cost function when a hole is to be filled; and in real applications of topology optimization, a finite perturbation is performed, not an infinitesimal one.

  7. Section 2: Adjoint method.

  8. Consider the following steady state equation: F(c, u) = 0 in Ω, where c is a distributed parameter in a domain Ω. The aim is to minimize a cost function j(c) := J(u_c), where u_c is the solution of the direct equation for a given c. Let us suppose that every term is differentiable. We consider a perturbation δc of the parameter c. The direct equation can be seen as a constraint, and as a consequence the Lagrangian is considered: L(c, u, p) = J(u) + (F(c, u), p), where p is a Lagrange multiplier and (·, ·) denotes the scalar product in a well-chosen Hilbert space.

  9. To compute the derivative of j, one can remark that j(c) = L(c, u_c, p) for all c, if u_c is the solution of the direct equation. The derivative of j is then equal to the derivative of L with respect to c: d_c j(c) δc = ∂_c L(c, u_c, p) δc + ∂_u L(c, u_c, p) ∂_c u δc. All these terms can be calculated easily, except ∂_c u δc, the solution of the linearized problem ∂_u F(c, u_c)(∂_c u δc) = −∂_c F(c, u_c) δc. To avoid solving this equation for each δc, the term ∂_u L(c, u_c, p) is cancelled by solving the adjoint equation: let p_c be the solution of ∂_u F(c, u_c)^T p_c = −∂_u J^T. The derivative of j is then explicitly given by d_c j(c) δc = ∂_c L(c, u_c, p_c) δc.
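As an illustration of this computation, here is a minimal finite-dimensional sketch, assuming a linear state equation F(c, u) = A(c) u − b = 0 and a quadratic cost J(u) = ½‖u − u_d‖². The matrices and vectors (A0, A_modes, b, u_d) are illustrative placeholders, not objects from the talk.

```python
import numpy as np

# Finite-dimensional adjoint method sketch:
#   state equation  F(c, u) = A(c) u - b = 0,  A(c) = A0 + sum_k c_k * A_k
#   cost            J(u) = 0.5 * ||u - u_d||^2
n = 5
rng = np.random.default_rng(0)
A0 = 4.0 * np.eye(n)
A_modes = [np.diag(rng.uniform(0.5, 1.5, n)) for _ in range(2)]  # dF/dc_k applied to u
b = rng.standard_normal(n)
u_d = rng.standard_normal(n)

def A(c):
    return A0 + sum(ck * Ak for ck, Ak in zip(c, A_modes))

def solve_direct(c):
    return np.linalg.solve(A(c), b)

def cost(c):
    u = solve_direct(c)
    return 0.5 * np.dot(u - u_d, u - u_d)

def gradient_adjoint(c):
    u = solve_direct(c)
    # adjoint equation: (d_u F)^T p = -d_u J, i.e. A(c)^T p = -(u - u_d)
    p = np.linalg.solve(A(c).T, -(u - u_d))
    # d_c j . delta_c = (d_c F delta_c, p) = sum_k delta_c_k * (A_k u, p)
    return np.array([np.dot(Ak @ u, p) for Ak in A_modes])

c = np.array([0.3, -0.2])
g = gradient_adjoint(c)

# quick finite-difference check of the first component
eps = 1e-6
fd = (cost(c + np.array([eps, 0.0])) - cost(c)) / eps
print(g[0], fd)  # the two values should agree to several digits
```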

  10. Note that if the Lagrangians L(c + δc, ·, ·) and L(c, ·, ·) are defined on the same space, we have j(c + δc) − j(c) = L(c + δc, u_{c+δc}, p_c) − L(c, u_c, p_c) = (L(c + δc, u_{c+δc}, p_c) − L(c + δc, u_c, p_c)) + (L(c + δc, u_c, p_c) − L(c, u_c, p_c)). In the case of a regular perturbation δc, the second term gives the main variation and the first term is of higher order. In the case of a singular perturbation, the first term is of the same order as the second one and cannot be ignored; the variation of u_c then has to be estimated. The basic idea of the numerical vault is to update the solution u_c by solving a local problem defined in a small domain around x_0.

  11. Section 3: Theoretical part.

  12. The linear case is studied first. Consider the variational problem depending on a parameter ε: a_ε(u_ε, v) = ℓ_ε(v) ∀ v ∈ V_ε, where V_ε is a Hilbert space, a_ε is a bilinear, continuous and coercive form and ℓ_ε is a linear and continuous form. Typically, V_ε is such that H^1_0 ⊂ V_ε ⊂ H^1. The aim is to minimize a cost function which depends on ε: j(ε) := J_ε(u_ε). The cost function J_ε is of class C^1, and the adjoint problem associated with this problem is a_ε(w, p_ε) = −∂_u J_ε(u_ε) w ∀ w ∈ V_ε, where p_ε is the solution of this problem.

  13. Suppose that a_ε, ℓ_ε and J_ε are integrals over a domain Ω. The domain Ω is split into two parts: a part D containing the perturbation, and its complement Ω_0 = Ω \ D. The forms a_ε, ℓ_ε and the cost function J_ε are decomposed in the following way: a_ε = a_{Ω_0} + a_ε^D, ℓ_ε = ℓ_{Ω_0} + ℓ_ε^D, J_ε = J_{Ω_0} + J_ε^D, where a_{Ω_0}, ℓ_{Ω_0} and J_{Ω_0} are independent of ε. Define V_{Ω_0}, the space consisting of functions of V_ε and V_0 restricted to Ω_0; V_ε^D, the space consisting of functions of V_ε restricted to D; and V_0^D, the space consisting of functions of V_0 restricted to D.

  14. We assume that V_0 ⊂ V_ε. Let us consider u_ε^D, the local update of u_0: find u_ε^D ∈ V_ε^D solution of a_ε^D(u_ε^D, v) = ℓ_ε^D(v) ∀ v ∈ V_ε^D, with u_ε^D = u_0 on ∂D. The update of u_0, named ũ_ε, is given by ũ_ε = u_ε^D in D and ũ_ε = u_0 in Ω_0.
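The sketch below illustrates the mechanics of this local update on a toy 1D diffusion problem −(a u′)′ = 1 on (0, 1) with homogeneous Dirichlet conditions, discretized by finite differences: the coefficient is perturbed only inside D, the perturbed equation is re-solved in D with boundary values taken from the unperturbed solution u_0, and the result is patched into u_0. The problem, the subdomain D = (0.4, 0.6) and the perturbed coefficient value are illustrative assumptions, not taken from the talk.

```python
import numpy as np

n = 200                                       # interior nodes
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)                  # node positions
xf = np.linspace(0.5 * h, 1 - 0.5 * h, n + 1) # interface (face) positions

def solve(a_face, rhs, left=0.0, right=0.0):
    """Solve -(a u')' = rhs with Dirichlet data via a standard 3-point scheme."""
    m = len(rhs)
    A = np.zeros((m, m))
    b = rhs.astype(float).copy()
    for i in range(m):
        A[i, i] = (a_face[i] + a_face[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -a_face[i] / h**2
        else:
            b[i] += a_face[i] * left / h**2
        if i < m - 1:
            A[i, i + 1] = -a_face[i + 1] / h**2
        else:
            b[i] += a_face[i + 1] * right / h**2
    return np.linalg.solve(A, b)

f = np.ones(n)
a0 = np.ones(n + 1)                           # unperturbed coefficient
a_eps = a0.copy()
a_eps[(xf > 0.4) & (xf < 0.6)] = 10.0         # perturbation localized in D = (0.4, 0.6)

u0 = solve(a0, f)                             # unperturbed global solution
u_eps = solve(a_eps, f)                       # perturbed global solution (reference)

# local update: re-solve only inside D, with boundary values taken from u0
idx = np.where((x > 0.4) & (x < 0.6))[0]
left_bc, right_bc = u0[idx[0] - 1], u0[idx[-1] + 1]
u_tilde = u0.copy()                           # equals u0 in Omega_0
u_tilde[idx] = solve(a_eps[idx[0]:idx[-1] + 2], f[idx], left_bc, right_bc)

print(np.max(np.abs(u_eps - u0)))       # global effect of the perturbation
print(np.max(np.abs(u_eps - u_tilde)))  # patched update vs true perturbed solution
                                        # (small here: u0 nearly matches u_eps on dD)
```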

  15. Hypotheses: there exist three positive constants η, C and C_u independent of ε and a positive real-valued function f defined on R_+ such that lim_{ε→0} f(ε) = 0; |J_ε(v) − J_ε(u) − ∂_u J_ε(u)(v − u)| ≤ C ‖v − u‖²_{V_ε} for all v, u ∈ B(u_0, η); ‖u_ε − u_0‖_{V_{Ω_0}} ≤ C_u f(ε); and lim_{ε→0} ‖p_ε − p_0‖_{V_ε} = 0. Proposition: under these hypotheses, ‖u_ε − ũ_ε‖_{V_ε} = O(f(ε)). Theorem (update of the direct solution): under these hypotheses, j(ε) − j(0) = L_ε(ũ_ε, p_0) − L_0(u_0, p_0) + o(f(ε)).

  16. V_0 is not necessarily a subspace of V_ε. The definition of ũ_ε stays the same. Let us consider p_ε^D, the local update of p_0: find p_ε^D ∈ V_ε^D solution of a_ε^D(w, p_ε^D) = −∂_u J_ε^D(u_ε^D) w ∀ w ∈ V_ε^D, with p_ε^D = p_0 on ∂D. The update of p_0, named p̃_ε, is given by p̃_ε = p_ε^D in D and p̃_ε = p_0 in Ω_0.

  17. Hypotheses: there exist four positive constants η, C, C_u and C_p independent of ε and a positive real-valued function f defined on R_+ such that lim_{ε→0} f(ε) = 0; |J_ε(v) − J_ε(u) − ∂_u J_ε(u)(v − u)| ≤ C ‖v − u‖²_{V_ε} for all v, u ∈ B(u_0, η); ‖u_ε − u_0‖_{V_{Ω_0}} ≤ C_u f(ε); and ‖p_ε − p_0‖_{V_{Ω_0}} ≤ C_p f(ε). Proposition: under these hypotheses, ‖p_ε − p̃_ε‖_{V_ε} = O(f(ε)). Theorem (update of the direct and adjoint solutions): under these hypotheses, j(ε) − j(0) = L_ε(ũ_ε, p̃_ε) − L_0(u_0, p_0) + O(f(ε)²).

  18. Section 4: Numerical results.

  19. Let Ω be a rectangular bounded domain of R² and Γ its boundary, composed of two parts Γ_1 and Γ_2. The points of the rectangle are subjected to a vertical displacement u, solution of the following equation: −∇·σ(u) = 0 in Ω, u = 0 on Γ_1, σ(u) n = μ on Γ_2, where σ(u) = h H_0 φ(u) with φ(u) = ½(Du + Du^T); σ(u) is the stress distribution, H_0 is the Hooke tensor and h(x) represents the material stiffness. The optimization problem is to minimize the following cost function: j(h) = ∫_{Γ_2} g · u dx.
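As a small illustration of the constitutive law σ(u) = h H_0 φ(u), the sketch below evaluates the stress from a displacement gradient at a point, assuming an isotropic Hooke tensor. The talk does not specify H_0, so the Lamé parameters and the sample displacement gradient are illustrative placeholders.

```python
import numpy as np

# Constitutive law of slide 19: sigma(u) = h * H0 * phi(u), phi(u) = (Du + Du^T) / 2.
# H0 is assumed isotropic here, with illustrative Lame coefficients lam and mu.
lam, mu = 1.0, 0.5

def strain(Du):
    """Linearized strain phi(u) from the 2x2 displacement gradient Du."""
    return 0.5 * (Du + Du.T)

def stress(Du, h):
    """sigma(u) = h * H0 * phi(u) for an isotropic Hooke tensor H0."""
    eps = strain(Du)
    return h * (lam * np.trace(eps) * np.eye(2) + 2.0 * mu * eps)

Du = np.array([[0.01, 0.002],
               [0.004, -0.003]])   # sample displacement gradient at a point
print(stress(Du, h=1.0))           # stress at that point for unit stiffness
```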
