

  1. Institut de Physique du Globe de Paris & Université Pierre et Marie Curie (Paris VI) Course on Inverse Problems Albert Tarantola Lesson XIX: Fitting Waveforms (Theory)

  2. From medium parameters to wavefield (acoustic example). The mapping, the tangent linear mapping, and its transpose. Until otherwise stated, initial conditions of rest are assumed.

$$ \frac{\exp(-z(\mathbf{x}))}{C^2}\,\frac{\partial^2 p}{\partial t^2}(\mathbf{x},t) - \Delta p(\mathbf{x},t) = S(\mathbf{x},t), \qquad z(\mathbf{x}) = \log\frac{c^2(\mathbf{x})}{C^2}. $$

In operator form, $L[z]\cdot p = S$, so $p = G[z]\cdot S$ with $G = L^{-1}$:

$$ p(\mathbf{x},t) = \int d\mathbf{x}' \int dt'\, G(\mathbf{x},t;\mathbf{x}',t';z)\, S(\mathbf{x}',t'), \qquad p = G[z]\cdot S = \varphi(z,S). $$
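The mapping $p = \varphi(z,S)$ can be illustrated numerically. Below is a minimal sketch (not from the slides) that solves a 1D analogue of the wave equation above with an explicit leapfrog scheme and initial conditions of rest; the function name `solve_wave` and the grid and time-step values are illustrative assumptions.

```python
import numpy as np

def solve_wave(z, S, C=1.0, dx=0.01, dt=0.004):
    """Explicit leapfrog scheme for the 1D analogue of
    exp(-z)/C^2 * p_tt - p_xx = S, with initial conditions of rest.
    z: log-velocity model, shape (nx,); S: source field, shape (nt, nx)."""
    nt, nx = S.shape
    c2 = C**2 * np.exp(z)              # c^2(x) = C^2 exp(z(x))
    p = np.zeros((nt, nx))             # p[0] = p[1] = 0: rest at t = 0
    for n in range(1, nt - 1):
        lap = np.zeros(nx)
        lap[1:-1] = (p[n, 2:] - 2.0 * p[n, 1:-1] + p[n, :-2]) / dx**2
        # update rearranged from the PDE: p_tt = c^2 * (p_xx + S)
        p[n + 1] = 2.0 * p[n] - p[n - 1] + dt**2 * c2 * (lap + S[n])
    return p
```

One call to `solve_wave` realizes $p = G[z]\cdot S$; its linearity in `S` and nonlinearity in `z` are visible directly in the update line.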

  3. Writing

$$ \underbrace{\varphi(z_0+\delta z,\,S)}_{p} = \underbrace{\varphi(z_0,S)}_{p_0} + \underbrace{\Phi_0\,\delta z}_{\delta p} + \dots \;, $$

we start from

$$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t) - \Delta p_0(\mathbf{x},t) = S(\mathbf{x},t) $$

and

$$ \frac{\exp(-(z_0(\mathbf{x})+\delta z(\mathbf{x})))}{C^2}\frac{\partial^2 (p_0+\delta p)}{\partial t^2}(\mathbf{x},t) - \Delta (p_0+\delta p)(\mathbf{x},t) = S(\mathbf{x},t). $$

Subtracting, and keeping only first-order terms,

$$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 \delta p}{\partial t^2}(\mathbf{x},t) - \Delta \delta p(\mathbf{x},t) = \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\delta z(\mathbf{x}) + \dots $$

  4. $$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 \delta p}{\partial t^2}(\mathbf{x},t) - \Delta \delta p(\mathbf{x},t) = \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\delta z(\mathbf{x}). $$

  5. $$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 \delta p}{\partial t^2}(\mathbf{x},t) - \Delta \delta p(\mathbf{x},t) = \Sigma(\mathbf{x},t), $$

with the “Born secondary source”

$$ \Sigma(\mathbf{x},t) = \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\delta z(\mathbf{x}). $$

Note that the wavefield $\delta p(\mathbf{x},t)$ propagates in the medium $z_0(\mathbf{x})$. The interpretation of this is: for a fixed source field $S = \{S(\mathbf{x},t)\}$, the tangent linear mapping to the mapping $p = \varphi(z,S)$ at $z_0 = \{z_0(\mathbf{x})\}$ is the linear mapping $\Phi_0$ that to any $\delta z = \{\delta z(\mathbf{x})\}$ associates the $\delta p = \{\delta p(\mathbf{x},t)\}$ that is the solution of the differential equation above (with initial conditions of rest).
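This tangent linear mapping can be checked numerically: compute $p_0$ in the background medium, form the Born secondary source $\Sigma$, propagate it in $z_0$ to obtain $\delta p$, and compare against a finite difference of the nonlinear mapping $\varphi$. The sketch below is a 1D toy with $C = 1$; the solver, grid, and perturbation are assumptions for illustration, not from the slides.

```python
import numpy as np

def solve(z, S, C=1.0, dx=0.01, dt=0.004):
    # leapfrog scheme for the 1D analogue of exp(-z)/C^2 p_tt - p_xx = S,
    # with initial conditions of rest
    nt, nx = S.shape
    c2 = C**2 * np.exp(z)
    p = np.zeros((nt, nx))
    for n in range(1, nt - 1):
        lap = np.zeros(nx)
        lap[1:-1] = (p[n, 2:] - 2.0 * p[n, 1:-1] + p[n, :-2]) / dx**2
        p[n + 1] = 2.0 * p[n] - p[n - 1] + dt**2 * c2 * (lap + S[n])
    return p

dt = 0.004
nt, nx = 200, 101
z0 = np.zeros(nx)                        # background medium
S = np.zeros((nt, nx)); S[2:6, 50] = 1.0 # source at the grid center
dz = np.zeros(nx); dz[60:65] = 1.0       # model perturbation

# unperturbed field p0 and its second time derivative
p0 = solve(z0, S)
p0_tt = np.zeros_like(p0)
p0_tt[1:-1] = (p0[2:] - 2.0 * p0[1:-1] + p0[:-2]) / dt**2

# Born secondary source (C = 1), propagated in the background z0
Sigma = np.exp(-z0) * p0_tt * dz
dp = solve(z0, Sigma)

# compare against a finite difference of the nonlinear mapping
eps = 1e-4
dp_fd = (solve(z0 + eps * dz, S) - p0) / eps
```

The two fields `dp` and `dp_fd` should agree to first order in `eps`, which is the content of the tangent-linear characterization above.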

  6. What is the Fréchet derivative of the mapping (i.e., the integral kernel of the tangent linear mapping just characterized)?

$$ \delta p(\mathbf{x},t) = \int d\mathbf{x}' \int dt'\, G(\mathbf{x},t;\mathbf{x}',t';z_0)\,\Sigma(\mathbf{x}',t') = \int d\mathbf{x}' \underbrace{\left[\int dt'\, G(\mathbf{x},t;\mathbf{x}',t';z_0)\,\frac{\exp(-z_0(\mathbf{x}'))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x}',t')\right]}_{\Phi_0(\mathbf{x},t,\mathbf{x}')}\,\delta z(\mathbf{x}'), $$

i.e.,

$$ \delta p(\mathbf{x},t) = \int d\mathbf{x}'\, \Phi_0(\mathbf{x},t,\mathbf{x}')\,\delta z(\mathbf{x}'), $$

so the Fréchet derivative is

$$ \Phi_0(\mathbf{x},t,\mathbf{x}') = \int dt'\, G(\mathbf{x},t;\mathbf{x}',t';z_0)\,\frac{\exp(-z_0(\mathbf{x}'))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x}',t'). $$

  7. We now know the meaning of the expression $\delta p = \Phi_0\,\delta z$. What would be the meaning of an expression like $\hat{\delta z} = \Phi_0^t\,\hat{\delta p}$, involving the transpose operator $\Phi_0^t$? We have two ways: (i) invoking the abstract definition of transpose, or (ii) using the property that the kernel of the transpose operator is the same as the kernel of the original operator (but the sum acts on the reciprocal variables). Let us choose the second way. The expression $\hat{\delta z} = \Phi_0^t\,\hat{\delta p}$ necessarily corresponds to

$$ \hat{\delta z}(\mathbf{x}') = \int d\mathbf{x} \int dt\, \Phi_0(\mathbf{x},t,\mathbf{x}')\,\hat{\delta p}(\mathbf{x},t), $$

where $\Phi_0(\mathbf{x},t,\mathbf{x}')$ is the kernel introduced above.
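In discrete form this kernel property is transparent: the kernel becomes a matrix, and the transpose sums over the reciprocal variables (the data indices $(\mathbf{x},t)$ instead of the model index $\mathbf{x}'$). A small sketch, with an arbitrary random matrix standing in for $\Phi_0$:

```python
import numpy as np

rng = np.random.default_rng(3)
K = rng.normal(size=(6, 4))   # discrete kernel: rows ~ (x,t), columns ~ x'
dz = rng.normal(size=4)
dp_hat = rng.normal(size=6)

dp = K @ dz           # dp(x,t)  = sum_{x'}  Phi0(x,t,x') dz(x')
dz_hat = K.T @ dp_hat # dz^(x')  = sum_{x,t} Phi0(x,t,x') dp^(x,t): same kernel
```

The duality $\langle \hat{\delta p}, \Phi_0\,\delta z\rangle = \langle \Phi_0^t\,\hat{\delta p}, \delta z\rangle$ holds by construction, since both sides reduce to the same double sum over the matrix entries.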

  8. Explicitly, this gives

$$ \hat{\delta z}(\mathbf{x}') = \int d\mathbf{x} \int dt \int dt'\, G(\mathbf{x},t;\mathbf{x}',t';z_0)\,\frac{\exp(-z_0(\mathbf{x}'))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x}',t')\,\hat{\delta p}(\mathbf{x},t) $$
$$ = \frac{\exp(-z_0(\mathbf{x}'))}{C^2} \int dt'\, \frac{\partial^2 p_0}{\partial t^2}(\mathbf{x}',t')\, \underbrace{\int d\mathbf{x} \int dt\, G(\mathbf{x},t;\mathbf{x}',t';z_0)\,\hat{\delta p}(\mathbf{x},t)}_{\pi(\mathbf{x}',t')}, $$

i.e. (relabeling variables),

$$ \hat{\delta z}(\mathbf{x}) = \frac{\exp(-z_0(\mathbf{x}))}{C^2} \int dt\, \frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\pi(\mathbf{x},t), $$

where

$$ \pi(\mathbf{x},t) = \int d\mathbf{x}' \int dt'\, G(\mathbf{x},t';\mathbf{x}',t;z_0)\,\hat{\delta p}(\mathbf{x}',t') $$

(note that the times in the Green's function are reversed).

  9. Note: I have used here the reciprocity property (which I have not demonstrated)

$$ G(\mathbf{x}',\tau;\mathbf{x},0;z_0) = G(\mathbf{x},\tau;\mathbf{x}',0;z_0) $$

(the signal at point $\mathbf{x}$ caused by a source at point $\mathbf{x}'$ is identical to the signal at point $\mathbf{x}'$ caused by a source at point $\mathbf{x}$). One also has

$$ G(\mathbf{x},t;\mathbf{x}',t';z_0) = G(\mathbf{x},t-t';\mathbf{x}',0;z_0) = G(\mathbf{x},0;\mathbf{x}',t'-t;z_0). $$

  10. CONCLUSION OF THE LAST LESSON (IN THREE SLIDES): Let $p = \varphi(z,S)$ be the mapping that to any medium $z = \{z(\mathbf{x})\}$ and any source $S = \{S(\mathbf{x},t)\}$ associates the wavefield $p = \{p(\mathbf{x},t)\}$ defined by the resolution of the acoustic wave equation with Initial Conditions of Rest:

$$ \frac{\exp(-z(\mathbf{x}))}{C^2}\frac{\partial^2 p}{\partial t^2}(\mathbf{x},t) - \Delta p(\mathbf{x},t) = S(\mathbf{x},t) \quad \text{(ICR)}. $$

  11. We saw above the following result: for a fixed source field $S = \{S(\mathbf{x},t)\}$, the tangent linear mapping to the mapping $p = \varphi(z,S)$ at $z_0 = \{z_0(\mathbf{x})\}$ is the linear mapping $\Phi_0$ that to any $\delta z = \{\delta z(\mathbf{x})\}$ associates the $\delta p = \{\delta p(\mathbf{x},t)\}$ that is the solution of the differential equation (with Initial Conditions of Rest)

$$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 \delta p}{\partial t^2}(\mathbf{x},t) - \Delta \delta p(\mathbf{x},t) = \Sigma(\mathbf{x},t) \quad \text{(ICR)}, $$

where $\Sigma(\mathbf{x},t)$ is the “Born secondary source”

$$ \Sigma(\mathbf{x},t) = \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\delta z(\mathbf{x}). $$

  12. We now have the following result: the transpose mapping $\Phi_0^t$ is the linear mapping that to any $\hat{\delta p} = \{\hat{\delta p}(\mathbf{x},t)\}$ associates the $\hat{\delta z} = \{\hat{\delta z}(\mathbf{x})\}$ defined by the two equations written four slides ago. That is, (i) compute the wavefield $\pi(\mathbf{x},t)$ that is the solution of the wave equation with Final (instead of Initial) Conditions of Rest, and whose source function (on the right-hand side of the wave equation) is $\hat{\delta p}(\mathbf{x},t)$,

$$ \frac{\exp(-z_0(\mathbf{x}))}{C^2}\frac{\partial^2 \pi}{\partial t^2}(\mathbf{x},t) - \Delta \pi(\mathbf{x},t) = \hat{\delta p}(\mathbf{x},t) \quad \text{(FCR)}, $$

and (ii) evaluate

$$ \hat{\delta z}(\mathbf{x}) = \frac{\exp(-z_0(\mathbf{x}))}{C^2} \int dt\, \frac{\partial^2 p_0}{\partial t^2}(\mathbf{x},t)\,\pi(\mathbf{x},t). $$
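Step (i) can be implemented by time reversal: since the wave equation is invariant under $t \to -t$, a solve with Final Conditions of Rest is the time reverse of an ordinary ICR solve applied to the time-reversed source. Below is a 1D sketch of the two-step recipe ($C = 1$; the solver, grid, and residual field are illustrative assumptions):

```python
import numpy as np

def solve_icr(z, S, C=1.0, dx=0.01, dt=0.004):
    # leapfrog scheme, initial conditions of rest
    nt, nx = S.shape
    c2 = C**2 * np.exp(z)
    p = np.zeros((nt, nx))
    for n in range(1, nt - 1):
        lap = np.zeros(nx)
        lap[1:-1] = (p[n, 2:] - 2.0 * p[n, 1:-1] + p[n, :-2]) / dx**2
        p[n + 1] = 2.0 * p[n] - p[n - 1] + dt**2 * c2 * (lap + S[n])
    return p

def solve_fcr(z, S, **kw):
    # final conditions of rest: reverse the source in time, solve with
    # ICR, then reverse the result (t -> -t invariance of the equation)
    return solve_icr(z, S[::-1], **kw)[::-1]

dt = 0.004
nt, nx = 200, 101
z0 = np.zeros(nx)
S = np.zeros((nt, nx)); S[2:6, 50] = 1.0
p0 = solve_icr(z0, S)

# step (i): back-propagate a residual field dp_hat with FCR
rng = np.random.default_rng(0)
dp_hat = np.zeros((nt, nx)); dp_hat[:, 80] = rng.normal(size=nt)
pi = solve_fcr(z0, dp_hat)

# step (ii): correlate pi with p0_tt over time (C = 1)
p0_tt = np.zeros_like(p0)
p0_tt[1:-1] = (p0[2:] - 2.0 * p0[1:-1] + p0[:-2]) / dt**2
dz_hat = np.exp(-z0) * np.sum(p0_tt * pi, axis=0) * dt
```

The back-propagated field `pi` vanishes at the final times, as the FCR prescription requires, and `dz_hat` lives in model space only.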

  13. The mapping $p = \varphi(z,S)$ depends on the (logarithmic) velocity model $z = \{z(\mathbf{x})\}$ and on the source model $S = \{S(\mathbf{x},t)\}$. We have examined the dependence of the mapping on $z$, but not yet on $S$. But this dependence is linear, so we don't need to evaluate the tangent linear operator (it is the wave-equation operator $L$ itself, associated with initial conditions of rest). As for the transpose operator $L^t$, it equals $L$, except that it is associated with final conditions of rest.

  14. And let's now move towards the setting of a least-squares optimization problem involving waveforms. Recall: we had some observable parameters $o$ and some model parameters $m$; the forward modeling relation was $m \mapsto o = \psi(m)$; we introduced the tangent linear operator $\Psi$ via $\psi(m+\delta m) = \psi(m) + \Psi\,\delta m + \dots$; and we arrived at the iterative (steepest-descent) algorithm

$$ m_{k+1} = m_k - \mu \left( C_{\text{prior}}\,\Psi_k^t\,C_{\text{obs}}^{-1}\,\big(\psi(m_k) - o_{\text{obs}}\big) + (m_k - m_{\text{prior}}) \right), $$

where the four typical elements $m_{\text{prior}}$, $C_{\text{prior}}$, $o_{\text{obs}}$ and $C_{\text{obs}}$ of a least-squares problem appear.
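A toy instance of this iteration can make the roles of the four elements concrete. The sketch below assumes a linear forward relation $\psi(m) = Gm$ (so $\Psi_k = G$ at every iterate), unit covariances, and a small fixed step length $\mu$; all of these choices are illustrative, not from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(8, 5))        # stands in for the tangent operator Psi
m_true = rng.normal(size=5)
o_obs = G @ m_true                 # noise-free synthetic observations
C_obs_inv = np.eye(8)              # unit data covariance (inverse)
C_prior = np.eye(5)                # unit prior covariance
m_prior = np.zeros(5)

mu = 0.02                          # fixed step length, assumed small enough
m = m_prior.copy()
for k in range(2000):
    residual = G @ m - o_obs       # psi(m_k) - o_obs
    m = m - mu * (C_prior @ G.T @ C_obs_inv @ residual + (m - m_prior))

# at convergence the gradient of the misfit function vanishes
grad = C_prior @ G.T @ C_obs_inv @ (G @ m - o_obs) + (m - m_prior)
```

The fixed point satisfies the regularized normal equations $G^t(Gm - o_{\text{obs}}) + (m - m_{\text{prior}}) = 0$, the discrete analogue of setting the gradient of the least-squares misfit to zero.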

  15. In a problem involving waveforms, it is better to consider the operator $m \mapsto o = \psi(m)$ as composed of two parts: an operator $m \mapsto p = \varphi(m)$ that to any model $m$ associates the wavefield $p$, and an operator $p \mapsto o = \gamma(p)$ that to the wavefield $p$ associates the observables $o$. The mapping $p = \varphi(m)$ has been extensively studied in the last lesson (and its tangent linear mapping and its transpose characterized). The mapping $o = \gamma(p)$ can, for instance, correspond to the case where the observable parameters are the values of the wavefield at some points (a set of seismograms), or the observable parameters can be some more complicated function of the wavefield. We have $o = \gamma(p) = \gamma(\varphi(m)) = \psi(m)$.

  16. We have already introduced the series expansion $\psi(m+\delta m) = \psi(m) + \Psi\,\delta m + \dots$. Introducing $\Phi$, the linear operator tangent to $\varphi$, and $\Gamma$, the linear operator tangent to $\gamma$, we can now also write

$$ \psi(m+\delta m) = \gamma(\varphi(m+\delta m)) = \gamma(\varphi(m) + \Phi\,\delta m + \dots) = \gamma(\varphi(m)) + \Gamma\,\Phi\,\delta m + \dots = \psi(m) + \Gamma\,\Phi\,\delta m + \dots, $$

so we have $\Psi = \Gamma\,\Phi$ and, therefore, $\Psi^t = \Phi^t\,\Gamma^t$.
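The identity $\Psi^t = \Phi^t\,\Gamma^t$ can be checked on a discrete stand-in, with arbitrary matrices playing the roles of $\Gamma$ and $\Phi$:

```python
import numpy as np

rng = np.random.default_rng(1)
Phi = rng.normal(size=(7, 4))      # tangent of phi: model -> wavefield
Gamma = rng.normal(size=(3, 7))    # tangent of gamma: wavefield -> observables
Psi = Gamma @ Phi                  # tangent of the composition psi

# duality check: <do_hat, Psi dm> = <Phi^t Gamma^t do_hat, dm>
dm = rng.normal(size=4)
do_hat = rng.normal(size=3)
lhs = do_hat @ (Psi @ dm)
rhs = (Phi.T @ (Gamma.T @ do_hat)) @ dm
```

The reversal of order under transposition is the matrix identity $(\Gamma\Phi)^t = \Phi^t\Gamma^t$.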

  17. The least-squares (steepest-descent) optimization algorithm then becomes

$$ m_{k+1} = m_k - \mu \left( C_{\text{prior}}\,\Phi_k^t\,\Gamma_k^t\,C_{\text{obs}}^{-1}\,\big(\gamma(\varphi(m_k)) - o_{\text{obs}}\big) + (m_k - m_{\text{prior}}) \right). $$

We already know what $\Phi^t$ is (last lesson). Let us examine a couple of examples of what $\Gamma^t$ may be.

  18. (i) The observable is one seismogram, i.e., the time-dependent value of the wavefield at one space location. The mapping $p \mapsto o = \gamma(p)$ then is, in fact, a linear mapping $p \mapsto o = \Gamma\,p$:

$$ \Gamma : \; p = \{p(\mathbf{x},t)\} \;\longmapsto\; o = \{p(\mathbf{x}_0,t)\}. $$

We have two duality products,

$$ \langle \hat{p}, p \rangle = \int dV(\mathbf{x}) \int dt\, \hat{p}(\mathbf{x},t)\,p(\mathbf{x},t), \qquad \langle \hat{o}, o \rangle = \int dt\, \hat{o}(\mathbf{x}_0,t)\,o(\mathbf{x}_0,t), $$

and the transpose $\Gamma^t$ must verify, for any $\hat{\delta o}$ and any $\delta p$, $\langle \hat{\delta o}, \Gamma\,\delta p \rangle = \langle \Gamma^t\,\hat{\delta o}, \delta p \rangle$, i.e.,

$$ \int dt\, \hat{\delta o}(\mathbf{x}_0,t)\,\delta p(\mathbf{x}_0,t) = \int dV(\mathbf{x}) \int dt\, \underbrace{(\Gamma^t\,\hat{\delta o})(\mathbf{x},t)}_{\delta(\mathbf{x}-\mathbf{x}_0)\,\hat{\delta o}(\mathbf{x}_0,t)}\,\delta p(\mathbf{x},t). $$
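Discretely, $\Gamma$ extracts one row of the wavefield, and $\Gamma^t$ injects the data back at $\mathbf{x}_0$, with a factor $1/\Delta V$ playing the role of the delta function $\delta(\mathbf{x}-\mathbf{x}_0)$ under the volume-weighted duality product. A sketch (the grid sizes, cell volumes, and receiver index are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
nx, nt = 20, 30
dV, dt = 0.1, 0.05
i0 = 7                                # grid index of the receiver location x0

def Gamma(p):
    # sample the wavefield at x0: o(t) = p(x0, t)
    return p[i0]

def Gamma_t(o):
    # transpose w.r.t. the two duality products: a spatial delta at x0;
    # the 1/dV factor is the discrete delta function
    p_hat = np.zeros((nx, nt))
    p_hat[i0] = o / dV
    return p_hat

dp = rng.normal(size=(nx, nt))
do_hat = rng.normal(size=nt)
lhs = dt * np.sum(do_hat * Gamma(dp))         # <do_hat, Gamma dp>
rhs = dV * dt * np.sum(Gamma_t(do_hat) * dp)  # <Gamma^t do_hat, dp>
```

The two duality products agree exactly, which is the discrete statement that the kernel of $\Gamma^t$ is $\delta(\mathbf{x}-\mathbf{x}_0)\,\hat{\delta o}(\mathbf{x}_0,t)$.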
