
Course on Inverse Problems, Albert Tarantola, Lesson XVIII: Example: Ray Tomography Without Blocks



  1. Institut de Physique du Globe de Paris & Université Pierre et Marie Curie (Paris VI). Course on Inverse Problems. Albert Tarantola. Lesson XVIII: Example: Ray Tomography Without Blocks.

  2. 7.3 Ray Tomography Without Blocks. 7.3.1 The Problem. Although in this numerical exercise we consider a 2D medium, nothing fundamental changes when passing to 3D; students are encouraged to attempt such an exercise. In a 2D medium, the travel time along a ray $R$ is computed via $t = \int_R ds \, n(x(s), y(s))$ (7.23), where $n(x, y)$ is the slowness of the medium (the inverse of the velocity) and $s$ is the coordinate along the ray that corresponds to arc length. The equations $x = x(s)$ and $y = y(s)$ are the parametric equations defining the ray $R$. In principle, the ray has to be found using, for instance, Fermat's principle (the actual ray path is, among all neighboring paths, the one that minimizes the integral in equation 7.23), but in this simplified exercise we just assume that the rays are straight. We are going to "measure" the travel times along some rays and use these "observations" to obtain information on $n(x, y)$.
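The following minimal sketch (not part of the lesson) evaluates the travel-time integral of equation 7.23 for a straight ray by a midpoint sum; the slowness function and the ray endpoints are made-up illustration values.

```python
import numpy as np

def travel_time(n_func, start, end, num_samples=2000):
    """Approximate t = integral along a straight ray of ds * n(x(s), y(s))  (eq. 7.23)."""
    start, end = np.asarray(start, float), np.asarray(end, float)
    length = np.linalg.norm(end - start)
    ds = length / num_samples
    s = (np.arange(num_samples) + 0.5) / num_samples        # midpoints of the sub-segments
    points = start + s[:, None] * (end - start)             # samples (x(s), y(s)) on the ray
    return np.sum(n_func(points[:, 0], points[:, 1])) * ds

# Example: a homogeneous medium with velocity 2 km/s, i.e. slowness 0.5 s/km.
n_homogeneous = lambda x, y: 0.5 * np.ones_like(x)
print(travel_time(n_homogeneous, start=(0.0, 0.0), end=(3.0, 4.0)))  # ~2.5 s (5 km * 0.5 s/km)
```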

  3. 7.3.2 Basic Theory. We have here two linear spaces. The first linear space, $M$, is the space of all conceivable slowness models; it is infinite-dimensional. Each element of this space is a function $n = \{ n(x, y) \}$, and the sum of two such functions, or the multiplication of a function by a real number, are defined in the obvious way (the reader must remember that we are using here an approximation, and that we should, instead, be using the logarithm of the slowness). The space $M^*$, dual of $M$, is made of some other functions $\nu = \{ \nu(x, y) \}$, and the duality product is expressed by $\langle \nu, n \rangle_M = \int dx \int dy \, \nu(x, y)\, n(x, y)$ (7.24). The second linear space, $O$, is the space of all conceivable travel-time observations; it is finite-dimensional. Each element of this space is a discrete vector $t = \{ t^i \}$, and the sum of two such vectors, or the multiplication of a vector by a real number, are defined in the obvious way. The space $O^*$, dual of $O$, is made of some other vectors $\tau = \{ \tau_i \}$, and the duality product is expressed by $\langle \tau, t \rangle_O = \sum_i \tau_i\, t^i$ (7.25). Equation 7.23 defines a linear mapping $L$ from $M$ into $O$: $t = L\, n \iff t^i = \int_{R_i} ds\, n(x(s), y(s))$ (7.26). We may greatly simplify the discussion if we write this relation as if the operator $L$ could be represented by an ordinary integral kernel, i.e., as if we had $t = L\, n \iff t^i = \int dx \int dy\, L^i(x, y)\, n(x, y)$ (7.27).
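As a small numerical illustration (an assumption-laden sketch, not from the lesson), the two duality products of equations 7.24 and 7.25 can be approximated once the model functions are sampled on a regular grid; the grid size and the example functions below are arbitrary choices.

```python
import numpy as np

# Model space M: functions of (x, y), sampled on a 100 x 100 grid over [0, 10] x [0, 10].
x = np.linspace(0.0, 10.0, 100)
dx = dy = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")

n = 0.5 + 0.05 * np.exp(-((X - 5.0)**2 + (Y - 5.0)**2) / 4.0)   # an element of M
nu = np.ones_like(n)                                            # an element of M*

# <nu, n>_M = integral of nu(x, y) n(x, y) dx dy, approximated by a Riemann sum (eq. 7.24).
duality_M = np.sum(nu * n) * dx * dy

# Data space O: finite-dimensional vectors; the duality product is an ordinary dot product (eq. 7.25).
t = np.array([5.1, 4.9, 7.2])       # an element of O (travel times)
tau = np.array([1.0, -0.5, 2.0])    # an element of O*
duality_O = np.dot(tau, t)

print(duality_M, duality_O)
```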

  4. This writing is possible as far as we accept to introduce the delta-like "functions" (in fact, distributions) that we could write $L^i(x, y) = \delta(x, y; R_i)$ (7.28). They are zero everywhere except along the ray $i$. More precisely, the "functions" $L^i(x, y)$ are defined by the condition that, for any function $n(x, y)$, $\int dx \int dy\, L^i(x, y)\, n(x, y) = \int_{R_i} ds\, n(x(s), y(s))$ (7.29). For the interpretation of the least-squares formulas, we need to know the meaning to be given to an expression like $\nu = L^t \tau$, where $L^t$ is the transpose of the linear operator $L$. By definition of the transpose operator, for any $n \in M$ and any $\tau \in O^*$, one must have $\langle n, L^t \tau \rangle_M = \langle \tau, L\, n \rangle_O$ (7.30). It is possible to directly use this definition of the transpose of an operator. But it is much simpler to use the fact that the kernel of the transpose of an integral operator is the same as the kernel of the original operator, the only difference being that the sum is performed over the complementary variables. That is, if the expression $L\, n$ corresponds to a sum over the variables $x$ and $y$ of the kernel $L^i(x, y)$, an expression like $L^t \tau$ corresponds to a sum over the variable $i$ of the same kernel. That is, while having in view equation 7.27, we shall have $\nu = L^t \tau \iff \nu(x, y) = \sum_i L^i(x, y)\, \tau_i$ (7.31), where one can think of the "functions" $L^i(x, y)$ as being the delta-like functions written in equation 7.28, or as corresponding to an implicit definition whose operational meaning is that of equation 7.29. Sometimes one interpretation is better than the other. Let us see this.
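A possible grid-based sketch of the adjoint action of equation 7.31 (not part of the lesson): each datum weight $\tau_i$ is "sprayed" back along its ray as a delta-like ridge, here approximated by cell averages, so that every ray segment of length $ds$ inside a cell contributes $\tau_i\, ds$ divided by the cell area. The grid, the rays, and the vector $\tau$ are illustrative assumptions.

```python
import numpy as np

nx, ny = 100, 100
x_min, x_max, y_min, y_max = 0.0, 10.0, 0.0, 10.0
dx, dy = (x_max - x_min) / nx, (y_max - y_min) / ny

rays = [((0.0, 2.0), (10.0, 2.0)),
        ((0.0, 0.0), (10.0, 10.0)),
        ((3.0, 0.0), (3.0, 10.0))]
tau = np.array([1.0, -0.5, 2.0])          # an element of O*

nu = np.zeros((nx, ny))                   # cell-averaged representation of nu = L^t tau
num_samples = 2000
for (a, b), tau_i in zip(rays, tau):
    a, b = np.asarray(a, float), np.asarray(b, float)
    ds = np.linalg.norm(b - a) / num_samples
    s = (np.arange(num_samples) + 0.5) / num_samples       # midpoints along the ray
    pts = a + s[:, None] * (b - a)
    ix = np.clip(((pts[:, 0] - x_min) / dx).astype(int), 0, nx - 1)
    iy = np.clip(((pts[:, 1] - y_min) / dy).astype(int), 0, ny - 1)
    np.add.at(nu, (ix, iy), tau_i * ds / (dx * dy))        # delta ridge as a cell average

# Sanity check: the integral of nu equals sum_i tau_i * (length of ray i).
print(nu.sum() * dx * dy)
```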

  5. The inputs to the problem are:
  • the linear operator $L$ defined in equation 7.26, which solves the forward modeling problem;
  • the prior slowness model $n_{\text{prior}} = \{ n_{\text{prior}}(x, y) \}$;
  • the prior covariance operator $C_{\text{prior}}$, whose kernel is some given covariance function $C_{\text{prior}}(x, y, x', y')$;
  • the observed travel times $t_{\text{obs}} = \{ t^i_{\text{obs}} \}$;
  • the covariance operator $C_{\text{obs}}$ describing the experimental uncertainties; its kernel is some given matrix $C^{ij}_{\text{obs}}$.
  A steepest descent algorithm (for this linear problem) would be $n_{k+1} = n_k - \mu \left( C_{\text{prior}} L^t\, C_{\text{obs}}^{-1} (L\, n_k - t_{\text{obs}}) + (n_k - n_{\text{prior}}) \right)$ (7.32), where, as indicated by the boxed terms, one successively forms $\Delta t = L\, n_k - t_{\text{obs}}$, $\widehat{\Delta t} = C_{\text{obs}}^{-1} \Delta t$, $T^t = C_{\text{prior}} L^t$, and $\Delta n = T^t\, \widehat{\Delta t}$ (a discretized sketch of this update follows below).
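Below is a fully discretized sketch of this update; the lesson keeps the operators continuous, so the grid of cells, the Gaussian prior covariance, the synthetic "observations" and the conservative choice of the step size $\mu$ from the spectral norm of the preconditioned Hessian are all assumptions made only to obtain a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- grid of cells (a crude discretization of the model space M) ---
nx, ny, size = 20, 20, 10.0
dx = size / nx
xc = (np.arange(nx) + 0.5) * dx                               # cell-center coordinates
cells = np.array([(xi, yj) for xi in xc for yj in xc])        # shape (nx*ny, 2)

# --- straight rays and the discretized operator L (ray length per cell) ---
rays = [((0.0, yi), (size, size - yi)) for yi in np.linspace(1.0, 9.0, 15)] + \
       [((xi, 0.0), (size - xi, size)) for xi in np.linspace(1.0, 9.0, 15)]
L = np.zeros((len(rays), nx * ny))
for i, (a, b) in enumerate(rays):
    a, b = np.asarray(a), np.asarray(b)
    ds = np.linalg.norm(b - a) / 2000
    s = (np.arange(2000) + 0.5) / 2000
    pts = a + s[:, None] * (b - a)
    ix = np.clip((pts[:, 0] / dx).astype(int), 0, nx - 1)
    iy = np.clip((pts[:, 1] / dx).astype(int), 0, ny - 1)
    np.add.at(L[i], ix * ny + iy, ds)                         # accumulate ray length per cell

# --- prior model, Gaussian prior covariance, data covariance ---
n_prior = np.full(nx * ny, 0.5)                               # homogeneous prior: 0.5 s/km
d2 = np.sum((cells[:, None, :] - cells[None, :, :])**2, axis=-1)
C_prior = 0.01**2 * np.exp(-d2 / (2.0 * 1.5**2))              # sigma = 0.01 s/km, corr. length 1.5 km
C_obs_inv = np.eye(len(rays)) / 0.05**2                       # independent 0.05 s observational errors

# --- synthetic "observed" travel times from a model with a slow central anomaly ---
n_true = n_prior + 0.03 * np.exp(-np.sum((cells - [5.0, 5.0])**2, axis=1) / 2.0)
t_obs = L @ n_true + 0.05 * rng.standard_normal(len(rays))

# --- steepest descent (equation 7.32); mu chosen conservatively from the spectral norm ---
H = np.eye(nx * ny) + C_prior @ (L.T @ (C_obs_inv @ L))       # preconditioned Hessian
mu = 1.0 / np.linalg.norm(H, 2)
n_k = n_prior.copy()
for _ in range(500):
    delta_t = L @ n_k - t_obs                                 # Delta t = L n_k - t_obs
    n_k = n_k - mu * (C_prior @ (L.T @ (C_obs_inv @ delta_t)) + (n_k - n_prior))

def misfit(n):
    r = L @ n - t_obs
    return float(r @ (C_obs_inv @ r))

print(misfit(n_prior), misfit(n_k))    # the weighted data misfit decreases from prior to converged model
```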

  6. I.e., $n_{k+1}(x, y) = n_k(x, y) - \mu \left( \Delta n(x, y) + n_k(x, y) - n_{\text{prior}}(x, y) \right)$ (7.33). Let us see how $\Delta n$ is to be computed. First, the expression $\Delta t = L\, n_k - t_{\text{obs}}$ is written, explicitly, $\Delta t^i = \int_{R_i} ds\, n_k(x(s), y(s)) - t^i_{\text{obs}}$ (7.34). Second, the expression $\widehat{\Delta t} = C_{\text{obs}}^{-1} \Delta t$ (7.35) is an ordinary matrix equation that does not require further explanation. Third, the integral kernel of the operator defined as $T = L\, C_{\text{prior}}$ can easily be expressed using the integral kernels of $L$ and $C_{\text{prior}}$, $T^i(x, y) = \int dx' \int dy'\, L^i(x', y')\, C_{\text{prior}}(x', y', x, y)$ (7.36). Using the defining property of the kernel $L^i(x, y)$ (equation 7.29) then gives $T^i(x, y) = \int_{R_i} ds\, C_{\text{prior}}(x(s), y(s), x, y)$ (7.37). The function $T^i(x, y)$ is a sort of "tube" along ray $i$, with a "width" equal to the width of the covariance function. Finally, the explicit version of the expression $\Delta n = T^t\, \widehat{\Delta t}$ is $\Delta n(x, y) = \sum_i T^i(x, y)\, \widehat{\Delta t}^i$ (7.38).
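A sketch of the "tube" functions of equation 7.37 for an assumed Gaussian prior covariance, and of the resulting update of equation 7.38; the covariance parameters, the rays and the weighted residuals are illustrative values, not quantities from the lesson.

```python
import numpy as np

def c_prior(x1, y1, x2, y2, sigma=0.01, corr_len=1.5):
    """An assumed Gaussian covariance function C_prior(x1, y1, x2, y2)."""
    return sigma**2 * np.exp(-((x1 - x2)**2 + (y1 - y2)**2) / (2.0 * corr_len**2))

def tube(ray, X, Y, num_samples=500):
    """T^i(X, Y) = integral over R_i of ds C_prior(x(s), y(s), X, Y)  (eq. 7.37)."""
    a, b = np.asarray(ray[0], float), np.asarray(ray[1], float)
    ds = np.linalg.norm(b - a) / num_samples
    s = (np.arange(num_samples) + 0.5) / num_samples
    pts = a + s[:, None] * (b - a)                       # samples (x(s), y(s)) on the ray
    T = np.zeros_like(X)
    for px, py in pts:
        T += c_prior(px, py, X, Y) * ds
    return T

x = np.linspace(0.0, 10.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")

rays = [((0.0, 3.0), (10.0, 3.0)), ((0.0, 0.0), (10.0, 10.0))]
weighted_residuals = np.array([2.0, -1.0])               # the vector C_obs^{-1} Delta t, assumed given

# Delta n is literally a "sum of tubes", one per ray, weighted by the residuals (eq. 7.38).
delta_n = sum(w * tube(r, X, Y) for r, w in zip(rays, weighted_residuals))
print(delta_n.min(), delta_n.max())
```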

  7. The four boxed equations provide the complete expressions for the implementation of the steepest descent iterative algorithm. Let us now see what the expressions would be if we implemented the two basic equations of linear least-squares theory, which, for the present problem, we choose to be $C_{\text{post}} = C_{\text{prior}} - C_{\text{prior}} L^t \left( L\, C_{\text{prior}} L^t + C_{\text{obs}} \right)^{-1} L\, C_{\text{prior}}$ (7.39) and $n_{\text{post}} = n_{\text{prior}} - C_{\text{prior}} L^t \left( L\, C_{\text{prior}} L^t + C_{\text{obs}} \right)^{-1} (L\, n_{\text{prior}} - t_{\text{obs}})$ (7.40). The intermediate quantities indicated by the boxed terms are $T = L\, C_{\text{prior}}$, $T^t = C_{\text{prior}} L^t$, $S = L\, C_{\text{prior}} L^t + C_{\text{obs}}$, $Q = S^{-1}$, $P = T^t Q\, T$ (so that $C_{\text{post}} = C_{\text{prior}} - P$), $\Delta t = L\, n_{\text{prior}} - t_{\text{obs}}$, $\widehat{\Delta t} = Q\, \Delta t$, and $\Delta n = T^t\, \widehat{\Delta t}$.
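In discretized form the two equations reduce to ordinary matrix algebra. The sketch below assumes that L, C_prior, C_obs, n_prior and t_obs are already available as arrays (for instance from a grid discretization like the one sketched earlier); the tiny random setup only serves to make the snippet runnable.

```python
import numpy as np

rng = np.random.default_rng(1)
num_cells, num_rays = 50, 8

L = rng.uniform(0.0, 1.0, size=(num_rays, num_cells))       # stand-in for ray lengths per cell
A = rng.standard_normal((num_cells, num_cells))
C_prior = 1e-4 * (A @ A.T + num_cells * np.eye(num_cells))  # some symmetric positive definite prior
C_obs = 0.01**2 * np.eye(num_rays)
n_prior = np.full(num_cells, 0.5)
t_obs = L @ n_prior + 0.05 * rng.standard_normal(num_rays)

# Equation 7.40: posterior (mean) model.
T = L @ C_prior                       # T = L C_prior
S = T @ L.T + C_obs                   # S = L C_prior L^t + C_obs
Q = np.linalg.inv(S)                  # Q = S^{-1}
delta_t = L @ n_prior - t_obs         # residuals of the prior model
n_post = n_prior - T.T @ (Q @ delta_t)

# Equation 7.39: posterior covariance.
C_post = C_prior - T.T @ Q @ T

print(n_post[:5])
print(np.diag(C_post)[:5])            # posterior variances are (weakly) reduced w.r.t. the prior
```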

  8. It is easy to see that the explicit version of the first equation is $C_{\text{post}}(x, y, x', y') = C_{\text{prior}}(x, y, x', y') - \sum_i \sum_j T^i(x, y)\, Q^{ij}\, T^j(x', y')$ (7.41), where the "tube functions" $T^i(x, y)$ have been introduced in equation 7.37. The $Q^{ij}$ are the entries of the matrix $Q = S^{-1}$ (7.42). And it is easy to see that the matrix $S = L\, C_{\text{prior}} L^t + C_{\text{obs}}$ can be evaluated as $S^{ij} = \int dx \int dy\, L^i(x, y) \int dx' \int dy'\, C_{\text{prior}}(x, y, x', y')\, L^j(x', y') + C^{ij}_{\text{obs}} = \int dx \int dy\, L^i(x, y) \int_{R_j} ds'\, C_{\text{prior}}(x, y, x'(s'), y'(s')) + C^{ij}_{\text{obs}}$ (7.43), i.e., $S^{ij} = \int_{R_i} ds \int_{R_j} ds'\, C_{\text{prior}}(x(s), y(s), x'(s'), y'(s')) + C^{ij}_{\text{obs}}$ (7.44).
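The double ray integral of equation 7.44 can be approximated by a double midpoint sum over samples taken along the two rays; the Gaussian covariance, the rays and the data covariance below are illustrative assumptions.

```python
import numpy as np

def c_prior(x1, y1, x2, y2, sigma=0.01, corr_len=1.5):
    """An assumed Gaussian covariance function C_prior(x1, y1, x2, y2)."""
    return sigma**2 * np.exp(-((x1 - x2)**2 + (y1 - y2)**2) / (2.0 * corr_len**2))

def ray_samples(ray, num_samples=400):
    """Midpoint samples (x(s), y(s)) along a straight ray, with the step length ds."""
    a, b = np.asarray(ray[0], float), np.asarray(ray[1], float)
    ds = np.linalg.norm(b - a) / num_samples
    s = (np.arange(num_samples) + 0.5) / num_samples
    return a + s[:, None] * (b - a), ds

rays = [((0.0, 2.0), (10.0, 2.0)),
        ((0.0, 0.0), (10.0, 10.0)),
        ((4.0, 0.0), (4.0, 10.0))]
C_obs = 0.01**2 * np.eye(len(rays))

S = np.zeros((len(rays), len(rays)))
for i, ri in enumerate(rays):
    pi, dsi = ray_samples(ri)
    for j, rj in enumerate(rays):
        pj, dsj = ray_samples(rj)
        # double integral over s and s' as a double midpoint sum (eq. 7.44)
        cov = c_prior(pi[:, None, 0], pi[:, None, 1], pj[None, :, 0], pj[None, :, 1])
        S[i, j] = cov.sum() * dsi * dsj + C_obs[i, j]

print(S)
```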

  9. Finally, the explicit version of the second least-squares equation is $n_{\text{post}}(x, y) = n_{\text{prior}}(x, y) - \Delta n(x, y)$ (7.45). The explicit version of $\Delta n = T^t\, \widehat{\Delta t}$ is $\Delta n(x, y) = \sum_i T^i(x, y)\, \widehat{\Delta t}^i$ (7.46), where the "tube functions" $T^i(x, y)$ have been introduced above. Remark that $\Delta n(x, y)$, the "correction" applied to $n_{\text{prior}}(x, y)$ to get $n_{\text{post}}(x, y)$, is just a "sum of tubes". Introducing the "residuals" $\Delta t = L\, n_{\text{prior}} - t_{\text{obs}}$, i.e., $\Delta t^i = \int_{R_i} ds\, n_{\text{prior}}(x(s), y(s)) - t^i_{\text{obs}}$ (7.47), the "weighted residuals" are $\widehat{\Delta t} = Q\, \Delta t$ (7.48), the matrix $Q$ having already been introduced in equations 7.42 and 7.44. The last four boxed equations correspond to the actual computations to be done when implementing the second of the linear least-squares equations in the present context.
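To close, a compact end-to-end sketch stringing together equations 7.44 to 7.48: build $S$ by double ray integrals, form $Q = S^{-1}$, compute and weight the residuals of the prior model, and add up the "tubes" to obtain $n_{\text{post}}(x, y)$. The prior model, the covariance parameters, the rays and the "observed" times are all made up for the example.

```python
import numpy as np

def c_prior(x1, y1, x2, y2, sigma=0.01, corr_len=1.5):
    return sigma**2 * np.exp(-((x1 - x2)**2 + (y1 - y2)**2) / (2.0 * corr_len**2))

def ray_samples(ray, num=400):
    a, b = np.asarray(ray[0], float), np.asarray(ray[1], float)
    ds = np.linalg.norm(b - a) / num
    s = (np.arange(num) + 0.5) / num
    return a + s[:, None] * (b - a), ds

n_prior_func = lambda x, y: 0.5 + 0.0 * x                     # homogeneous prior slowness (s/km)
rays = [((0.0, 2.0), (10.0, 2.0)), ((0.0, 5.0), (10.0, 5.0)), ((0.0, 8.0), (10.0, 8.0)),
        ((2.0, 0.0), (2.0, 10.0)), ((5.0, 0.0), (5.0, 10.0)), ((8.0, 0.0), (8.0, 10.0))]
t_obs = np.array([5.10, 5.18, 5.08, 5.09, 5.20, 5.07])        # made-up "observed" times (s)
C_obs = 0.01**2 * np.eye(len(rays))

samples = [ray_samples(r) for r in rays]

# S^{ij} (eq. 7.44) and Q = S^{-1} (eq. 7.42)
S = np.array([[c_prior(pi[:, None, 0], pi[:, None, 1], pj[None, :, 0], pj[None, :, 1]).sum() * dsi * dsj
               for pj, dsj in samples] for pi, dsi in samples]) + C_obs
Q = np.linalg.inv(S)

# residuals of the prior model (eq. 7.47) and weighted residuals (eq. 7.48)
delta_t = np.array([np.sum(n_prior_func(p[:, 0], p[:, 1])) * ds for p, ds in samples]) - t_obs
delta_t_hat = Q @ delta_t

# tube functions (eq. 7.37) on a grid, and the posterior model (eqs. 7.45, 7.46)
x = np.linspace(0.0, 10.0, 101)
X, Y = np.meshgrid(x, x, indexing="ij")
delta_n = np.zeros_like(X)
for (p, ds), w in zip(samples, delta_t_hat):
    for px, py in p:
        delta_n += w * c_prior(px, py, X, Y) * ds
n_post = n_prior_func(X, Y) - delta_n

print(n_post.min(), n_post.max())
```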
