Computation in High Dimensions

Ronald DeVore
Collaborators: Peter Binev, Andrea Bonito, Albert Cohen, Wolfgang Dahmen, Bojan Popov, Guergana Petrova, Przemek Wojtaszczyk

High Dimensional Numerics


The AFFINE MODEL

$a(x, y) = \bar a(x) + \sum_{j=1}^\infty y_j \psi_j(x)$, with $y_j \in [-1, 1]$, $j = 1, 2, \dots$

- Here we put the normalization into the $y_j$, so the behavior of $\|\psi_j\|_{L_\infty(D)}$ will be crucial. We assume the indices are reordered so that $(\|\psi_j\|_{L_\infty(D)})$ is a decreasing sequence.
- Write $u(x, y) := u_a(x)$.
- Stochastic setting: $a(\cdot, \omega)$ is an $L_\infty(D)$-valued random variable on $(\Omega, \rho)$.
- Wiener chaos: choose a basis $(\psi_k)$ and write $a(x, \omega) = \bar a(x) + \sum_{k=1}^\infty y_k(\omega) \psi_k(x)$, normalized so that $\|y_k\|_{L_\infty(\Omega)} = 1$.
- This is embedded in the affine representation; however, the role of the probability measure is lost.
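To make the affine model concrete, here is a minimal sketch (not from the talk) of evaluating a truncated coefficient field. The particular basis $\psi_j(x) = 0.5\, j^{-2} \cos(j\pi x)$ is an assumption, chosen so that the norms $\|\psi_j\|_{L_\infty}$ decrease and the field stays uniformly positive.

```python
import numpy as np

def affine_field(x, y, a_bar=1.0, decay=2.0):
    """Evaluate a(x, y) = a_bar + sum_j y_j * psi_j(x) for finitely many
    active parameters y_j, with the assumed basis
    psi_j(x) = 0.5 * j**(-decay) * cos(j * pi * x), so that
    (||psi_j||_L_inf)_j is decreasing and the field stays uniformly
    positive (sum_j 0.5 * j**(-2) < 1 = a_bar)."""
    j = np.arange(1, len(y) + 1)
    psi = 0.5 * j[:, None] ** (-decay) * np.cos(np.outer(j, np.pi * x))
    return a_bar + y @ psi

x = np.linspace(0.0, 1.0, 5)            # points in the physical domain D
y = np.random.uniform(-1, 1, size=8)    # parameters y_j in [-1, 1]
print(affine_field(x, y))               # one realization of the coefficient
```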

NUMERICAL GOALS

- PARAMETRIC GOAL: given a query $a \in \mathcal A$, quickly compute a good $H_0^1(D, a)$ approximation to $u_a$.
- We want an approximation to the solution map $F : a \mapsto u_a$, or $F : y \mapsto u(\cdot, y)$.
- Given a query $a \in \mathcal A$, quickly evaluate $F(a)$ (respectively $F(y)$).
- $F$ is a Banach-space-valued function of many (possibly infinitely many) variables (parameters).
- STOCHASTIC GOAL: $F$ is a Banach-space-valued random variable.
- Compute stochastic properties of $F$ such as the mean, variance, and higher moments.
- These are referred to as Quantities of Interest (a Monte Carlo sketch follows this list).
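For the stochastic goal, the most naive way to compute such quantities of interest is plain Monte Carlo sampling of the parameters. A minimal sketch, where `solve_pde` is a hypothetical stand-in for any solver of the map $a \mapsto u_a$:

```python
import numpy as np

def monte_carlo_qoi(solve_pde, n_samples, n_params, seed=0):
    """Estimate the mean and variance (pointwise on the solver's grid) of
    F(y) = u(., y) by sampling y uniformly from [-1, 1]^n_params.
    `solve_pde` is a hypothetical placeholder returning u(., y) as an array."""
    rng = np.random.default_rng(seed)
    samples = np.stack([solve_pde(rng.uniform(-1.0, 1.0, n_params))
                        for _ in range(n_samples)])
    return samples.mean(axis=0), samples.var(axis=0)
```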

Two Strategies

- If we do nothing but call a standard (adaptive) FEM solver, then with $n$ computations we would get accuracy $O(n^{-\alpha})$, where $\alpha$ is related to the Besov smoothness of $u_a$ with respect to the physical-domain variable $x$.
- We hope to improve this rate by another method, taking advantage of the smoothness of the manifold $\mathcal K$.
- For example, can we take a few snapshots $u_{a_i}$, $i = 1, \dots, m$, of the manifold $\mathcal K$ and use these in some sort of interpolation formula to compute $u_a$ for any given query $a$?
- If so, where should we take the snapshots?
- What is the improved accuracy?

Smoothness of K

- For reduced modeling to work, the smoothness of $\mathcal K$ is critical.
- The classical perturbation theorem gives $\|u_a - u_{\tilde a}\|_{H_0^1} \le C_0 \|f\|_{H^{-1}} \|a - \tilde a\|_{L_\infty}$ (a short derivation follows this list).
- Actually, much more is true.
- In the affine case, the function $F$ is an analytic (infinitely differentiable) function of its parameters.
- Even in the non-affine case, there are principles of analyticity.
- But let us emphasize that infinite differentiability, in and of itself, is not enough in high dimensions, as we now discuss.
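For completeness, here is the standard one-line derivation of the perturbation bound (supplied here, not on the slides), assuming uniform ellipticity $a, \tilde a \ge r > 0$ on $D$, which gives $C_0 = r^{-2}$:

```latex
% Subtract the two weak formulations
%   \int_D a \,\nabla u_a \cdot \nabla v = \langle f, v \rangle
%                                        = \int_D \tilde a \,\nabla u_{\tilde a} \cdot \nabla v
% and test with v = u_a - u_{\tilde a}:
\begin{aligned}
r \,\| \nabla (u_a - u_{\tilde a}) \|_{L_2}^2
  &\le \int_D (\tilde a - a)\, \nabla u_{\tilde a} \cdot \nabla (u_a - u_{\tilde a}) \\
  &\le \| a - \tilde a \|_{L_\infty} \, \| \nabla u_{\tilde a} \|_{L_2}\,
       \| \nabla (u_a - u_{\tilde a}) \|_{L_2}.
\end{aligned}
% Dividing through and using \| \nabla u_{\tilde a} \|_{L_2} \le r^{-1} \| f \|_{H^{-1}}
% gives the stated bound with C_0 = r^{-2}.
```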

Curse of Dimensionality

- Since we want to approximate a function ($F$) of many variables, we have to be wary of the "Curse".
- The "Curse of Dimensionality" says that classical approaches will not work.
- Classical methods of approximation are based on the assumption that $F$ has smoothness (of order $s$); then with $n$ computations we can only capture $F$ to accuracy $C_m n^{-s/m}$, where $m$ is the number of variables.
- When $m$ is large, $C_m$ is exponentially large (a worked example follows this list).
- Even when $F$ is analytic, the curse rears its ugly head.
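To see what the rate $C_m n^{-s/m}$ costs in practice, invert it for a target accuracy $\varepsilon$ (an illustration, with the constant generously set to $C_m = 1$):

```latex
% With C_m = 1, the rate n^{-s/m} reaches accuracy \varepsilon only when
n \;\ge\; \varepsilon^{-m/s}.
% Example: m = 20 variables, smoothness s = 2, target \varepsilon = 10^{-2}:
%   n \ge (10^{-2})^{-10} = 10^{20} computations,
% and the hidden constant C_m only makes this worse.
```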

Example (Novak-Wozniakowski)

- To drive home the debilitating effect of this curse, consider the following example for scalar-valued functions.
- Let $\Omega := [0,1]^m$ and $\mathcal W := \{ w : \|D^\nu w\|_{L_\infty} \le 1 \ \text{for all } \nu \}$.
- For any subspace $V$ of dimension $2^{m/2}$ we have $\operatorname{dist}(\mathcal W, V)_{L_\infty(\Omega)} \ge 1/2$.
- So if $m = 100$, we would need $2^{50} \approx 10^{15}$ queries or computations to approximate a general $w \in \mathcal W$ with even the crudest accuracy.
- We will see that for reduced modeling the key is that not only is our function $F$ very smooth, but in addition the parameters are not democratic.
- This gets reflected in a certain decay of the derivatives.

Reduced Basis Methods

Basic steps:
- Find a good low-dimensional linear space $V_n \subset H_0^1(D)$ to approximate the elements of $\mathcal K$.
- Recall that $\mathcal K$ is the range of $F$.
- Given a query $a$, find a numerical method to approximate $u_a$ from $V_n$.
- The latter is done by some projection onto $V_n$.

Implementation

- Finding a good space $V_n$ is an offline computation. In practice, this space is usually generated by snapshots $u_{a_i}$ of $\mathcal K$.
- Computing the approximation of $u_a$ from $V_n$ is an online computation and must be fast.
- The Galerkin projection is error-optimal, but its bottleneck is the assembly of the stiffness matrix $M$ and the resulting linear algebra.
- In the affine case $M = \sum_{i=1}^m y_i M_i$, and this helps (see the sketch below).
- An alternative is to use a form of interpolation of the snapshots, e.g. polynomial interpolation. This would be very fast online, but the size of the Lebesgue constant is now a critical issue.
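A minimal sketch of the offline/online split under the affine assumption: the reduced matrices $B^T M_i B$ are precomputed once, so the online Galerkin solve is an $n \times n$ system whose cost is independent of the full discretization. The names and setup here are hypothetical.

```python
import numpy as np

def offline_reduce(M_terms, f_full, B):
    """Offline: project each parameter-independent stiffness term M_i and
    the load vector onto the reduced basis B (columns span V_n). The term
    for the mean field a_bar can be folded in as a fixed first entry of y."""
    return [B.T @ M @ B for M in M_terms], B.T @ f_full

def online_solve(M_reduced, f_reduced, y):
    """Online: assemble M(y) = sum_i y_i * M_i directly in the reduced
    space (an n x n system) and return the Galerkin coefficients of the
    approximation to u_a in V_n."""
    M_y = sum(y_i * M_i for y_i, M_i in zip(y, M_reduced))
    return np.linalg.solve(M_y, f_reduced)
```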

[Figure: the Kolmogorov width of K, shown as the distance from K to an n-dimensional space X_n]

Kolmogorov widths

- It is useful to know the optimal performance we can expect from reduced basis methods.
- Since they use approximation from a linear subspace, the optimal performance is governed by the Kolmogorov width.
- Kolmogorov widths measure how accurately we can approximate $\mathcal K$ with $n$-dimensional linear spaces:
  $d_n(\mathcal K)_X := \inf_{\dim(X_n) = n} \sup_{w \in \mathcal K} \inf_{v \in X_n} \|w - v\|_X$
- Recall that reduced basis methods construct an $n$-dimensional linear space (usually spanned by snapshots of the manifold) to be used with a Galerkin projection.
- So $d_n(\mathcal K)_{H_0^1(D)}$ tells us the best error we can expect.

How well can we do?

So the main issues to understand are:
- How fast does $d_n(\mathcal K)_{H_0^1(D)}$ tend to 0?
- Does the RBM selection of snapshots give close to this optimal rate of performance?
- Can we give a numerical implementation with online costs reflecting the optimal rate?

What is the n-width of K?

- Let us start with the first question: what do we know about the $n$-width of $\mathcal K$?
- The following discussion is for the affine model.
- Using the analyticity of $F$, it is easy to show that $d_n(\mathcal K)_{H_0^1(D)}$ decays exponentially for the AFFINE MODEL when the number of parameters is finite.
- However, the constants in these estimates grow exponentially in the number of parameters.
- So we need a finer analysis of the effect of the number of parameters.
- Cohen-DeVore-Schwab prove results for the AFFINE MODEL with infinitely many parameters and thereby avoid dependence on the number of parameters.

CDS Results

For the affine model $a(x, y) = \bar a(x) + \sum_{j=1}^\infty y_j \psi_j(x)$:

- Theorem: $(\|\psi_j\|_{L_\infty})_{j \ge 1} \in \ell_p \;\Longrightarrow\; d_n(\mathcal K)_{H_0^1} \le C_0\, n^{-1/p + 1}$.
- Note that this result is dimension independent.
- This is proved by establishing the analytic expansion $u(x, y) = \sum_{\nu \in \Lambda} \phi_\nu(x)\, y^\nu$ with $(\|\phi_\nu\|_{H_0^1}) \in \ell_p$, where $\phi_\nu = \frac{D^\nu F(0)}{\nu!}$.
- A good $n$-dimensional subspace corresponds to choosing the $n$ terms whose coefficients $\|\phi_\nu\|_{H_0^1(D)}$ are largest.
- So the snapshots are all taken at $y = 0$; these are not necessarily on the manifold.
- The coefficients of the basis expansion are simple monomials (a sketch of evaluating such an expansion follows this list).
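A sketch of how such a sparse Taylor approximation would be evaluated once the $n$ largest coefficient functions have been computed; `phi` and `Lambda` are hypothetical placeholders for the stored coefficients and the selected index set.

```python
import numpy as np

def sparse_taylor(phi, Lambda, y):
    """Evaluate u(., y) ~ sum_{nu in Lambda} phi_nu * y^nu, where Lambda
    holds the n multi-indices with the largest ||phi_nu||_{H^1_0(D)} and
    phi maps each index tuple nu to its coefficient function, tabulated
    on a fixed spatial grid."""
    y = np.asarray(y, dtype=float)
    u = np.zeros_like(next(iter(phi.values())))
    for nu in Lambda:
        # Monomial y^nu = prod_j y_j ** nu_j (finitely many nu_j nonzero).
        u += phi[nu] * np.prod(y ** np.asarray(nu))
    return u
```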

Numerical Recipe

- The above leads to a numerical recipe (Chkifa-Cohen-DeVore-Schwab) based on the following:
- The set $\Lambda_n$ of monomial indices can be chosen with the Lower Set Property: if $\nu \in \Lambda_n$ and $\mu \le \nu$ (componentwise), then $\mu \in \Lambda_n$.
- If $\Lambda_n$ is the current set of indices, it is expanded at the next iteration (a sketch of the bookkeeping follows this list).
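A small sketch of the lower-set bookkeeping such a recipe needs: a test for the lower set property and the margin of indices by which $\Lambda_n$ can be expanded while remaining lower. Representing multi-indices as fixed-length tuples is an assumption made here.

```python
def is_lower_set(Lambda):
    """Lambda (a set of equal-length multi-index tuples) is lower if
    decreasing any positive coordinate of any member stays inside it."""
    S = set(Lambda)
    return all(nu[:j] + (nu[j] - 1,) + nu[j + 1:] in S
               for nu in S for j in range(len(nu)) if nu[j] > 0)

def margin(Lambda):
    """Indices mu outside Lambda whose addition keeps it a lower set:
    the candidates by which Lambda_n is expanded at the next iteration."""
    S = set(Lambda)
    cand = {nu[:j] + (nu[j] + 1,) + nu[j + 1:]
            for nu in S for j in range(len(nu))}
    return sorted(mu for mu in cand - S if is_lower_set(S | {mu}))

# Example: in two parameters, start from {(0, 0)} and expand.
print(margin({(0, 0)}))   # [(0, 1), (1, 0)]
```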
