On a new orthonormal basis for RBF native spaces and its fast computation (PowerPoint presentation)


slide-1
SLIDE 1

On a new orthonormal basis for RBF native spaces and its fast computation

Stefano De Marchi and Gabriele Santin Torino: June 11, 2014

slide-2
SLIDE 2

Outline

1. Introduction
2. Change of basis
3. WSVD Basis
4. The new basis
5. Numerical Results
6. Further work
7. References

2 of 33

slide-5
SLIDE 5

Introduction

RBF Approximation

1. Data: Ω ⊂ R^n, a set X = {x_1, …, x_N} ⊂ Ω and a test function f, sampled as f_1, …, f_N, where f_i = f(x_i).

2. Approximation setting: a kernel K = K_ε, positive definite and radial; its native space N_K(Ω), i.e. the Hilbert space where K is the reproducing kernel; the finite-dimensional subspace N_K(X) = span{K(·, x) : x ∈ X} ⊂ N_K(Ω).

Aim: find s_f ∈ N_K(X) s.t. s_f ≈ f.

3 of 33

slide-6
SLIDE 6

Introduction

Problem setting and questions

Problem: the standard (data-dependent) basis of translates of N_K(X) is unstable and not flexible.

Question 1: Is it possible to find a “better” basis U of N_K(X)?

Question 2: How to embed information about K and Ω in the basis U?

Question 3: Can we extract U′ ⊂ U s.t. s′_f is as good as s_f?

4 of 33
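To make the instability concrete, here is a minimal numerical sketch (not part of the talk; the Gaussian kernel with ε = 1 and equispaced points on [−1, 1] are assumed choices) showing how the condition number of the kernel matrix A_ij = K(x_i, x_j) blows up as N grows:

```python
import numpy as np

def gaussian_kernel(X, Y, eps=1.0):
    # Gaussian RBF: K(x, y) = exp(-eps^2 |x - y|^2)
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-eps**2 * d2)

# condition number of the Gram matrix of the translate basis
conds = []
for N in (5, 10, 20):
    x = np.linspace(-1, 1, N)
    A = gaussian_kernel(x, x)
    conds.append(np.linalg.cond(A))
print(conds)
```

Already for a few dozen points the system is numerically singular, which is the instability the slide refers to.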

slide-7
SLIDE 7

The “natural” basis

The “natural” (data-independent) basis for Hilbert spaces (Mercer’s theorem, 1909)

Let K be a continuous, positive definite kernel on a bounded Ω ⊂ R^n. Then K has an eigenfunction expansion with non-negative coefficients, the eigenvalues, s.t.

K(x, y) = Σ_{j≥0} λ_j ϕ_j(x) ϕ_j(y), ∀ x, y ∈ Ω.

Moreover,

λ_j ϕ_j(x) = ∫_Ω K(x, y) ϕ_j(y) dy =: T[ϕ_j](x), ∀ x ∈ Ω, j ≥ 0.

- {ϕ_j}_{j≥0} is orthonormal in N_K(Ω);
- {ϕ_j}_{j≥0} is orthogonal in L²(Ω), with ‖ϕ_j‖²_{L²(Ω)} = λ_j → 0;
- Σ_{j≥0} λ_j = K(0, 0) |Ω| (the operator is of trace class).

Notice: it is not always possible to find the functions ϕ_j explicitly [Fasshauer, McCourt 2012, for GRBF].

5 of 33

slide-8
SLIDE 8

Change of basis

Notation

Letting Ω ⊂ R^n and X = {x_1, …, x_N} ⊂ Ω:

- T_X = {K(·, x_i), x_i ∈ X}: the standard basis of translates;
- U = {u_i ∈ N_K(Ω), i = 1, …, N}: another basis s.t. span(U) = span(T_X) =: N_K(X).

At x ∈ Ω, T_X and U can be expressed as the row vectors

T(x) = [K(x, x_1), …, K(x, x_N)] ∈ R^N,  U(x) = [u_1(x), …, u_N(x)] ∈ R^N.

We also need the scalar products

(f, g)_{L²(Ω)} := ∫_Ω f(x) g(x) dx ≈ Σ_{j=1}^N w_j f(x_j) g(x_j) =: (f, g)_{ℓ²_w(X)}.

6 of 33
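The discrete inner product above can be checked numerically; a small sketch (assumed setup, not from the slides: equal midpoint-rule weights on Ω = [−1, 1]):

```python
import numpy as np

# discrete weighted inner product (f, g)_{ℓ²_w(X)} ≈ (f, g)_{L²(Ω)}
# on Ω = [-1, 1], using the composite midpoint rule (equal weights)
N = 200
x = np.linspace(-1, 1, N, endpoint=False) + 1.0 / N   # midpoints
w = np.full(N, 2.0 / N)                               # w_j sums to |Ω| = 2

f = np.sin(np.pi * x)
g = np.sin(np.pi * x)
ip = np.sum(w * f * g)   # approximates ∫_{-1}^{1} sin²(πx) dx = 1
print(ip)
```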

slide-10
SLIDE 10

Change of basis

General idea

Some useful results [Pazouki, Schaback 2011]

Change of basis:
Let A_ij = K(x_i, x_j) ∈ R^{N×N}. Any basis U arises from a factorization A = V_U · C_U^{-1}, where V_U = (u_j(x_i))_{1≤i,j≤N} and C_U is the matrix of change of basis s.t. U(x) = T(x) · C_U.

Some consequences of this factorization:

1. The interpolant P_{f,X} at x can be written as

P_{f,X}(x) = Σ_{j=1}^N Λ_j(f) u_j(x) = U(x) Λ(f), ∀ x ∈ Ω,

where Λ(f) = [Λ_1(f), …, Λ_N(f)]^T ∈ R^N is a column vector of values of linear functionals defined by

Λ(f) = C_U^{-1} · A^{-1} · f_X = V_U^{-1} · f_X,

where f_X is the column vector of the evaluations of f at X.

7 of 33

slide-11
SLIDE 11

Change of basis

Consequences

2. If U is an N_K(Ω)-orthonormal basis, we get the stability estimate

|P_{f,X}(x)| ≤ √K(0, 0) ‖f‖_K, ∀ x ∈ Ω. (1)

In particular, for fixed x ∈ Ω and f ∈ N_K, the values ‖U(x)‖_2 and ‖Λ(f)‖_2 are the same for all N_K(Ω)-orthonormal bases, independently of X:

‖U(x)‖_2 ≤ √K(0, 0),  ‖Λ(f)‖_2 ≤ ‖f‖_K. (2)

8 of 33

slide-14
SLIDE 14

Change of basis

Other results [Pazouki, Schaback 2011]

Change of basis:
- Each N_K(Ω)-orthonormal basis U arises from an orthonormal decomposition A = B^T · B with B = C_U^{-1}, V_U = B^T = (C_U^{-1})^T.
- Each ℓ²(X)-orthonormal basis U arises from a decomposition A = Q · B with Q = V_U, Q^T Q = I, B = C_U^{-1} = Q^T A.

Notice: the best bases in terms of stability are the N_K(Ω)-orthonormal ones!

Q1: Is it possible to find a “better” basis? Yes, we can!

9 of 33

slide-15
SLIDE 15

WSVD Basis

Main idea: I

Q2: How to embed information on K and Ω in U?

Symmetric Nyström method [Atkinson, Han 2001]

The main idea for the construction of our basis is to discretize the “natural” basis introduced in Mercer’s theorem. To this aim, consider on Ω a cubature rule (X, W), that is, a set of distinct points X = {x_j}_{j=1}^N ⊂ Ω and a set of positive weights W = {w_j}_{j=1}^N, N ∈ N, such that

∫_Ω f(y) dy ≈ Σ_{j=1}^N f(x_j) w_j, ∀ f ∈ N_K(Ω). (3)

10 of 33

slide-16
SLIDE 16

WSVD Basis

Main idea: II

Thus, the operator T_K can be evaluated on X as

λ_j ϕ_j(x_i) = ∫_Ω K(x_i, y) ϕ_j(y) dy, i = 1, …, N, ∀ j > 0,

and then discretized using the cubature rule by

λ_j ϕ_j(x_i) ≈ Σ_{h=1}^N K(x_i, x_h) ϕ_j(x_h) w_h, i, j = 1, …, N. (4)

Letting W = diag(w_j), it suffices to solve the following discrete eigenvalue problem in order to find the approximation of the eigenvalues and eigenfunctions (evaluated on X) of T_K:

λ v = (A · W) v. (5)

11 of 33

slide-17
SLIDE 17

WSVD Basis

Main idea: III

The matrix A · W in (5) is in general not symmetric. A solution is to rewrite (4), using the fact that the weights are positive, as

λ_j (√w_i ϕ_j(x_i)) = Σ_{h=1}^N (√w_i K(x_i, x_h) √w_h)(√w_h ϕ_j(x_h)), ∀ i, j = 1, …, N, (6)

and then to consider the corresponding scaled eigenvalue problem

λ (√W · v) = (√W · A · √W)(√W · v),

which is equivalent to the previous one, now involving the symmetric and positive definite matrix A_W := √W · A · √W.

12 of 33
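The scaled eigenvalue problem can be sketched in a few lines (assumed setup, not from the talk: Gaussian kernel and equal cubature weights on Ω = [−1, 1]):

```python
import numpy as np

def kernel(X, Y, eps=1.0):
    # Gaussian RBF: K(x, y) = exp(-eps^2 |x - y|^2), K(0, 0) = 1
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

N = 40
x = np.linspace(-1, 1, N)        # cubature nodes on Ω = [-1, 1]
w = np.full(N, 2.0 / N)          # positive weights with Σ w_i = |Ω| = 2

A = kernel(x, x)
sqrtw = np.sqrt(w)
AW = sqrtw[:, None] * A * sqrtw[None, :]   # A_W = √W · A · √W, symmetric p.d.

# unitary diagonalization of A_W: eigenvalues approximate the Mercer λ_j
lam, Q = np.linalg.eigh(AW)
lam = lam[::-1]                  # descending order

# discrete eigenfunctions at the nodes: scale back by √W^{-1}
V = Q[:, ::-1] / sqrtw[:, None]

# trace check: Σ λ_j = trace(A_W) = K(0,0) Σ w_i = K(0,0) |Ω|
print(lam[:3], lam.sum())
```

The trace identity mirrors the Mercer property Σ λ_j = K(0, 0) |Ω| from the “natural” basis slide.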

slide-18
SLIDE 18

WSVD Basis

Definition

{λ_j, ϕ_j}_{j>0} are then approximated by the eigenvalues/eigenvectors of A_W := √W · A · √W. This matrix is normal, so a singular value decomposition of A_W is a unitary diagonalization.

Definition:
A weighted SVD basis U is a basis for N_K(X) s.t.

V_U = √W^{-1} · Q · Σ,  C_U = √W · Q · Σ^{-1},

since A = V_U C_U^{-1}, where A_W = Q · Σ² · Q^T is the SVD.

Here Σ_jj = σ_j, j = 1, …, N, and σ_1² ≥ · · · ≥ σ_N² > 0 are the singular values of A_W.

13 of 33
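This definition can be transcribed into code directly (a sketch under the same assumed setup as before: Gaussian kernel, equal weights on [−1, 1]; the truncation index M below only avoids dividing by numerically zero singular values):

```python
import numpy as np

def kernel(X, Y, eps=1.0):
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

N = 30
x = np.linspace(-1, 1, N)
w = np.full(N, 2.0 / N)
sqrtw = np.sqrt(w)

A = kernel(x, x)
AW = sqrtw[:, None] * A * sqrtw[None, :]

# for the symmetric p.d. AW, eigh gives the SVD AW = Q Σ² Q^T
s2, Q = np.linalg.eigh(AW)
s2, Q = s2[::-1], Q[:, ::-1]               # σ_j², descending
sigma = np.sqrt(np.maximum(s2, 0))

# V_U = √W^{-1} Q Σ (values u_j(x_i)),  C_U = √W Q Σ^{-1} (U(x) = T(x)·C_U)
M = int(np.sum(s2 > 1e-8))                 # keep well-resolved columns only
VU = (Q[:, :M] * sigma[:M]) / sqrtw[:, None]
CU = (Q[:, :M] / sigma[:M]) * sqrtw[:, None]

# sanity checks: A·C_U = V_U, and V_U^T W V_U = Σ² (ℓ²_w-orthogonality)
resid_basis = np.linalg.norm(A @ CU - VU)
resid_gram = np.linalg.norm(VU.T @ (w[:, None] * VU) - np.diag(s2[:M]))
print(M, resid_basis, resid_gram)
```

The Gram check makes the ‖u_j‖²_{ℓ²_w(X)} = σ_j² property of the next slides visible numerically.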

slide-20
SLIDE 20

WSVD Basis

Properties

This basis is in fact an approximation of the “natural” one (provided w_i > 0, Σ_{i=1}^N w_i = |Ω|).

Properties of the new basis U (cf. [De Marchi, Santin 2013]):

- u_j(x) = (1/σ_j²) Σ_{i=1}^N w_i u_j(x_i) K(x, x_i) ≈ (1/σ_j²) T_K[u_j](x), ∀ 1 ≤ j ≤ N, ∀ x ∈ Ω;
- it is N_K(Ω)-orthonormal;
- it is ℓ²_w(X)-orthogonal, with ‖u_j‖²_{ℓ²_w(X)} = σ_j², ∀ u_j ∈ U;
- Σ_{j=1}^N σ_j² = K(0, 0) |Ω|.

14 of 33

slide-22
SLIDE 22

WSVD Basis

Approximation

Interpolant: s_f(x) = Σ_{j=1}^N (f, u_j)_K u_j(x), ∀ x ∈ Ω.

WDLS: s_f^M := argmin{ ‖f − g‖_{ℓ²_w(X)} : g ∈ span{u_1, …, u_M} }.

Weighted Discrete Least Squares as truncation:
Let f ∈ N_K(Ω), 1 ≤ M ≤ N. Then, ∀ x ∈ Ω,

s_f^M(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²_w(X)} / (u_j, u_j)_{ℓ²_w(X)}] u_j(x)
         = Σ_{j=1}^M [(f, u_j)_{ℓ²_w(X)} / σ_j²] u_j(x)
         = Σ_{j=1}^M (f, u_j)_K u_j(x).

Q3: Can we extract U′ ⊂ U s.t. s′_f is as good as s_f? Yes we can: take U′ = {u_1, …, u_M}.

15 of 33
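The truncated WDLS approximant can be sketched as follows (assumed setup, not from the talk: Gaussian kernel, equal weights on [−1, 1], and a test function built from kernel translates so that it lies in the native space):

```python
import numpy as np

def kernel(X, Y, eps=1.0):
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

N = 30
x = np.linspace(-1, 1, N)
w = np.full(N, 2.0 / N)
sqrtw = np.sqrt(w)

A = kernel(x, x)
s2, Q = np.linalg.eigh(sqrtw[:, None] * A * sqrtw[None, :])
s2, Q = s2[::-1], Q[:, ::-1]

# truncate where σ_j² falls below a tolerance (the sub-basis idea)
M = int(np.sum(s2 > 1e-10))
sigma = np.sqrt(s2[:M])
VU = (Q[:, :M] * sigma) / sqrtw[:, None]      # values u_j(x_i)
CU = (Q[:, :M] / sigma) * sqrtw[:, None]      # U(x) = T(x)·C_U

# test function in the native space: a combination of kernel translates
f = lambda t: kernel(t, np.array([0.3, -0.5])) @ np.array([1.0, 2.0])
fX = f(x)

# WDLS coefficients: (f, u_j)_{ℓ²_w(X)} / σ_j²
coef = (VU.T @ (w * fX)) / s2[:M]

# evaluate s_f^M on a fine grid and compare with f
t = np.linspace(-1, 1, 200)
sMf = kernel(t, x) @ CU @ coef
err = np.max(np.abs(sMf - f(t)))
print(M, err)
```

Despite discarding the smallest-σ_j² components, the truncated approximant reproduces f to high accuracy, which is the point of Q3.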

slide-23
SLIDE 23

WSVD Basis

Approximation II

If we define the pseudo-cardinal functions as ℓ̃_i = s_{ℓ_i}^M, we get

s_f^M(x) = Σ_{i=1}^N f(x_i) ℓ̃_i(x),  ℓ̃_i(x) = Σ_{j=1}^M [u_j(x_i)/σ_j²] u_j(x).

Generalized Power Function and Lebesgue constant:
If f ∈ N_K(Ω),

|f(x) − s_f^M(x)| ≤ P_{K,X}^{(M)}(x) ‖f‖_{N_K(Ω)}, ∀ x ∈ Ω,

where

[P_{K,X}^{(M)}(x)]² = K(0, 0) − Σ_{j=1}^M [u_j(x)]².

Moreover, ‖s_f^M‖_∞ ≤ Λ̃_X ‖f_X‖_∞.

16 of 33

slide-24
SLIDE 24

WSVD Basis

Sub-basis

How can we extract U′ ⊂ U s.t. s′_f is as good as s_f? Idea:

Figure: Gaussian kernel, σ_j² on equispaced points of the square, for ε = 1, 4, 9.

- recall that ‖u_j‖²_{ℓ²_w(X)} = σ_j² → 0;
- we can choose M s.t. σ²_{M+1} < tol;
- we don’t need u_j, j > M.

17 of 33

slide-25
SLIDE 25

WSVD Basis

An Example: I

Figure: The domains used in the numerical experiments with an example of the corresponding sample points. From left to right: the lens Ω1 (trigonometric-gaussian points), the disk Ω2 (trigonometric-gaussian points) and the square Ω3 (product Gauss-Legendre points).

18 of 33

slide-26
SLIDE 26

WSVD Basis

An Example: II

             ε = 1    ε = 4    ε = 9
Gaussian       100      340      500
IMQ            180      580      580
Matern3        460      560      580

Table: Optimal M for different kernels and shape parameters, i.e. the indexes such that the weighted least-squares approximant s_f^M provides the best approximation of the function f(x, y) = cos(20(x + y)) on the disk with center C = (1/2, 1/2) and radius R = 1/2.

19 of 33

slide-28
SLIDE 28

WSVD Basis

An Example: III

Figure: Franke’s test function, lens, IMQ kernel, ε = 1, RMSE (WSvd vs. Std). Left: complete basis. Right: truncated basis with σ²_{M+1} < 10^{-17}.

Problem: We have to compute the whole basis before truncation!
Solution: Krylov methods.

20 of 33

slide-29
SLIDE 29

The new basis

Arnoldi iteration

- Consider A_ij = K(x_i, x_j) (which is symmetric) and b_i = f(x_i), 1 ≤ i, j ≤ N;
- define the Krylov subspace K_M(A, b) = span{b, Ab, …, A^{M−1}b}, M ≪ N;
- compute an o.n. basis {φ_1, …, φ_M} of K_M(A, b) and form the N × M matrix Φ_M = [φ_1, …, φ_M];
- define the (tridiagonal) matrix H_M = Φ_M^T A Φ_M, which represents the projection of A onto K_M(A, b).

The Arnoldi iteration gives A Φ_M = Φ_{M+1} H̄_M, where H̄_M = [H_M; h_{M+1,M} e_M^T] is (M + 1) × M. In practice h_{M+1,M} ≈ 0, so that K_{M+1}(A, b) = K_M(A, b).

21 of 33
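A sketch of the symmetric Arnoldi (Lanczos) iteration with full reorthogonalization; the random SPD matrix is only a stand-in for the kernel matrix A (an assumption for illustration):

```python
import numpy as np

def lanczos(A, b, M):
    # symmetric Arnoldi (Lanczos) with full reorthogonalization:
    # builds an o.n. basis Phi of K_M(A, b) and the (M+1) x M
    # matrix H with A Phi[:, :M] = Phi @ H
    N = len(b)
    Phi = np.zeros((N, M + 1))
    H = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for m in range(M):
        v = A @ Phi[:, m]
        h = Phi[:, :m + 1].T @ v                       # projection coefficients
        v -= Phi[:, :m + 1] @ h
        v -= Phi[:, :m + 1] @ (Phi[:, :m + 1].T @ v)   # reorthogonalize
        H[:m + 1, m] = h
        H[m + 1, m] = np.linalg.norm(v)
        if H[m + 1, m] < 1e-14:                        # breakdown: K_{m+1} = K_m
            break
        Phi[:, m + 1] = v / H[m + 1, m]
    return Phi, H

# a small symmetric stand-in for the kernel matrix A
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
A = B @ B.T
b = rng.standard_normal(50)

M = 10
Phi, H = lanczos(A, b, M)
resid = np.linalg.norm(A @ Phi[:, :M] - Phi @ H)
ortho = np.linalg.norm(Phi.T @ Phi - np.eye(M + 1))
# H is tridiagonal up to roundoff because A is symmetric
offtri = np.linalg.norm(np.triu(H, 2))
print(resid, ortho, offtri)
```

The tridiagonality check makes concrete why, for symmetric A, the Arnoldi recurrence collapses to the short Lanczos recurrence.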

slide-30
SLIDE 30

The new basis

Approximation of the SVD

Consider a SVD H̄_M = U_M Σ²_M V_M^T, where U_M ∈ R^{(M+1)×(M+1)}, V_M ∈ R^{M×M}, Σ²_M = [Σ̃²_M, 0]^T and Σ̃²_M = diag(σ²_{M,1}, …, σ²_{M,M}).

Approximate SVD [Novati, Russo 2013]:
Let Ū_M = Φ_{M+1} U_M and V̄_M = Φ_M V_M; then

A V̄_M = Ū_M Σ²_M,  A Ū_M = V̄_M (Σ²_M)^T,

and the first M singular values of A are well approximated by the σ²_{M,j}.

If M = N, in exact arithmetic the triplet (Φ_{M+1} Ũ_M, Σ̃²_M, Φ_M V_M) is a SVD of A, where Ũ_M is U_M without the last column.

22 of 33

slide-31
SLIDE 31

The new basis

Definition

Recall: A Φ_M = Φ_{M+1} H̄_M, H̄_M = U_M Σ²_M V_M^T, Σ²_M = [Σ̃²_M, 0]^T, and Ũ_M is U_M without the last column.

Definition:
The sub-basis U_M is the set {u_1, …, u_M} ⊂ N_K(X) defined by

V_U = Φ_{M+1} Ũ_M Σ̃_M,  C_U = Φ_M V_M Σ̃_M^{-1}.

23 of 33
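Putting the pieces together, a sketch of the lifted approximate SVD (the Gaussian kernel, the nodes and the sample vector b are assumed choices; the Lanczos routine is the same sketch as before):

```python
import numpy as np

def lanczos(A, b, M):
    # Lanczos with full reorthogonalization: A Phi[:, :M] = Phi @ H
    N = len(b)
    Phi = np.zeros((N, M + 1))
    H = np.zeros((M + 1, M))
    Phi[:, 0] = b / np.linalg.norm(b)
    for m in range(M):
        v = A @ Phi[:, m]
        h = Phi[:, :m + 1].T @ v
        v -= Phi[:, :m + 1] @ h
        v -= Phi[:, :m + 1] @ (Phi[:, :m + 1].T @ v)
        H[:m + 1, m] = h
        H[m + 1, m] = np.linalg.norm(v)
        Phi[:, m + 1] = v / H[m + 1, m]
    return Phi, H

def kernel(X, Y, eps=1.0):
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

N, M = 80, 15
x = np.linspace(-1, 1, N)
A = kernel(x, x)
b = np.sin(2 * x) + x**2          # f sampled at X (any generic f)

Phi, H = lanczos(A, b, M)
# SVD of the small (M+1) x M matrix H: H = U_M Σ²_M V_M^T
UM, s2M, VMt = np.linalg.svd(H)
# lift the singular vectors back to R^N through the Krylov basis
Ubar = Phi @ UM                   # N x (M+1)
Vbar = Phi[:, :M] @ VMt.T         # N x M

# exact consequence of the Arnoldi relation: A Vbar = Ubar Σ²_M
S = np.zeros((M + 1, M))
S[:M, :M] = np.diag(s2M)
resid = np.linalg.norm(A @ Vbar - Ubar @ S)
print(s2M[:3], resid)
```

Only the small matrix H is decomposed, which is why this route avoids computing the full WSVD basis before truncation.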

slide-32
SLIDE 32

The new basis

Properties

Properties [De Marchi, Santin 2014]:
The sub-basis U_M has the following properties for each 1 ≤ M ≤ N:

1. it is ℓ²(X)-orthogonal, with ‖u_j‖²_{ℓ²(X)} = σ²_{M,j}, 1 ≤ j ≤ M;
2. it is near-orthonormal in N_K(Ω);
3. if M = N it is the SVD basis U (Φ_M = I).

About point 2: it means that (u_i, u_j) = δ_ij + r_ij^{(M)}, where (R^{(M)})_ij := r_ij^{(M)} is a rank-one matrix for 1 ≤ M < N, and r_ij^{(M)} = 0 if M = N.

24 of 33

slide-33
SLIDE 33

The new basis

Properties

Using this basis we get, ∀ f ∈ N_K(Ω),

s′_f(x) = Σ_{j=1}^M [(f, u_j)_{ℓ²(X)} / σ²_{M,j}] u_j(x) = Σ_{j=1}^M (f, u_j)_K u_j(x), ∀ x ∈ Ω

(and P_{K,X}^{(M)}(x), Λ̃_X as before).

25 of 33

slide-34
SLIDE 34

Numerical Results

Stopping rule: I

Fix τ > 0.

- A first choice: stop when the subdiagonal entry (H̄_M)_{M+1,M}, or the smallest Ritz value σ²_{M,M}, drops below τ;
- a better choice is the following:

|Σ_{j=1}^M (H_M)_jj − N| < τ, (7)

which works for functions lying in the native space of the kernel.

26 of 33
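A sketch of rule (7) inside a Lanczos loop (assumed setup, not from the talk: Gaussian kernel with K(0, 0) = 1 on [−1, 1], so trace(A) = N; the sampled f is an arbitrary smooth choice):

```python
import numpy as np

def kernel(X, Y, eps=1.0):
    return np.exp(-eps**2 * (X[:, None] - Y[None, :]) ** 2)

N = 100
x = np.linspace(-1, 1, N)
A = kernel(x, x)                # K(0,0) = 1, so trace(A) = N
b = np.exp(x) * np.sin(3 * x)   # f sampled at X

# Lanczos with the trace-based stopping rule (7):
# stop at the first M with |Σ_{j≤M} (H_M)_jj − N| < τ
tau = 1e-8
Phi = [b / np.linalg.norm(b)]
tr = 0.0
M = N
for m in range(N):
    v = A @ Phi[-1]
    tr += Phi[-1] @ v           # running Σ_j (H_M)_jj = trace(Φ^T A Φ)
    if abs(tr - N) < tau:       # rule (7): captured trace ≈ trace(A)
        M = m + 1
        break
    V = np.column_stack(Phi)    # full reorthogonalization
    v -= V @ (V.T @ v)
    v -= V @ (V.T @ v)
    beta = np.linalg.norm(v)
    if beta < 1e-14:            # breakdown: invariant subspace reached
        M = m + 1
        break
    Phi.append(v / beta)
print(M, tr)
```

The loop stops at an M far below N, illustrating why the rule gives the truncation index without computing the whole basis.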

slide-35
SLIDE 35

Numerical Results

Stopping rule: II

The decay of the residual described in (7), compared to the corresponding RMSE, goes as in the Figure below with τ = 1.e−15.

Figure: Left: h_{m+1,m}, σ²_{m,m} and svd(A) versus m. Right: RMSE versus m. Gaussian kernel, ε = 1, square [−1, 1]², N = 200 e.s. points, f ∈ N_K(Ω), with f(x) = K(x, y1) + 2K(x, y2) − 2K(x, y3) + 3K(x, y4), y1 = (0, −1.2), y2 = (−0.4, 0.5), y3 = (−0.4, 1.1), y4 = (1.2, 1.3).

27 of 33

slide-36
SLIDE 36

Numerical Results

CPU time comparison

For each N we compute the optimal M using the new algorithm with tolerance τ = 1e−14.

N            121      225      361      529      729      961      1225     1521
optimal M    100      110      113      114      114      115      115      116
RMSE (new)   5.0e-08  3.4e-10  1.0e-10  6.7e-11  6.4e-11  5.5e-11  4.7e-11  3.4e-11
RMSE (WSVD)  5.0e-08  3.3e-09  1.3e-09  1.1e-09  1.1e-09  8.3e-10  7.8e-10  7.9e-10
Time (new)   1.7e-01  3.4e-01  6.1e-01  1.0e+00  1.6e+00  2.6e+00  3.7e+00  6.5e+00
Time (WSVD)  3.3e-01  7.2e-01  1.5e+00  4.2e+00  1.0e+01  2.5e+01  5.5e+01  1.1e+02

Table: Comparison between the WSVD basis and the new basis. Computational time in seconds and corresponding RMSE, restricted to n = 49, …, 1600 equally spaced points.

28 of 33

slide-37
SLIDE 37

Approximated Power function and Lebesgue function

Figure: Approximated power function (left, logarithmic plot) and approximated Lebesgue function (right) on Ω1.

29 of 33

slide-38
SLIDE 38

Numerical Results

Another example

Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1.

30 of 33

slide-39
SLIDE 39

Numerical Results

Lebesgue Constant and Power Function

Λ̃_X = 160.2017

Figure: IMQ kernel, ε = 1, cut disk, N = 1600 random points, M = 260, f(x, y) = exp(|x − y|) − 1. Left: Lebesgue function. Right: power function.

31 of 33

slide-41
SLIDE 41

Further work

More analysis has to be done:

- a better stopping rule;
- understand the decay rate of P_{K,X}^{(M)};
- understand the growth rate of Λ̃_X;
- understand how X and ε influence s′_f.

Thank you for your attention!

32 of 33

slide-42
SLIDE 42

References

- R. K. Beatson, J. B. Cherrie, C. T. Mouat. Fast fitting of radial basis functions: methods based on preconditioned GMRES iteration. Adv. Comput. Math., 11: 253–270, 1999.
- S. De Marchi, G. Santin. A new stable basis for radial basis function interpolation. J. Comput. Appl. Math., 253: 1–13, 2013.
- S. De Marchi, G. Santin. Fast computation of orthonormal bases for RBF spaces through Krylov space methods. Preprint, 2014.
- G. Fasshauer, M. J. McCourt. Stable evaluation of Gaussian radial basis function interpolants. SIAM J. Sci. Comput., 34: A737–A762, 2012.
- A. C. Faul, M. J. D. Powell. Krylov subspace methods for radial basis function interpolation. Chapman & Hall/CRC Res. Notes Math., 420: 115–141, 1999.
- P. Novati, M. R. Russo. A GCV based Arnoldi-Tikhonov regularization method. BIT Numer. Math., DOI: 10.1007/s10543-013-0447-z, 2013.
- M. Pazouki, R. Schaback. Bases for kernel-based spaces. J. Comput. Appl. Math., 236: 575–588, 2011.
- M. Vianello, G. Da Fies. Algebraic cubature on planar lenses and bubbles. Dolomites Res. Notes Approx., 5: 7–12, 2012.

33 of 33