SLIDE 1

Geometric Whitney problem and inverse problems

Matti Lassas

in collaboration with

Charles Fefferman, Sergei Ivanov, Yaroslav Kurylev, Hariharan Narayanan

Finnish Centre of Excellence in Inverse Modelling and Imaging

2018-2025

SLIDE 2

Outline:

◮ Classical and geometric Whitney problems
◮ Surface interpolation
◮ Riemannian manifolds in inverse problems and other applications
◮ Manifold interpolation: Construction of a manifold from distances with small errors
◮ Learning a manifold from distances with large random noise

SLIDE 3

Whitney problem with errors

Let K ⊂ R^n be an arbitrary set, h : K → R, m ∈ Z_+, and ε > 0. Does there exist a function F ∈ C^m(R^n) such that

sup_{x∈K} |F(x) − h(x)| ≤ ε ?

If such an extension F exists, what is its optimal C^m-norm?

SLIDE 4

Problem A: Construction of a surface in Rd from a point cloud.

Assume that we are given a set X ⊂ R^d and n < d. When can one construct a smooth n-dimensional surface M ⊂ R^d that approximates X? How can the surface M be constructed when X is given? Figures by Matlab and M. Rouhani.

SLIDE 5

Problem B: Construction of a manifold from a discrete metric space.

Let (X, d_X) be a metric space. We ask when there exists a Riemannian manifold (M, g) such that

◮ the curvature and injectivity radius of M are bounded, and
◮ X approximates M well in the Gromov-Hausdorff topology.

How can the manifold (M, g) be constructed when X is given?

SLIDE 6

Unsolved extension problems

In the above problems a neighbourhood of the data points “covers” the whole manifold M (there are no holes). The following extension problem for metric spaces is unsolved: Let (X, d_X) be a metric space. Is there a Riemannian manifold (M, g) such that X can be embedded isometrically in M?

A special case is the boundary rigidity problem: Let ∂M be the boundary of a compact manifold and f : ∂M × ∂M → R. When can we construct a Riemannian metric g on M such that dist_(M,g)(y_1, y_2) = f(y_1, y_2) for all y_1, y_2 ∈ ∂M?

SLIDE 7

Example: Imaging of the interior of the Earth

Let M ⊂ R^3 and

d_g(x, y) = travel time of waves from x to y, x, y ∈ M.

Inverse problem: Can we determine the metric g in M when we know d_g(z_1, z_2) for z_1, z_2 ∈ ∂M, that is, the travel times of earthquakes between points on the surface of the Earth? When g = c(x)^{−2} δ_jk and c(x) is close to 1, these data determine g uniquely (Burago-Ivanov 2010).

Fig. by Bozdag and Pugmire.

SLIDE 8

Outline:

◮ Classical and geometric Whitney problems
◮ Surface interpolation
◮ Riemannian manifolds in inverse problems and other applications
◮ Manifold interpolation: Construction of a manifold from distances with small errors
◮ Learning a manifold from distances with large random noise

SLIDE 9

Example: Manifold learning from point cloud data

Consider a data set X = {x_j}_{j=1}^N ⊂ R^d.

The ISOMAP face data set contains N = 2370 images of faces with d = 2914 pixels.

Question: Define d_X(x_j, x_k) = |x_j − x_k|_{R^d} using the Euclidean distance. Can we find a submanifold of R^d that approximates X?
SLIDE 10

Distance of two subsets

For a metric space Y and A ⊂ Y, the ε-neighborhood U_ε(A) of A is

U_ε(A) = {y ∈ Y : d(y, A) < ε}, ε > 0.

We say that A is ε-dense in Y if U_ε(A) = Y.

For a metric space Y and sets A, B ⊂ Y, the Hausdorff distance between A and B in Y is

d_H(A, B) = max{ sup_{x∈A} d(x, B), sup_{y∈B} d(y, A) }.
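For finite point sets in R^d this maximin formula can be evaluated directly. A minimal numpy sketch (our own illustration, not part of the talk):

import numpy as np

def hausdorff(A, B):
    # d_H(A, B) = max( sup_{x in A} d(x, B), sup_{y in B} d(y, A) )
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # all pairwise |x - y|
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.random.rand(200, 2)
B = A + 0.01 * np.random.randn(200, 2)
print(hausdorff(A, B))  # small, since B is a small perturbation of A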
SLIDE 11

Let E = R^d and let B^E_r(x) be the ball in E with center x and radius r.

Definition. Let X ⊂ E, n ∈ Z_+, and r, δ > 0. We say that X is δ-close to n-flats at scale r if for any x ∈ X there exists an n-dimensional affine space A_x ⊂ E through x such that

d_H( X ∩ B^E_r(x), A_x ∩ B^E_r(x) ) ≤ δ.

Note: A bounded smooth n-surface in R^d is (Cr^2)-close to n-flats at scale r.
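Numerically, one can probe this condition by fitting A_x with principal component analysis on X ∩ B^E_r(x) and measuring the residual. The sketch below is our own illustration: it fits the flat through the local centroid rather than exactly through x, and it measures only the deviation of X from A_x, which is one half of the Hausdorff distance.

import numpy as np

def flatness_deficit(X, x, r, n):
    # candidate delta at x: distance of X ∩ B^E_r(x) from its best-fit n-flat
    P = X[np.linalg.norm(X - x, axis=1) < r]
    c = P.mean(axis=0)
    _, _, V = np.linalg.svd(P - c, full_matrices=False)  # rows of V: principal directions
    resid = (P - c) @ V[n:].T                            # components normal to the n-flat
    return np.linalg.norm(resid, axis=1).max()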

SLIDE 12

Surface interpolation

Theorem. Let E be a separable Hilbert space, n ∈ Z_+, r > 0, and δ < δ_0(r, n). Suppose that X ⊂ E is δ-close to n-flats at scale r. Then there exists a closed (or complete) n-dimensional smooth submanifold M ⊂ E such that:

1. d_H(X, M) ≤ 5δ.
2. The second fundamental form of M at every point is bounded by C_n δ r^{−2}.
3. The normal injectivity radius of M is at least r/3.

In particular, if δ < Cr^2, the surface M has bounded curvature.

SLIDE 13

SLIDE 14

Algorithm SurfaceInterpolation: We consider the case r = 1 and assume that X ⊂ E = R^d is finite. We suppose that X is δ-close to n-flats at scale r. We implement the following steps:

1. Construct a maximal (1/100)-separated set X_0 = {q_i}_{i=1}^k ⊂ X.
2. For every point q_i ∈ X_0, let A_i ⊂ E be an affine subspace that approximates X ∩ B_1(q_i) near q_i. Let P_i : E → E be the orthogonal projector onto A_i.
3. Let ψ ∈ C^∞_0([−1/2, 1/2]) be 1 on [0, 1/3] and let ϕ_i : E → E be

ϕ_i(x) = μ_i(x) P_i(x) + (1 − μ_i(x)) x, μ_i(x) = ψ(|x − q_i|).

Define f : E → E by f = ϕ_k ∘ ϕ_{k−1} ∘ … ∘ ϕ_1.
4. Construct the image M = f(U_δ(X)).

The output is the n-dimensional surface M ⊂ E; a schematic implementation follows below.
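A schematic numpy rendering of Steps 1-3, applied to the sample points themselves in place of U_δ(X). The greedy separated net, the PCA-based affine fits, and the explicit cutoff ψ are our own illustrative stand-ins for the objects in the algorithm:

import numpy as np

def separated_net(X, sep=1/100):
    # Step 1: greedy maximal sep-separated subset X0 = {q_i} of X
    Q = []
    for x in X:
        if all(np.linalg.norm(x - q) >= sep for q in Q):
            Q.append(x)
    return Q

def affine_fit(X, q, n, rad=1.0):
    # Step 2: PCA fit of an n-dimensional affine space A_i near q_i
    P = X[np.linalg.norm(X - q, axis=1) < rad]
    c = P.mean(axis=0)
    _, _, V = np.linalg.svd(P - c, full_matrices=False)
    return c, V[:n]                          # point on A_i, orthonormal basis of A_i

def psi(t):
    # smooth cutoff equal to 1 on [0, 1/3] and 0 for t >= 1/2
    if t <= 1/3:
        return 1.0
    if t >= 1/2:
        return 0.0
    u = 6.0 * (t - 1/3)                      # rescale (1/3, 1/2) to (0, 1)
    g = lambda s: np.exp(-1.0 / s)
    return g(1 - u) / (g(1 - u) + g(u))

def f_map(x, charts):
    # Step 3: f = phi_k o ... o phi_1, blending x toward each local plane A_i
    for q, c, B in charts:
        mu = psi(np.linalg.norm(x - q))
        proj = c + (x - c) @ B.T @ B         # orthogonal projection P_i(x) onto A_i
        x = mu * proj + (1 - mu) * x
    return x

# usage: charts = [(q, *affine_fit(X, q, n)) for q in separated_net(X)]
#        M_points = np.array([f_map(x, charts) for x in X])  # stands in for f(U_delta(X))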

SLIDE 15

Outline:

◮ Classical and geometric Whitney problems
◮ Surface interpolation
◮ Riemannian manifolds in inverse problems and other applications
◮ Manifold interpolation: Construction of a manifold from distances with small errors
◮ Learning a manifold from distances with large random noise

SLIDE 16

Some earlier methods for manifold learning

Let {x_j}_{j=1}^J ⊂ R^d be points on a submanifold M ⊂ R^d, d > n.

◮ ‘Multi Dimensional Scaling’ (MDS) finds an embedding of the data points into R^m, n < m < d, by minimising a cost function

min_{y_1,…,y_J ∈ R^m} Σ_{j,k=1}^J ( |y_j − y_k|_{R^m} − d_jk )^2, d_jk = |x_j − x_k|_{R^d}.

◮ ‘Isomap’ makes a graph of the K nearest neighbours and computes graph distances d^G_jk that approximate the distances d_M(x_j, x_k) along the surface. Then MDS is applied.

Note that if there is F : M → R^m such that |F(x) − F(x′)| = d_M(x, x′), then the curvature of M is zero.

Figure by Tenenbaum et al., Science 2000.
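Classical MDS admits a spectral solution via double centering of the squared distance matrix. The sketch below is our own illustration; it minimizes the closely related “strain” functional rather than the stress written above, and Isomap would feed it graph shortest-path distances d^G_jk in place of d_jk (e.g., via scipy.sparse.csgraph.shortest_path on a K-nearest-neighbour graph):

import numpy as np

def classical_mds(D, m):
    # D: (J, J) matrix of pairwise distances d_jk; returns points y_1..y_J in R^m
    J = len(D)
    C = np.eye(J) - np.ones((J, J)) / J
    G = -0.5 * C @ (D**2) @ C                 # Gram matrix by double centering
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:m]             # m largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))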

SLIDE 17

Construction of a manifold from discrete data.

Let (X, d_X) be a (discrete) metric space. We want to approximate it by a Riemannian manifold (M*, g*) so that

◮ (X, d_X) and (M*, d_{g*}) are almost isometric,
◮ the curvature and the injectivity radius of M* are bounded.

Note that X is an “abstract metric space” and not a set of points in R^d, and we want to learn the intrinsic metric of the manifold.

SLIDE 18

Distance of two metric spaces

Let (X, d_X) and (Y, d_Y) be (compact) metric spaces. Their Gromov-Hausdorff distance is

d_GH(X, Y) = inf_Z { d_H(X, Y) : (Z, d_Z) is a metric space, X ⊂ Z, Y ⊂ Z }.

More practical definition: d_GH(X, Y) is the infimum of all ε > 0 for which there are ε-dense sequences (x_j)_{j=1}^J ⊂ X and (y_j)_{j=1}^J ⊂ Y such that

|d_X(x_j, x_k) − d_Y(y_j, y_k)| ≤ ε for all j, k = 1, 2, …, J.
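For two finite metric spaces of the same small size, the mismatch in the practical definition can be evaluated by brute force over all matchings of the points (here the nets are the whole spaces, which are 0-dense). A sketch, our own illustration; the general problem is combinatorially hard, so this is only viable for a handful of points:

import numpy as np
from itertools import permutations

def gh_mismatch(DX, DY):
    # DX, DY: (J, J) distance matrices; smallest eps with
    # |d_X(x_j, x_k) - d_Y(y_p(j), y_p(k))| <= eps over all bijections p
    J = len(DX)
    return min(np.abs(DX - DY[np.ix_(p, p)]).max()
               for p in map(list, permutations(range(J))))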

SLIDE 19

Example 1: Non-Euclidean metric in data sets

Consider a data set X = {x_j}_{j=1}^N ⊂ R^d.

The ISOMAP face data set contains N = 2370 images of faces with d = 2914 pixels.

Question: Define d_X(x_j, x_k) using a Wasserstein distance related to optimal transport. Does (X, d_X) approximate a manifold, and how can this manifold be constructed?

SLIDE 20

Example 2: Travel time distances of points

Surface waves produced by earthquakes travel near the boundary of the Earth. The observations of several earthquakes give information on the travel times d_T(x, y) between points x, y ∈ S^2.

Question: Can one determine the Riemannian metric associated to surface waves from the travel times with measurement errors?

Figure by Su-Woodward-Dziewonski, 1994.

SLIDE 21

Example 3: An inverse problem for a manifold

Consider a physical body M ⊂ R^3 with an unknown wave speed c(x). We can use boundary measurements to construct the distances d_g(x_j, x_k) in a discrete set X = {x_j ∈ M : j = 1, 2, …, N} (Belishev-Kurylev 1992, Bingham-Kurylev-L.-Siltanen 2008). The solution of Problem B gives a construction of a smooth Riemannian manifold from (X, d_X). This Riemannian metric is close to the travel time metric g determined by c(x).

SLIDE 22

Outline:

◮ Classical and geometric Whitney problems
◮ Surface interpolation
◮ Riemannian manifolds in inverse problems and other applications
◮ Manifold interpolation: Construction of a manifold from distances with small errors
◮ Ideas of the proofs and applications in geometry
◮ Learning a manifold from distances with large random noise

SLIDE 23

Construction of a manifold from discrete data.

Let (X, d_X) be a (discrete) metric space. We aim to answer the question whether there exists a Riemannian manifold (M*, g*) that approximates X so that

◮ d_GH( (X, d_X), (M*, d_{g*}) ) < ε,
◮ the curvature and the injectivity radius of M* are bounded.

Note that X is an “abstract metric space” and not a set of points in R^d, and we want to learn the intrinsic metric of the manifold.

SLIDE 24

A local condition

Let B^X_r(x) denote the ball of the metric space X and let B^{R^n}_r(0) denote the ball of R^n.

Definition. Let X be a metric space, r > δ > 0, n ∈ Z_+. We say that X is δ-close to R^n at scale r if, for any x ∈ X,

d_GH( B^X_r(x), B^{R^n}_r(0) ) < δ.

Note: A compact smooth n-manifold is (Cr^3)-close to R^n at scale r.

SLIDE 25

A global condition

Definition. Let X = (X, d) be a metric space and δ > 0.

A δ-chain in X is a sequence x_1, x_2, …, x_N ∈ X such that d(x_j, x_{j+1}) < δ for all j.

A sequence x_1, x_2, …, x_N ∈ X is said to be δ-straight if

d(x_i, x_j) + d(x_j, x_k) < d(x_i, x_k) + δ for all 1 ≤ i < j < k ≤ N.

We say that X is δ-intrinsic if for every pair of points x, y ∈ X there is a δ-straight δ-chain x_1, …, x_N with x_1 = x and x_N = y.
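Both conditions are directly checkable for a finite metric space given as a distance matrix; a small sketch (our own illustration):

def is_delta_chain(d, chain, delta):
    # consecutive points closer than delta: d(x_j, x_{j+1}) < delta
    return all(d[chain[j]][chain[j + 1]] < delta for j in range(len(chain) - 1))

def is_delta_straight(d, chain, delta):
    # near-additivity along the chain: d(x_i,x_j) + d(x_j,x_k) < d(x_i,x_k) + delta
    N = len(chain)
    return all(d[chain[i]][chain[j]] + d[chain[j]][chain[k]]
               < d[chain[i]][chain[k]] + delta
               for i in range(N) for j in range(i + 1, N) for k in range(j + 1, N))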

SLIDE 26

Manifold learning

Theorem. Let X be a metric space with a bounded diameter, n ∈ Z_+, r > 0, and 0 < δ < δ_0(r, n). Suppose that X is δ-intrinsic and δ-close to R^n at scale r. Then there exists a compact n-dimensional Riemannian manifold M such that:

1. X and M satisfy d_GH(X, M) < C δ r^{−1} diam(X).
2. The sectional curvature Sec(M) of M satisfies |Sec(M)| ≤ C δ r^{−3}.
3. The injectivity radius of M is bounded below by r/2.

In particular, if δ < Cr^3, the constructed manifold M has bounded curvature.


SLIDE 28

Rough idea of the proof of manifold interpolation

SLIDE 29

Assume that we are given a finite metric space (X, d). We do the following steps:

1. Select a maximal (r/100)-separated set X_0 = {q_i}_{i=1}^J ⊂ X.
2. Choose disjoint balls D_i = B_r(p_i) ⊂ R^n for i = 1, 2, …, J and construct a δ-isometry f_i : B^X_r(q_i) → D_i.
3. For all q_i, q_j ∈ X_0 such that d(q_i, q_j) < r, find affine transition maps A_ij : R^n → R^n such that

|A_ij(f_i(x)) − f_j(x)| < Cδ for x ∈ B^X_r(q_i) ∩ B^X_r(q_j).

When i = j, we define A_jj = Id. (A least-squares fit of such a transition map is sketched below.)
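In practice a transition map A_ij can be fitted over the overlap by least squares. A sketch; the least-squares fit is our own illustrative choice, the proof only needs some affine map with defect O(δ):

import numpy as np

def fit_affine_transition(Fi, Fj):
    # Fi, Fj: (m, n) images f_i(x), f_j(x) of the same overlap points x.
    # Fit A_ij(u) = u @ L + b minimizing sum |A_ij(f_i(x)) - f_j(x)|^2.
    m, n = Fi.shape
    A, *_ = np.linalg.lstsq(np.hstack([Fi, np.ones((m, 1))]), Fj, rcond=None)
    L, b = A[:n], A[n]
    defect = np.abs(Fi @ L + b - Fj).max()   # should be of size C*delta
    return (lambda u: u @ L + b), defect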

SLIDE 30
4. Let Φ ∈ C^∞_0(R^n) be 1 near zero, and let Ω = ⋃_i D_i. Define smooth indicator functions ψ_ij(x) = Φ(A_ij(x) − p_j). Define a map F_j : Ω → R^{n+1} as follows: for x ∈ D_i = B_r(p_i), put

F_j(x) = ( ψ_ij(x) · (A_ij(x) − p_j), ψ_ij(x) ) if d(q_i, q_j) < r, and F_j(x) = 0 otherwise.

5. Denote E = R^m, m = (n + 1)J, and define

F : Ω → E, F(x) = (F_j(x))_{j=1}^J.

SLIDE 31
6. Construct the local patches Σ_i = F(D_i) ⊂ E.
7. Apply the algorithm SurfaceInterpolation to the set ⋃_i Σ_i to construct a surface M ⊂ E.
8. Let P_M be the normal projection onto M.
9. Construct a metric tensor g on M by pushing forward the Euclidean metric g_e on D_i under the maps P_M ∘ F and computing a weighted average of the obtained metric tensors.

The output is the surface M ⊂ E and the metric g on it.

Next we consider applications of the above theorem to the reconstruction of an unknown manifold.

SLIDE 32

Theorem (Fefferman, Ivanov, Kurylev, L., Narayanan 2015). Let 0 < δ < c_1(n, K) and let M be a compact n-dimensional manifold with |Sec(M)| ≤ K and inj(M) > 2(δ/K)^{1/3}. Let X = {x_j}_{j=1}^N be δ-dense in M and let d : X × X → R_+ ∪ {0} satisfy

|d(x, y) − d_M(x, y)| ≤ δ, x, y ∈ X.

Given the values d(x_j, x_k), j, k = 1, …, N, one can construct a compact n-dimensional Riemannian manifold (M*, g*) such that:

1. There is a diffeomorphism F : M* → M satisfying

1/L ≤ d_M(F(x), F(y)) / d_{M*}(x, y) ≤ L for x, y ∈ M*, L = 1 + C_n K^{1/3} δ^{2/3}.

2. |Sec(M*)| ≤ C_n K.
3. inj(M*) ≥ min{ (C_n K)^{−1/2}, (1 − C_n K^{1/3} δ^{2/3}) inj(M) }.
SLIDE 33

Outline:

◮ Classical and geometric Whitney problems
◮ Surface interpolation
◮ Riemannian manifolds in inverse problems and other applications
◮ Manifold interpolation: Construction of a manifold from distances with small errors
◮ Learning a manifold from distances with large random noise

SLIDE 34

Random sample points and random errors

Manifolds with bounded geometry: Let n ≥ 2 be an integer and K > 0, D > 0, i_0 > 0. Let (M, g) be a compact Riemannian manifold of dimension n such that

i) ‖Sec_M‖_{L^∞(M)} ≤ K, (1)
ii) diam(M) ≤ D,
iii) inj(M) ≥ i_0.

We consider measurements at randomly sampled points: Let X_j, j = 1, 2, …, N, be independent samples from a probability distribution μ on M satisfying

0 < c_min ≤ dμ/dVol_g ≤ c_max.

SLIDE 35

Definition. Let X_j, j = 1, 2, …, N, be independent, identically distributed (i.i.d.) random variables having distribution μ. Let σ > 0, β > 1, and let η_jk be i.i.d. random variables satisfying

E η_jk = 0, E(η_jk^2) = σ^2, E e^{|η_jk|} = β.

In particular, Gaussian noise satisfies these conditions. We assume that all random variables η_jk and X_j are independent. We consider noisy measurements

D_jk = d_M(X_j, X_k) + η_jk.
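A toy instance of this measurement model, used again in the sketches below: M is the unit circle, μ is uniform, and the noise is Gaussian (all concrete choices here are our own):

import numpy as np

rng = np.random.default_rng(0)
N, sigma = 500, 0.1
theta = rng.uniform(0, 2 * np.pi, N)             # X_j ~ mu, uniform on the circle
arc = np.abs(theta[:, None] - theta[None, :])
dM = np.minimum(arc, 2 * np.pi - arc)            # intrinsic distances d_M(X_j, X_k)
D = dM + sigma * rng.standard_normal((N, N))     # noisy measurements D_jk (each entry
                                                 # treated as a separate measurement)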


SLIDE 41

Theorem (Fefferman, Ivanov, L., Narayanan 2019). Let n ≥ 3 and let D, K, i_0, c_min, c_max, σ, β > 0 be given. Then there are δ_0, C_0 and C_1 such that the following holds: Let δ ∈ (0, δ_0), θ ∈ (0, 1/2), and let (M, g) be a compact manifold satisfying bounds (1). Then, with probability 1 − θ, σ^2 and the noisy distances D_jk = d_M(X_j, X_k) + η_jk, j, k ≤ N, of N randomly chosen points, where

N ≥ C_0 δ^{−3n} ( log(1/θ) + log(1/δ) ),

determine a Riemannian manifold (M*, g*) such that:

1. There is a diffeomorphism F : M* → M satisfying

1/L ≤ d_M(F(x), F(y)) / d_{M*}(x, y) ≤ L for all x, y ∈ M*, where L = 1 + C_1 δ.

2. The sectional curvature Sec_{M*} of M* satisfies |Sec_{M*}| ≤ C_1 K.
3. The injectivity radius inj(M*) of M* is close to inj(M).
SLIDE 43

For z ∈ M, let r_z : M → R be the distance function from z, r_z(x) = d_M(z, x), x ∈ M. For y, z ∈ M, we consider the “rough distance function”

κ(y, z) = ‖r_y − r_z‖^2_{L^2(M)} = ∫_M |d_M(y, x) − d_M(z, x)|^2 dμ(x).

Lemma. There is a constant c_0 ∈ (0, 1) such that

c_0^2 d_M(y, z)^2 ≤ ‖r_y − r_z‖^2_{L^2(M,dμ)} ≤ d_M(y, z)^2, y, z ∈ M.

That is, the map R : z ↦ r_z is a bi-Lipschitz embedding R : M → R(M) ⊂ L^2(M).
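On the circle with uniform μ (the toy model above), κ and the bounds of the Lemma can be checked by Monte Carlo; a sketch under those assumptions of ours:

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 2 * np.pi, 20000)     # samples from mu for the L^2(M) integral

def d_circle(a, b):
    d = np.abs(a - b)
    return np.minimum(d, 2 * np.pi - d)

def kappa(y, z):
    # kappa(y, z) = int_M |d_M(y, x) - d_M(z, x)|^2 dmu(x), Monte Carlo estimate
    return np.mean((d_circle(y, x) - d_circle(z, x)) ** 2)

y, z = 0.3, 1.4
print(kappa(y, z), d_circle(y, z) ** 2)  # kappa <= d_M^2 by the triangle inequality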

SLIDE 44

Lemma (Hoeffding’s inequality). Let Z_1, …, Z_N be N independent, identically distributed copies of a random variable Z whose range is [0, 1]. Then, for ε > 0, we have

P( |(1/N) Σ_{j=1}^N Z_j − EZ| ≤ ε ) ≥ 1 − 2 exp(−2Nε^2).
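A quick empirical sanity check of the bound (our own illustration, with uniform Z so that EZ = 1/2):

import numpy as np

rng = np.random.default_rng(2)
N, eps, trials = 1000, 0.05, 5000
Z = rng.uniform(0, 1, (trials, N))                   # i.i.d. copies with range [0, 1]
dev = np.abs(Z.mean(axis=1) - 0.5)                   # |empirical mean - EZ|
print((dev <= eps).mean(), 1 - 2 * np.exp(-2 * N * eps**2))  # observed freq >= bound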
SLIDE 45

We consider three sets S_1, S_2, S_3 ⊂ {X_j}, where N_i = #S_i satisfy N_1 > N_2 > N_3. We call S_1 = {X_1, …, X_{N_1}} the densest net, S_2 the medium dense net, and S_3 the coarse net. We give an algorithm to construct (M*, g*) from noisy data.

Step 1: For X_j, X_k ∈ S_2 in the “medium dense net”, we compute

κ_app(X_j, X_k) = (1/N_1) Σ_{ℓ=1}^{N_1} |D_jℓ − D_kℓ|^2 − 2σ^2,

where the sum is taken over the “densest net” S_1.
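With a noisy matrix D as in the toy model above, Step 1 is one vectorized average. Here the nets are index sets into the sample, which we take nested, S3 ⊂ S2 ⊂ S1 (our own bookkeeping convention):

import numpy as np

def kappa_app(D, S1, S2, sigma):
    # kappa_app(X_j, X_k) = (1/N1) * sum_{l in S1} |D_jl - D_kl|^2 - 2*sigma^2
    R = D[np.ix_(S2, S1)]                        # rows: medium net, columns: densest net
    diff = R[:, None, :] - R[None, :, :]         # D_jl - D_kl for j, k in S2
    return (diff ** 2).mean(axis=2) - 2 * sigma**2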

SLIDE 46

Denote κ(X_j, X_k) = ‖r_{X_j} − r_{X_k}‖^2_{L^2(M)}. A simple calculation shows

E( |D_jℓ − D_kℓ|^2 | X_j, X_k ) = ‖r_{X_j} − r_{X_k}‖^2_{L^2(M)} + 2σ^2.

We recall that for X_j, X_k ∈ S_2,

κ_app(X_j, X_k) = (1/N_1) Σ_{ℓ=1}^{N_1} |D_jℓ − D_kℓ|^2 − 2σ^2.

Thus Hoeffding’s inequality yields the following:

Lemma. Let L > D + 1 and ε > 0. If |η_jk| < L almost surely, then

P( |κ_app(X_j, X_k) − κ(X_j, X_k)| ≤ ε ) ≥ 1 − 2 exp(−(1/8) N_1 L^{−4} ε^2).

SLIDE 47

Recall that the function κ(y, z) = ‖r_y − r_z‖^2_{L^2(M)} ≈ κ_app(y, z) is a rough distance function:

c_0^2 d_M(y, z)^2 ≤ κ(y, z) ≤ d_M(y, z)^2.

Let W(y, ρ) and W_app(y, ρ) be the sets

W(y, ρ) = {z ∈ M : κ(y, z) < ρ^2}, W_app(y, ρ) = {z ∈ M : κ_app(y, z) < ρ^2}.

We have B_M(y, ρ) ⊂ W(y, ρ) ⊂ B_M(y, (1/c_0)ρ).

SLIDE 48

For y_1, y_2 ∈ M, we define the averaged distances

d_ρ(y_1, y_2) = (1/μ(W(y_1, ρ))) ∫_{W(y_1,ρ)} d_M(z, y_2) dμ(z).

Step 2: For X_j, X_j′ ∈ S_3, where S_3 is the coarse net, compute

d_ρ^app(X_j, X_j′) = (1/#(S_2 ∩ W_app(X_j, ρ))) Σ_{X_k ∈ S_2 ∩ W_app(X_j, ρ)} D_kj′.

There is δ_1 = δ_1(ρ, θ) such that

P[ ∀X_j, X_j′ ∈ S_3 : |d_ρ^app(X_j, X_j′) − d_M(X_j, X_j′)| < δ_1 ] ≥ 1 − θ.
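Continuing the bookkeeping of the κ_app sketch (nested index sets, kapp computed on S2 × S2), Step 2 averages the raw measured distances over the approximate ball W_app; a sketch:

import numpy as np

def d_rho_app(D, kapp, S2, S3_in_S2, rho):
    # S3_in_S2: positions of the coarse net S3 inside S2 (nested nets assumed)
    n3 = len(S3_in_S2)
    out = np.zeros((n3, n3))
    for a, j in enumerate(S3_in_S2):
        W = np.where(kapp[j] < rho**2)[0]        # S2 ∩ W_app(X_j, rho)
        for b, jp in enumerate(S3_in_S2):
            out[a, b] = np.mean([D[S2[k], S2[jp]] for k in W])
    return out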

SLIDE 49

Summarizing, for the points S_3 = {y_1, y_2, …, y_{N_3}} we find d_ρ^app(y_j, y_j′) such that

|d_ρ^app(y_j, y_j′) − d_M(y_j, y_j′)| < δ_1

with a large probability.

Step 3: Using the deterministic results with small errors, we find a smooth manifold (M*, g*) using the net S_3 and the approximate distances d_ρ^app(y_1, y_2) of y_1, y_2 ∈ S_3.

SLIDE 50

Generalization with missing data

Recall that D_jk = d_M(X_j, X_k) + η_jk. We can assume that we are given

D_jk^(partial data) = D_jk if Y_jk = 1, ‘missing’ if Y_jk = 0,

where Y_jk ∈ {0, 1} are independent random variables with P(Y_jk = 1 | X_j, X_k) = Φ(X_j, X_k), and Φ : M × M → R is some (unknown) function such that there is a smooth non-increasing function h : [0, ∞) → [0, 1] so that

c_1 h(d_M(x, y)) ≤ Φ(x, y) ≤ c_2 h(d_M(x, y)).

SLIDE 51

Thank you for your attention!