SLIDE 1

On the Smallest Enclosing Riemannian Balls

— On Approximating the Riemannian 1-Center —

http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/

Marc Arnaudon¹   Frank Nielsen²

¹Université de Bordeaux, France
²École Polytechnique & Sony CSL, e-mail: Frank.Nielsen@acm.org

Computational Geometry 46(1): 93–104 (2013); arXiv:1101.4718

© 2013-14 Frank Nielsen, École Polytechnique & Sony Computer Science Laboratories

slide-2
SLIDE 2

Introduction: Euclidean Smallest Enclosing Balls

Given a d-dimensional point set P = {p₁, ..., pₙ}, find the "smallest" ball B = Ball(c, r) (smallest with respect to volume ≡ radius ≡ inclusion) fully covering P:

c∗ = argmin_{c∈ℝᵈ} max_{i=1,...,n} ‖c − pᵢ‖

◮ unique Euclidean circumcenter c∗, the SEB [19]
◮ non-differentiable optimization problem [10]
◮ c∗ lies on the farthest-point Voronoi diagram

SLIDE 3

Euclidean smallest enclosing balls (SEBs)

◮ 1857: d = 2, smallest enclosing ball of P = {p₁, ..., pₙ} (Sylvester [16])

◮ Randomized expected linear-time algorithm [19, 5] in fixed dimension (but hidden constant exponential in d)

◮ Core-set [3] approximation: (1 + ε)-approximation in O(dn/ε²) time in arbitrary dimension, O(dn/ε + (1/ε^4.5) log(1/ε)) [7]

◮ Many other algorithms and heuristics [14, 9, 17], etc.

SEB also known as Minimum Enclosing Ball (MEB), minimax center, 1-center, bounding (hyper)sphere, etc. → Applications in computer graphics (collision detection with ball cover proxies [15]), in machine learning (Core Vector Machines [18]), etc.

SLIDE 4

Optimization and core-sets [3]

Let c(P) denote the circumcenter of the SEB of P and r(P) its radius.
Given ε > 0, an ε-core-set is a subset C ⊆ P such that P ⊆ Ball(c(C), (1 + ε) r(C)), i.e., expanding SEB(C) by a factor 1 + ε fully covers P.
Core-sets of optimal size ⌈1/ε⌉ exist, independent of the dimension d and of n. Note that a combinatorial basis for the SEB has size from 2 to d + 1 [19].
→ Core-sets find many applications for problems in large dimensions.

SLIDE 5

Euclidean SEBs from core-sets [2]

Bădoiu-Clarkson algorithm based on core-sets [2, 3] (BCA):

◮ Initialize the center c₁ ∈ P = {p₁, ..., pₙ}, and
◮ iteratively update the current center using the rule

c_{i+1} ← c_i + (f_i − c_i)/(i + 1)

where f_i denotes the farthest point of P from c_i: f_i = p_s with s = argmax_{j=1..n} ‖c_i − p_j‖.

⇒ gradient-descent method
⇒ (1 + ε)-approximation after ⌈1/ε²⌉ iterations: O(dn/ε²) time
⇒ Core-set: f₁, ..., f_l with l = ⌈1/ε²⌉

SLIDE 6

Euclidean SEBs from core-sets: Rewriting with #

a #_t b: the point (1 − t)a + tb = a + t(b − a) on the line segment [ab].
D(x, y) = ‖x − y‖²,  D(x, P) = max_{y∈P} D(x, y)

Algorithm 1: BCA(P, l).
c₁ ← choose randomly a point in P;
for i = 1 to l − 1 do
  // farthest point from c_i
  s_i ← argmax_{j=1..n} D(c_i, p_j);
  // update the center: walk on the segment [c_i, p_{s_i}]
  c_{i+1} ← c_i #_{1/(i+1)} p_{s_i};
end
// return the SEB approximation
return Ball(c_l, r_l² = D(c_l, P));

⇒ (1 + ε)-approximation after l = ⌈1/ε²⌉ iterations.
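As a concrete illustration, the update rule above fits in a few lines of NumPy. This is a minimal sketch, not the authors' reference implementation; the function name `bca` and the array-based interface are ours:

```python
import numpy as np

def bca(points, iterations):
    """Badoiu-Clarkson walk for the Euclidean smallest enclosing ball.

    points: (n, d) array of the point set P.
    iterations: l = ceil(1/eps^2) steps for a (1+eps)-approximation.
    Returns (center, radius)."""
    c = points[0].astype(float).copy()      # c1: any point of P
    for i in range(1, iterations):
        # farthest point f_i of P from the current center c_i
        f = points[np.argmax(np.linalg.norm(points - c, axis=1))]
        # walk on the segment [c_i, f_i]: c_{i+1} = c_i #_{1/(i+1)} f_i
        c += (f - c) / (i + 1)
    radius = np.max(np.linalg.norm(points - c, axis=1))
    return c, radius
```

With l = 1000 iterations (ε ≈ 0.032) the returned radius is within a few percent of the optimal one.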

SLIDE 7

Bregman divergences (incl. squared Euclidean distance)

The SEB extends to Bregman divergences B_F(· : ·) [13]:

B_F(c : x) = F(c) − F(x) − ⟨c − x, ∇F(x)⟩,   B_F(c : X) = max_{x∈X} B_F(c : x)

[Figure: B_F(p : q) is the vertical gap at p between the graph of F and the hyperplane H_q tangent to the graph at q.]

⇒ A Bregman divergence is the remainder of a first-order Taylor expansion.

SLIDE 8

Smallest enclosing Bregman ball [13]

F∗ = convex conjugate of F, with (∇F)⁻¹ = ∇F∗.

Algorithm 2: MBC(P, l).
// create the gradient point set (η-coordinates)
P′ ← {∇F(p) : p ∈ P};
g ← BCA(P′, l);
return Ball(c_l = (∇F)⁻¹(c(g)), r_l = B_F(c_l : P));

Guaranteed approximation algorithm, with an approximation factor depending on 1 / min_{x∈X} ∇²F(x)... but poor in practice:

S_F(x; (∇F)⁻¹(c(g))) ≤ (1 + ε)² r′∗ / min_{x∈X} ∇²F(x),  with S_F(c; x) = B_F(c : x) + B_F(x : c)

SLIDE 9

Smallest enclosing Bregman ball [13]

A better approximation in practice...

Algorithm 3: BBCA(P, l).
c₁ ← choose randomly a point in P;
for i = 1 to l − 1 do
  // farthest point from c_i wrt. B_F
  s_i ← argmax_{j=1..n} B_F(c_i : p_j);
  // update the center: walk on the η-segment [c_i, p_{s_i}]_η
  c_{i+1} ← (∇F)⁻¹( ∇F(c_i) #_{1/(i+1)} ∇F(p_{s_i}) );
end
// return the SEBB approximation
return Ball(c_l, r_l = B_F(c_l : P));

θ- and η-geodesic segments in dually flat geometry.

SLIDE 10

Basics of Riemannian geometry

◮ (M, g): Riemannian manifold
◮ ⟨·, ·⟩: Riemannian metric tensor g, a positive-definite bilinear form on each tangent space T_x M (depending smoothly on x)
◮ ‖u‖_x = ⟨u, u⟩^{1/2}: the associated norm in T_x M
◮ ρ(x, y): metric distance between two points of the manifold M (a length space):

ρ(x, y) = inf { ∫₀¹ ‖φ̇(t)‖ dt : φ ∈ C¹([0, 1], M), φ(0) = x, φ(1) = y }

• Parallel transport wrt. the Levi-Civita metric connection ∇: ∇g = 0.

SLIDE 11

Basics of Riemannian geometry: Exponential map

◮ Local map from the tangent space T_x M to the manifold, defined via geodesics (wrt ∇). For all x ∈ M, let D(x) ⊆ T_x M be D(x) = {v ∈ T_x M : γ_v(1) is defined}, with γ_v the maximal (i.e., largest-domain) geodesic such that γ_v(0) = x and γ′_v(0) = v.

◮ Exponential map:

exp_x(·) : D(x) ⊆ T_x M → M,  exp_x(v) = γ_v(1)

The domain D(x) is star-shaped.

SLIDE 12

Basics of Riemannian geometry: Geodesics

◮ Geodesic: a smooth path which locally minimizes the distance between two points. (In general such a curve does not minimize it globally.)

◮ Given a vector v ∈ T_x M with base point x, there is a unique geodesic starting at x with speed v at time 0: t ↦ exp_x(tv), also written t ↦ γ_t(v).

◮ A geodesic on [a, b] is minimal if its length is less than or equal to that of all other paths. For complete M (i.e., exp_x defined on all of T_x M), given x, y ∈ M there exists a minimal geodesic from x to y in time 1: γ_·(x, y) : [0, 1] → M, t ↦ γ_t(x, y), with γ₀(x, y) = x and γ₁(x, y) = y.

◮ U ⊆ M is convex if for any x, y ∈ U there exists a unique minimal geodesic γ_·(x, y) in M from x to y; the geodesic fully lies in U and depends smoothly on x, y, and t.

SLIDE 13

Basics of Riemannian geometry: Geodesics

◮ Geodesic γ(x, y): locally distance-minimizing curve linking x to y
◮ The speed vector γ′(t) is parallel along γ:

Dγ′(t)/dt = ∇_{γ′(t)} γ′(t) = 0

◮ When the manifold M is embedded in ℝᵈ, the acceleration is normal to the tangent plane: γ″(t) ⊥ T_{γ(t)} M
◮ ‖γ′(t)‖ = c, a constant (say, unit)

⇒ Parameterization of curves at constant speed...

SLIDE 14

Basics of Riemannian geometry: Geodesics

Constant-speed geodesic γ(t) such that γ(0) = x and γ(ρ(x, y)) = y (constant speed 1, the unit of length). Define

x #_t y = m = γ(t · ρ(x, y)),  so that  ρ(x, m) = t × ρ(x, y).

For example, in Euclidean space: x #_t y = (1 − t)x + ty = x + t(y − x) = m, and ρ_E(x, m) = ‖t(y − x)‖ = t‖y − x‖ = t × ρ(x, y) for t ∈ [0, 1].

⇒ m is interpreted as a mean (barycenter) between x and y.

SLIDE 15

Basics of Riemannian geometry: Injectivity radius

Diffeomorphism from the tangent space to the manifold

◮ Injectivity radius inj(M): the largest r > 0 such that for all x ∈ M, the map exp_x(·) restricted to the open ball of radius r in T_x M is an embedding.

◮ Global injectivity radius: the infimum of the injectivity radii over all points of the manifold.

SLIDE 16

Basics of Riemannian geometry: Sectional curvature

Given x ∈ M and two non-collinear vectors u, v ∈ T_x M, the sectional curvature Sect(u, v) = K is a number which describes how the geodesics issued from x behave near x. More precisely, the image by exp_x(·) of the circle of radius r > 0 centered at 0 in Span(u, v) has length 2π S_K(r) + o(r³) as r → 0, with

S_K(r) = sin(√K r)/√K          if K > 0,
         r                     if K = 0,
         sinh(√(−K) r)/√(−K)   if K < 0.

Positive, zero or negative curvatures...

SLIDE 17

Basics of Riemannian geometry: Alexandrov’s theorem

Given an upper bound α² for the sectional curvatures, geodesic triangles are compared via Alexandrov's theorem:

Let x₁, x₂, x₃ ∈ M satisfy x₁ ≠ x₂, x₁ ≠ x₃ and

ρ(x₁, x₂) + ρ(x₂, x₃) + ρ(x₃, x₁) < 2 min( inj(M), π/α ),

where α > 0 is such that α² is an upper bound of the sectional curvatures. Let the minimizing geodesic from x₁ to x₂ and the minimizing geodesic from x₁ to x₃ make an angle θ at x₁. Denoting by S²_{α²} the 2-dimensional sphere of constant curvature α² (hence of radius 1/α) and by ρ̃ the distance in S²_{α²}, consider points x̃₁, x̃₂, x̃₃ ∈ S²_{α²} such that ρ(x₁, x₂) = ρ̃(x̃₁, x̃₂) and ρ(x₁, x₃) = ρ̃(x̃₁, x̃₃). Assume that the minimizing geodesics from x̃₁ to x̃₂ and from x̃₁ to x̃₃ also make an angle θ at x̃₁. Then:

ρ(x₂, x₃) ≥ ρ̃(x̃₂, x̃₃).

SLIDE 18

Basics of Riemannian geometry: Toponogov's theorem

Assume β > 0 is such that −β² is a lower bound for the sectional curvatures in M. Let x₁, x₂, x₃ ∈ M satisfy x₁ ≠ x₂ and x₁ ≠ x₃. Let the minimizing geodesic from x₁ to x₂ and the minimizing geodesic from x₁ to x₃ make an angle θ at x₁. Denoting by H²_{−β²} the hyperbolic 2-dimensional space of constant curvature −β² and by ρ̃ the distance in H²_{−β²}, consider points x̃₁, x̃₂, x̃₃ ∈ H²_{−β²} such that ρ(x₁, x₂) = ρ̃(x̃₁, x̃₂) and ρ(x₁, x₃) = ρ̃(x̃₁, x̃₃). Assume that the minimizing geodesics from x̃₁ to x̃₂ and from x̃₁ to x̃₃ also make an angle θ at x̃₁. Then:

ρ(x₂, x₃) ≤ ρ̃(x̃₂, x̃₃).

SLIDE 19

Basics of Riemannian geometry: First law of cosines

In spherical/hyperbolic geometries:

◮ If θ₁, θ₂, θ₃ are the angles of a triangle in S²_{α²} and l₁, l₂, l₃ are the lengths of the opposite sides, then

cos θ₃ = ( cos(αl₃) − cos(αl₁) cos(αl₂) ) / ( sin(αl₁) sin(αl₂) )

◮ If θ₁, θ₂, θ₃ are the angles of a triangle in H²_{−β²} and l₁, l₂, l₃ are the lengths of the opposite sides, then

cos θ₃ = ( cosh(βl₁) cosh(βl₂) − cosh(βl₃) ) / ( sinh(βl₁) sinh(βl₂) )

SLIDE 20

Now ready for the “Smallest enclosing Riemannian ball”

(M, g): complete Riemannian manifold; ν: probability measure on M; ρ(x, y): Riemannian metric distance.
Assume the support of the measure satisfies supp(ν) ⊆ B(o, R) for a geodesic ball B(o, R).
For a measurable function f : M → ℝ,

‖f‖_{L∞(ν)} = inf { a > 0 : ν({y ∈ M : |f(y)| > a}) = 0 }.

Let α > 0 be such that α² upper-bounds the sectional curvatures in M, and set

R_α = (1/2) min( inj(M), π/α ),

where inj(M) is the injectivity radius.
SLIDE 21

Riemannian SEB: Existence and uniqueness [1]

Assume R < R_α. Consider the farthest-point map:

H : M → [0, ∞],  x ↦ ‖ρ(x, ·)‖_{L∞(ν)}    (1)

H admits a unique minimizer c ∈ B(o, R), with c ∈ CH(supp(ν)), the convex hull of the support [1].
⇒ For a measure: the center gives a notion of centrality of the measure.
⇒ For a point set (a discrete measure): the center is the circumcenter.

SLIDE 22

Example of Riemannian manifold: SPD space

Space of Symmetric Positive-Definite (SPD) matrices, with:

◮ Riemannian distance:

ρ(P, Q) = ‖log(P⁻¹Q)‖_F = √( Σ_{i=1}^d log² λᵢ ),

where the λᵢ are the eigenvalues of the matrix P⁻¹Q.

◮ Non-compact Riemannian symmetric space of non-positive curvature (aka. a Cartan-Hadamard manifold).

◮ Any measure ν with bounded support satisfies R < R_α (for a suitable choice of α > 0).

⇒ The minimizer c of the farthest-point map H exists and is unique: the 1-center, or minimax center, of ν.
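The SPD distance above can be computed from a symmetrized eigendecomposition, since P^{−1/2} Q P^{−1/2} has the same eigenvalues as P⁻¹Q. A minimal NumPy sketch (helper names are ours):

```python
import numpy as np

def spd_sqrt_inv(P):
    # P^{-1/2} via the eigendecomposition of the symmetric matrix P
    w, V = np.linalg.eigh(P)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def spd_distance(P, Q):
    """Riemannian distance rho(P, Q) = ||log(P^{-1} Q)||_F on the
    SPD manifold; assumes P and Q are symmetric positive-definite."""
    S = spd_sqrt_inv(P)
    # S Q S is symmetric and shares its eigenvalues with P^{-1} Q
    lam = np.linalg.eigvalsh(S @ Q @ S)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))
```

For P = I and Q = diag(e, e²) the eigenvalue logs are 1 and 2, so ρ(P, Q) = √5.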

SLIDE 23

Generalizing BCA to Riemannian manifolds

GeoA:

◮ Initialize the center with c₁ ∈ P, and
◮ iteratively update the current minimax center as

c_{i+1} = Geodesic( c_i, f_i, 1/(i + 1) ),

where f_i denotes the farthest point of P from c_i, and Geodesic(p, q, t) denotes the intermediate point m on the geodesic passing through p and q such that ρ(p, m) = t × ρ(p, q).

SLIDE 24

Generalizing BCA to Riemannian manifolds

a #^M_t b: the point γ(t) on the geodesic line segment [ab] wrt M.

Algorithm 4: GeoA(P, l).
c₁ ← choose randomly a point in P;
for i = 1 to l − 1 do
  // farthest point from c_i
  s_i ← argmax_{j=1..n} ρ(c_i, p_j);
  // update the center: walk on the geodesic segment [c_i, p_{s_i}]
  c_{i+1} ← c_i #^M_{1/(i+1)} p_{s_i};
end
// return the SEB approximation
return Ball(c_l, r_l = ρ(c_l, P));
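GeoA is generic in the manifold: it only needs a distance ρ and an interpolation operator #_t. A minimal Python sketch of the loop with these two operations passed as functions (names are ours, not from the paper); with the Euclidean distance and straight-line interpolation it reduces to BCA:

```python
import numpy as np

def geo_a(points, dist, interp, iterations):
    """GeoA (Algorithm 4): the Badoiu-Clarkson walk on a manifold
    described by dist(x, y) (geodesic distance) and
    interp(x, y, t) = x #_t y (geodesic interpolation)."""
    c = points[0]                                  # c1: any point of P
    for i in range(1, iterations):
        # farthest point p_{s_i} of P from the current center c_i
        f = max(points, key=lambda p: dist(c, p))
        # walk on the geodesic segment [c_i, p_{s_i}]
        c = interp(c, f, 1.0 / (i + 1))
    radius = max(dist(c, p) for p in points)
    return c, radius

# Euclidean instance: straight-line interpolation recovers BCA.
euclid_dist = lambda x, y: float(np.linalg.norm(x - y))
euclid_interp = lambda x, y, t: (1.0 - t) * x + t * y
```

Plugging in the hyperbolic Klein-disk or SPD operations of the case studies runs the same loop on those manifolds.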

SLIDE 25

Proof sketch

Assume supp(ν) ⊂ B(o, R) with

R < R_α = (1/2) min( inj(M), π/α ),

where α > 0 is such that α² is an upper bound for the sectional curvatures in M.

Lemma
There exists τ > 0 such that for all x ∈ B(o, R),

H(x) − H(c) ≥ τ ρ²(x, c).

SLIDE 26

Stochastic approximation for measures

For x ∈ B(o, R), let t ↦ γ_t(v(x, ν)) be the unit-speed geodesic from γ₀(v(x, ν)) = x to a point y = γ_{H(x)}(v(x, ν)) of supp(ν) realizing the maximum of the distance from x to supp(ν):

v(x, ν) = (1/H(x)) exp_x⁻¹(y)

RieA: fix some δ > 0.

◮ Step 1: choose a starting point x₀ ∈ supp(ν) and let k = 0.
◮ Step 2: choose a step size t_{k+1} ∈ (0, δ] and let x_{k+1} = γ_{t_{k+1}}(v(x_k, ν)); then repeat Step 2 with k ← k + 1.

SLIDE 27

Convergence theorem for RieA

a ∧ b: the minimum operator, a ∧ b = min(a, b). Set R₀ = ((R_α − R)/2) ∧ (R/2).

Assume α, β > 0 are such that −β² is a lower bound and α² an upper bound of the sectional curvatures in M. If the step sizes (t_k)_{k≥1} satisfy

δ ≤ (R₀/2) ∧ (2/β) arctanh( tanh(βR₀/2) cos(αR) tan(αR₀/4) ),

lim_{k→∞} t_k = 0,  Σ_{k≥1} t_k = +∞  and  Σ_{k≥1} t_k² < ∞,

then the sequence (x_k)_{k≥1} generated by the algorithm satisfies

lim_{k→∞} ρ(x_k, c) = 0.

SLIDE 28

Case study I: Hyperbolic planar manifold

In the Klein disk (projective model), geodesics are straight (Euclidean) lines [11]:

ρ(p, q) = arccosh( (1 − p⊤q) / √((1 − p⊤p)(1 − q⊤q)) ),

where arccosh(x) = log(x + √(x² − 1)). Here we choose a non-constant-speed curve parameterization (not the constant-speed geodesic): γ̃_t(p, q) = (1 − t)p + tq, t ∈ [0, 1].

⇒ Implement a dichotomy (bisection) on γ̃_t(p, q) to get #_t.
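The Klein distance and the dichotomy for #_t can be sketched as follows: a bisection on the chord parameter s until the prescribed fraction of the distance is reached. Function names, the tolerance, and the clamp guarding rounding near p = q are ours:

```python
import numpy as np

def klein_dist(p, q):
    """Hyperbolic distance between points p, q of the Klein disk."""
    num = 1.0 - p @ q
    den = np.sqrt((1.0 - p @ p) * (1.0 - q @ q))
    # clamp: rounding can push the ratio slightly below 1 when p ~ q
    return float(np.arccosh(max(num / den, 1.0)))

def klein_interp(p, q, t, tol=1e-10):
    """p #_t q in the Klein disk: the geodesic is the Euclidean chord,
    traversed at non-constant speed, so bisect on the chord parameter s
    until rho(p, m) = t * rho(p, q)."""
    target = t * klein_dist(p, q)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        s = 0.5 * (lo + hi)
        m = (1.0 - s) * p + s * q
        if klein_dist(p, m) < target:
            lo = s
        else:
            hi = s
    return (1.0 - lo) * p + lo * q
```

For p at the origin, ρ(p, q) reduces to arctanh(‖q‖), the familiar radial hyperbolic distance.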

SLIDE 29

[Figure: snapshots of GeoA in the Klein disk: initialization, first through fourth iterations, and after 10⁴ iterations.]

http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/

SLIDE 30

Performance

[Figure: two convergence plots, panels (a) and (b).]
Convergence rate of the GeoA algorithm for the hyperbolic disk over the first 200 iterations. Horizontal axis: number of iterations. Vertical axis: (a) the relative Klein distance between the current center and the optimal 1-center; (b) the radius of the smallest enclosing Klein ball anchored at the current center.

SLIDE 31

Case study II: Space of SPD matrices

◮ A d × d matrix M is Symmetric Positive-Definite (SPD) ⇔ M = M⊤ and x⊤Mx > 0 for all x ≠ 0.

◮ The set of d × d SPD matrices is a manifold of dimension d(d + 1)/2 [8].

◮ The geodesic linking (matrix) point P to point Q:

γ_t(P, Q) = P^{1/2} ( P^{−1/2} Q P^{−1/2} )^t P^{1/2},

where a matrix function h(M) is computed from the singular value decomposition M = UDV⊤ (with U and V unitary matrices and D = diag(λ₁, ..., λ_d) the diagonal matrix of eigenvalues) as h(M) = U diag(h(λ₁), ..., h(λ_d)) V⊤. For example, the matrix square root is computed as M^{1/2} = U diag(√λ₁, ..., √λ_d) V⊤.

SLIDE 32

SPD space: Splitting the geodesic for operator #t

In this case, finding t such that

‖log((P⁻¹Q)^t)‖²_F = r² ‖log(P⁻¹Q)‖²_F,    (2)

where ‖·‖_F denotes the Frobenius norm, yields t = r. Indeed, let λ₁, ..., λ_d be the eigenvalues of P⁻¹Q, so that ρ(P, Q) = ‖log(P⁻¹Q)‖_F = √(Σᵢ log² λᵢ). Then (2) amounts to

Σ_{i=1}^d log²(λᵢ^t) = t² Σ_{i=1}^d log² λᵢ = r² Σ_{i=1}^d log² λᵢ,

that is, t = r.
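Since t = r, the SPD #_t operator is exactly the matrix-power geodesic of the previous slide. A minimal NumPy sketch, using an eigendecomposition for the matrix power (helper names are ours):

```python
import numpy as np

def spd_power(M, t):
    # M^t for a symmetric positive-definite M, via eigendecomposition
    w, V = np.linalg.eigh(M)
    return V @ np.diag(w ** t) @ V.T

def spd_interp(P, Q, t):
    """P #_t Q on the SPD manifold:
    gamma_t(P, Q) = P^{1/2} (P^{-1/2} Q P^{-1/2})^t P^{1/2}."""
    Ph = spd_power(P, 0.5)    # P^{1/2}
    Pih = spd_power(P, -0.5)  # P^{-1/2}
    return Ph @ spd_power(Pih @ Q @ Pih, t) @ Ph
```

For P = I this reduces to Q^t, e.g. the midpoint of I and diag(4, 9) is diag(2, 3), the geometric mean.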

SLIDE 33

Case study II: Performance

[Figure: two convergence plots, panels (a) and (b).]
Convergence rate of the GeoA algorithm for the SPD Riemannian manifold (dimension 5) over the first 200 iterations. Horizontal axis: number of iterations i. Vertical axis:

◮ (a) the relative Riemannian distance ρ(c∗, c_i)/r∗ between the current center c_i and the optimal 1-center c∗;
◮ (b) the radius r_i of the smallest enclosing SPD ball anchored at the current center.

SLIDE 34

Remark on SPD spaces and hyperbolic geometry

◮ The SPD(2) matrix space has dimension d = 3: a positive cone

{ (a, b, c) : a > 0, ab − c² > 0 }

◮ It can be peeled into sheets of dimension 2, each sheet corresponding to a constant value of the determinant of its elements [4]: SPD(2) = SSPD(2) × ℝ⁺, where SSPD(2) = { (a, b, c) : a > 0, ab − c² = 1 }

◮ Map to (x₀ = (a + b)/2 ≥ 1, x₁ = (a − b)/2, x₂ = c) in the hyperboloid model [12], and to z = (a − b + 2ic)/(2 + a + b) in the Poincaré disk [12].

SLIDE 35

Conclusion: Smallest Riemannian Enclosing Ball

◮ Generalized the Euclidean 1-center algorithm of [2] to Riemannian geometry
◮ Proved convergence under mild assumptions (for measures/point sets)
◮ Existence of Riemannian core-sets for optimization
◮ The 1-center is a building block for k-center clustering [6]
◮ Can be extended to sets of Riemannian (geodesic) balls

Reproducible research code with interactive demos:

http://www.sonycsl.co.jp/person/nielsen/infogeo/RiemannMinimax/

SLIDE 36

Bibliographic references I

[1] Bijan Afsari. Riemannian Lp center of mass: existence, uniqueness, and convexity. Proceedings of the American Mathematical Society, 139:655–674, February 2011.
[2] Mihai Bădoiu and Kenneth L. Clarkson. Smaller core-sets for balls. In SODA '03: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 801–802, Philadelphia, PA, USA, 2003. Society for Industrial and Applied Mathematics.
[3] Mihai Bădoiu and Kenneth L. Clarkson. Optimal core-sets for balls. Computational Geometry: Theory and Applications, 40:14–22, May 2008.
[4] Pascal Chossat and Olivier P. Faugeras. Hyperbolic planforms in relation to visual edges and textures perception. PLoS Computational Biology, 5(12), 2009.
[5] Kaspar Fischer and Bernd Gärtner. The smallest enclosing ball of balls: combinatorial structure and algorithms. Int. J. Comput. Geometry Appl., 14(4-5):341–378, 2004.
[6] Teofilo F. Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985.

SLIDE 37

Bibliographic references II

[7] Piyush Kumar, Joseph S. B. Mitchell, and E. Alper Yildirim. Approximate minimum enclosing balls in high dimensions using core-sets. ACM Journal of Experimental Algorithmics, 8, 2003.
[8] Serge Lang. Fundamentals of Differential Geometry, volume 191 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1999.
[9] Thomas Larsson. Fast and tight fitting bounding spheres. In Proceedings of the Annual SIGRAD Conference, Stockholm, 2008.
[10] Frank Nielsen and Richard Nock. Approximating smallest enclosing balls with applications to machine learning. Int. J. Comput. Geometry Appl., 19(5):389–414, 2009.
[11] Frank Nielsen and Richard Nock. Hyperbolic Voronoi diagrams made easy. In International Conference on Computational Science and its Applications (ICCSA), volume 1, pages 74–80, Los Alamitos, CA, USA, March 2010. IEEE Computer Society.

SLIDE 38

Bibliographic references III

[12] Frank Nielsen and Richard Nock. Visualizing hyperbolic Voronoi diagrams. In Proceedings of the Thirtieth Annual Symposium on Computational Geometry, SoCG '14, pages 90:90–90:91, New York, NY, USA, 2014. ACM.
[13] Richard Nock and Frank Nielsen. Fitting the smallest enclosing Bregman balls. In 16th European Conference on Machine Learning (ECML), pages 649–656, October 2005.
[14] Jack Ritter. An efficient bounding sphere. In Graphics Gems, pages 301–303. Academic Press Professional, Inc., 1990.
[15] Jonas Spillmann, Markus Becker, and Matthias Teschner. Efficient updates of bounding sphere hierarchies for geometrically deformable models. Journal of Visual Communication and Image Representation, 18(2):101–108, 2007.
[16] J. J. Sylvester. A question in the geometry of situation. Quarterly Journal of Pure and Applied Mathematics, 1:79, 1857.
[17] Bo Tian. Bouncing Bubble: a fast algorithm for the minimal enclosing ball problem. 2012.

SLIDE 39

Bibliographic references IV

[18] Ivor W. Tsang, Andras Kocsor, and James T. Kwok. Simpler core vector machines with enclosing balls. In Proceedings of the 24th International Conference on Machine Learning (ICML), pages 911–918, New York, NY, USA, 2007. ACM.
[19] Emo Welzl. Smallest enclosing disks (balls and ellipsoids). In H. Maurer, editor, New Results and New Trends in Computer Science, LNCS. Springer, 1991.