Moment methods in energy minimization David de Laat CWI Amsterdam - - PowerPoint PPT Presentation



SLIDE 1

Moment methods in energy minimization

David de Laat

CWI Amsterdam

Andrejewski-Tage: Moment problems in theoretical physics, Konstanz, 9 April 2016

SLIDE 4

Packing and energy minimization

[Figures: sphere packing (Kepler conjecture, 1611), spherical cap packing (Tammes problem, 1930), energy minimization (Thomson problem, 1904)]

◮ Typically difficult to prove optimality of constructions
◮ This talk: methods to find obstructions

SLIDE 11

The maximum independent set problem

Example: the Petersen graph

◮ In general difficult to solve to optimality (NP-hard)
◮ The Lovász ϑ-number upper bounds the independence number
◮ Efficiently computable through semidefinite programming
◮ Semidefinite program: optimize a linear functional over the intersection of an affine space with the cone of n × n positive semidefinite matrices

[Figure: the set of 3 × 3 positive semidefinite matrices with unit diagonal]
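For a graph this small, the quantity being bounded can also be computed directly: a minimal brute-force sketch of the independence number of the Petersen graph (the vertex labelling below is a standard one, chosen here for illustration and not taken from the slides).

```python
from itertools import combinations

# Petersen graph: outer 5-cycle (0..4), inner pentagram (5..9), spokes i -- i+5.
edges = set()
for i in range(5):
    edges.add(frozenset((i, (i + 1) % 5)))          # outer cycle
    edges.add(frozenset((i, i + 5)))                # spokes
    edges.add(frozenset((i + 5, (i + 2) % 5 + 5)))  # inner pentagram

def is_independent(subset):
    """A vertex set is independent if no two of its vertices are adjacent."""
    return all(frozenset(pair) not in edges for pair in combinations(subset, 2))

# Enumerate all 2^10 vertex subsets and keep the largest independent one.
alpha = max(len(s) for r in range(11)
            for s in combinations(range(10), r) if is_independent(s))
print(alpha)  # 4
```

The maximum is attained by sets such as {1, 3, 5, 9}; the Lovász ϑ-number of the Petersen graph is also known to equal 4, so the SDP bound is sharp in this example.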

SLIDE 19

Model packing problems as independent set problems

◮ Example: the spherical cap packing problem
◮ As vertex set we take the unit sphere
◮ Two distinct vertices x and y are adjacent if the spherical caps centered about x and y intersect in their interiors
◮ Optimal density is proportional to the independence number
◮ ϑ generalizes to an infinite-dimensional maximization problem
◮ Use optimization duality, harmonic analysis, and real algebraic geometry to approximate ϑ by a semidefinite program
◮ Using symmetry reduction this reduces to a linear program known as the Delsarte LP bound
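A toy instance of the Delsarte LP bound can be verified in a few lines. The sketch below is our own example, not from the talk: it certifies that at most 6 unit vectors in R³ can have pairwise inner products ≤ 0, using the certificate polynomial f(u) = u² + u and the LP-bound inequality N ≤ f(1)/f₀.

```python
# Delsarte LP bound on S^2: if f(u) = sum_k f_k P_k(u) has Legendre
# coefficients f_k >= 0 for k >= 1, f_0 > 0, and f(u) <= 0 for every allowed
# inner product u, then any valid configuration has N <= f(1) / f_0.
# Allowed inner products here: [-1, 0] (pairwise angles >= 90 degrees).

def legendre_eval(coeffs, u):
    """Evaluate sum_k coeffs[k] * P_k(u) via the Legendre three-term recurrence."""
    p_prev, p_cur = 1.0, u  # P_0, P_1
    total = coeffs[0] * p_prev + coeffs[1] * p_cur
    for k in range(1, len(coeffs) - 1):
        p_next = ((2 * k + 1) * u * p_cur - k * p_prev) / (k + 1)
        p_prev, p_cur = p_cur, p_next
        total += coeffs[k + 1] * p_cur
    return total

# f(u) = u^2 + u = (1/3) P_0 + P_1 + (2/3) P_2
f = [1 / 3, 1.0, 2 / 3]
assert all(c >= 0 for c in f[1:]) and f[0] > 0
# f(u) = u (u + 1) <= 0 on [-1, 0]; verify on a grid up to rounding error.
assert all(legendre_eval(f, -1 + 0.001 * i) <= 1e-12 for i in range(1001))
bound = legendre_eval(f, 1.0) / f[0]
print(bound)  # ~6: at most 2n = 6 such vectors in R^3, attained by +-e1, +-e2, +-e3
```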

SLIDE 27

Bounds for binary packings [L–Oliveira–Vallentin 2014]

[Figure: sodium chloride structure — density 79.3…%, our upper bound 81.3…%]

◮ Question 1: Can we use this method for optimality proofs?
◮ Florian and Heppes prove optimality of the following packing:
◮ We prove ϑ is sharp for this problem, which gives a simple optimality proof
◮ We slightly improve the Cohn–Elkies bound to give the best known bounds for sphere packing in dimensions 4–7 and 9
◮ Question 2: Can we obtain arbitrarily good bounds?

SLIDE 30

Energy minimization

◮ Goal: Find the ground state energy of a system of N particles in a compact container (V, d) with pair potential h
◮ Example: In the Thomson problem we minimize

  ∑_{1≤i<j≤N} 1/‖xi − xj‖₂

  over all sets {x1, . . . , xN} of N distinct points in S² ⊆ R³
◮ Here V = S², d(x, y) = ‖x − y‖₂, and h(w) = 1/w
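For N = 4 the Thomson problem is solved by the regular tetrahedron; a quick sketch of evaluating the energy above at that configuration (the coordinates are a standard parametrization, not taken from the slides).

```python
import math
from itertools import combinations

# Coulomb energy sum_{i<j} 1 / ||x_i - x_j|| at the regular tetrahedron
# inscribed in the unit sphere, a known optimum of the N = 4 Thomson problem.
s = 1 / math.sqrt(3)
points = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]

def energy(pts):
    """Pairwise Coulomb energy of a point configuration."""
    return sum(1 / math.dist(p, q) for p, q in combinations(pts, 2))

e = energy(points)
print(round(e, 6))  # 3.674235, the known N = 4 ground state energy
```

All six pairwise distances equal 2·sqrt(2/3), so the energy is 6/(2·sqrt(2/3)) = 3·sqrt(3/2) ≈ 3.674235.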

SLIDE 38

Setup

◮ Goal: Find the ground state energy E of a system of N particles in a compact container (V, d) with pair potential h
◮ Assume h(s) → ∞ as s → 0
◮ Define a graph with vertex set V where two distinct vertices x and y are adjacent if h(d(x, y)) is large
◮ Let It be the set of independent sets with ≤ t elements
◮ Let I=t be the set of independent sets with exactly t elements
◮ These sets are compact metric spaces
◮ Define f ∈ C(IN) by

  f(S) = h(d(x, y)) if S = {x, y} with x ≠ y, and f(S) = 0 otherwise

◮ Minimal energy:

  E = min_{S ∈ I=N} ∑_{P ⊆ S} f(P)
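The definition of E can be made concrete on a toy container. In the sketch below (discretized circle, N = 2, h(w) = 1/w — all our own toy assumptions, not from the talk) the inner sum over subsets P ⊆ S reduces to the pairwise energy, since f vanishes except on 2-element subsets.

```python
import math
from itertools import combinations

# Toy container: V is a discretized unit circle, d the Euclidean distance.
M, N = 12, 2
V = [(math.cos(2 * math.pi * k / M), math.sin(2 * math.pi * k / M))
     for k in range(M)]

def f(P):
    """f({x, y}) = h(d(x, y)) with h(w) = 1/w; zero on subsets that are not pairs."""
    if len(P) != 2:
        return 0.0
    x, y = P
    return 1.0 / math.dist(x, y)

def total_energy(S):
    """sum_{P subseteq S} f(P), summing over all subsets as in the slide."""
    return sum(f(P) for r in range(len(S) + 1) for P in combinations(S, r))

# E = min over all N-element subsets S of V.
E = min(total_energy(S) for S in combinations(V, N))
print(round(E, 6))  # 0.5: two antipodal points at distance 2
```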

SLIDE 49

Moment methods in energy minimization

◮ For S ∈ I=N, define the measure χS = ∑_{R ⊆ S} δR
◮ We can use this measure to compute the energy of S
◮ The energy of S is given by

  χS(f) = ∫ f(P) dχS(P) = ∑_{R ⊆ S} f(R)

◮ This measure satisfies the following 3 properties:
  ◮ χS is a positive measure
  ◮ χS satisfies χS(I=i) = (N choose i) for all i
  ◮ χS is a measure of positive type (see next slide)
◮ Relaxations: For t = 1, . . . , N,

  Et = min{ λ(f) : λ ∈ M(I2t) a positive measure of positive type, λ(I=i) = (N choose i) for all 0 ≤ i ≤ 2t }

◮ Et is a min{2t, N}-point bound:

  E1 ≤ E2 ≤ · · · ≤ EN = E
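The counting constraint χS(I=i) = (N choose i) is easy to sanity-check: χS places unit mass on every subset of S, so its mass on I=i is the number of i-element subsets. A small sketch with abstract particle labels (our own toy stand-in for a configuration):

```python
import math
from itertools import combinations

N = 5
S = set(range(N))  # abstract labels standing in for an N-point configuration

def chi_mass(size):
    """Mass that chi_S assigns to I_{=size}: one unit per size-element subset of S."""
    return sum(1 for _ in combinations(S, size))

# chi_S(I_{=i}) = binomial(N, i) for every i, as required in the relaxation E_t.
for i in range(N + 1):
    assert chi_mass(i) == math.comb(N, i)
print([chi_mass(i) for i in range(N + 1)])  # [1, 5, 10, 10, 5, 1]
```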

SLIDE 55

Measures of positive type [L–Vallentin 2015]

◮ Operator:

  At : C(It × It)sym → C(I2t), AtK(S) = ∑_{J,J′ ∈ It : J∪J′ = S} K(J, J′)

◮ This is an infinite-dimensional version of the adjoint of the operator y ↦ M(y) that maps a moment sequence to a moment matrix
◮ Dual operator:

  At* : M(I2t) → M(It × It)sym

◮ Cone of positive definite kernels: C(It × It)⪰0
◮ Dual cone:

  M(It × It)⪰0 = {µ ∈ M(It × It)sym : µ(K) ≥ 0 for all K ∈ C(It × It)⪰0}

◮ A measure λ ∈ M(I2t) is of positive type if At*λ ∈ M(It × It)⪰0

SLIDE 63

Flat extensions

◮ Recall: E1 ≤ E2 ≤ · · · ≤ EN = E
◮ Sufficient condition for the existence of an extension of a feasible solution λ ∈ M(I2t) of Et to a feasible solution of EN
◮ Positive semidefinite form ⟨f, g⟩ = At*λ(f ⊗ g) on C(It)
◮ Define Nt(λ) = {f ∈ C(It) : ⟨f, f⟩ = 0}
◮ If λ ∈ M(I2t) is of positive type and C(It) = C(It−1) + Nt(λ), then we can extend λ to a measure λ′ ∈ M(IN) that is of positive type
◮ If λ(I=i) = (N choose i) for 0 ≤ i ≤ 2t, then λ′(I=i) = (N choose i) for 0 ≤ i ≤ N

If an optimal solution λ of Et satisfies C(It) = C(It−1) + Nt(λ), then Et = EN = E

SLIDE 75

Computations using the dual hierarchy

[Diagram: the hierarchy Et, its dual Et*, and the ground state energy E]

◮ Et* is a dual maximization problem; strong duality holds: Et = Et*
◮ In Et* we optimize over kernels K ∈ C(It × It)⪰0
◮ Idea:
  1. Express K in terms of its Fourier coefficients
  2. Set all but finitely many of these coefficients to 0
  3. Optimize over the remaining coefficients
◮ To do this we need a group Γ with an action on It
◮ In principle this can be the trivial group, but for symmetry reduction a bigger group is better
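The truncation idea can be illustrated on the circle, where the Fourier machinery is elementary (an analogy of our own, not the talk's transform on It): a kernel with finitely many nonnegative Fourier coefficients is automatically positive definite, because its quadratic form collapses into a sum of squares.

```python
import math
import random

# K(x, y) = sum_k c_k cos(k (x - y)) with finitely many coefficients c_k >= 0.
# For any points x_i and weights z_i,
#   sum_{i,j} z_i z_j K(x_i, x_j)
#     = sum_k c_k [ (sum_i z_i cos(k x_i))^2 + (sum_i z_i sin(k x_i))^2 ] >= 0.
random.seed(0)
c = [1.0, 0.5, 0.25]  # truncated, nonnegative Fourier coefficients (our choice)
xs = [random.uniform(0, 2 * math.pi) for _ in range(6)]
zs = [random.uniform(-1, 1) for _ in range(6)]

def K(x, y):
    return sum(ck * math.cos(k * (x - y)) for k, ck in enumerate(c))

# Quadratic form computed directly from the kernel...
direct = sum(zi * zj * K(xi, xj) for xi, zi in zip(xs, zs)
             for xj, zj in zip(xs, zs))
# ...and via the sum-of-squares identity cos(k(x-y)) = cos kx cos ky + sin kx sin ky.
sos = sum(ck * (sum(z * math.cos(k * x) for x, z in zip(xs, zs)) ** 2
                + sum(z * math.sin(k * x) for x, z in zip(xs, zs)) ** 2)
          for k, ck in enumerate(c))
assert abs(direct - sos) < 1e-9 and direct >= 0
```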

SLIDE 80

Harmonic analysis on subset spaces

◮ Let Γ be a compact group with an action on V
◮ Example: Γ = O(3) and V = S² ⊆ R³
◮ Assume the metric is Γ-invariant: d(γx, γy) = d(x, y) for all x, y ∈ V and γ ∈ Γ
◮ Then the action extends to an action on It by γ∅ = ∅ and γ{x1, . . . , xt} = {γx1, . . . , γxt}
◮ By an "averaging argument" we may assume K ∈ C(It × It)⪰0 to be Γ-invariant: K(γJ, γJ′) = K(J, J′) for all γ ∈ Γ and J, J′ ∈ It

SLIDE 85

Harmonic analysis on subset spaces

◮ Fourier inversion formula:

  K(x, y) = ∑_{π ∈ Γ̂} ∑_{i,j=1}^{mπ} K̂(π)i,j Zπ(x, y)i,j

◮ The Fourier matrices K̂(π) are positive semidefinite
◮ The zonal matrices Zπ(x, y) are fixed matrices that depend on It and Γ (these matrices take the role of the exponential functions in the familiar Fourier transform)
◮ To construct the matrices Zπ(x, y) we need to "perform the harmonic analysis of It with respect to Γ"

SLIDE 91

Harmonic analysis on subset spaces

◮ The action of Γ on It extends to a linear action of Γ on C(It) by γf(S) = f(γ⁻¹S)
◮ By performing the harmonic analysis of It with respect to Γ we mean: decompose C(It) as a direct sum of irreducible (smallest possible) Γ-invariant subspaces
◮ We give a procedure to perform the harmonic analysis of It with respect to Γ, given that we know enough about the harmonic analysis of V. In particular we must know how to decompose tensor products of irreducible subspaces of C(V) into irreducibles
◮ We do this explicitly for V = S², Γ = O(3), and t = 2 (by using Clebsch–Gordan coefficients)
◮ We use this to lower bound E2* by maximization problems that have finitely many positive semidefinite matrix variables (but still infinitely many constraints)

SLIDE 96

Invariant theory

◮ These constraints are of the form p(x1, . . . , x4) ≥ 0 for {x1, x2, x3, x4} ∈ I=4, where p is a polynomial whose coefficients depend linearly on the entries of the matrix variables
◮ These polynomials satisfy p(γx1, . . . , γx4) = p(x1, . . . , x4) for x1, . . . , x4 ∈ S² and γ ∈ O(3)
◮ By a theorem of invariant theory we can write p as a polynomial in the inner products: p(x1, x2, x3, x4) = q(x1 · x2, . . . , x3 · x4)
◮ This theorem is nonconstructive → we solve large sparse linear systems to perform this transformation explicitly
◮ Now we have constraints of the form q(u1, . . . , ul) ≥ 0 for (u1, . . . , ul) in some semialgebraic set

SLIDE 102

Sums of squares characterizations

◮ Putinar: every polynomial f that is positive on a compact set S = {x ∈ Rn : g1(x) ≥ 0, . . . , gm(x) ≥ 0}, where the set {g1, . . . , gm} has the Archimedean property, can be written as f(x) = ∑_{i=0}^{m} gi(x) si(x), where g0 := 1 and each si is a sum of squares

◮ The sums of squares si can be modeled using positive semidefinite matrices

◮ We use this to go from infinitely many constraints to finitely many semidefinite constraints

◮ In energy minimization the particles are interchangeable

◮ This means p(xσ(1), . . . , xσ(4)) = p(x1, . . . , x4) for all σ ∈ S4

◮ This translates into interesting symmetries of the polynomials q(u1, . . . , ul)
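Modeling a sum of squares by a positive semidefinite matrix means writing s(x) = v(x)ᵀ Q v(x) for a monomial vector v and Q ⪰ 0; a Cholesky factorization Q = LLᵀ then recovers the explicit squares. A tiny hand-picked illustration (the matrix Q below is an arbitrary PSD example, not one from any actual bound):

```python
# s(x) = v(x)^T Q v(x) with v(x) = (1, x) and a PSD matrix Q.
# Cholesky Q = L L^T turns this into an explicit sum of squares.
Q = [[1.0, 1.0],
     [1.0, 2.0]]  # trace 3, determinant 1 > 0, so Q is PSD

# Cholesky factor computed by hand: L L^T = Q
L = [[1.0, 0.0],
     [1.0, 1.0]]

def s_gram(x):
    v = [1.0, x]
    return sum(Q[i][j] * v[i] * v[j] for i in range(2) for j in range(2))

def s_sos(x):
    # the columns of L give the squares: here (1 + x)^2 + x^2
    v = [1.0, x]
    terms = [sum(L[i][k] * v[i] for i in range(2)) for k in range(2)]
    return sum(t * t for t in terms)

for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(s_gram(x) - s_sos(x)) < 1e-12
    assert s_gram(x) >= 0.0  # nonnegativity is automatic once Q is PSD
```

This is exactly why the PSD constraint on Q is the semidefinite-programming encoding of "si is a sum of squares".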

slide-108
SLIDE 108

Sums of squares characterizations

◮ Symmetrization of Putinar's theorem lets us exploit the symmetry in the particles

◮ Assume the set {g0, . . . , gm} is Γ-invariant

◮ Denote by Γgi the stabilizer subgroup of Γ with respect to gi

◮ A Γ-invariant polynomial that admits a Putinar representation can be written as p = ∑_{i=0}^{m} gi si, where each si is a Γgi-invariant sum of squares polynomial

◮ We can represent the Γgi-invariant sums of squares si using block diagonalized positive semidefinite matrices [Gatermann–Parrilo 2004]

◮ This gives significant computational savings for our problems
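The block diagonalization can be seen in the simplest case Γ = S2 acting by swapping two variables: in the symmetry-adapted coordinates x + y and x − y, an invariant Gram matrix becomes diagonal, so one 2×2 PSD condition splits into two 1×1 blocks. A toy sketch (the numbers a, b are arbitrary, chosen so the form is PSD):

```python
# An S2-invariant quadratic form q(x, y) = q(y, x) with monomial
# vector v = (x, y) has an invariant Gram matrix [[a, b], [b, a]].
# In the symmetry-adapted basis e1 = (x + y)/sqrt(2), e2 = (x - y)/sqrt(2)
# it diagonalizes: q = (a + b) e1^2 + (a - b) e2^2 -- two 1x1 blocks.
import math

a, b = 3.0, 1.0  # arbitrary values with a >= |b|, so the form is PSD

def q_gram(x, y):
    return a * x * x + 2 * b * x * y + a * y * y

def q_blocks(x, y):
    e1 = (x + y) / math.sqrt(2)
    e2 = (x - y) / math.sqrt(2)
    return (a + b) * e1 ** 2 + (a - b) * e2 ** 2

for x, y in [(1.0, 2.0), (-0.5, 0.25), (3.0, -3.0)]:
    assert abs(q_gram(x, y) - q_blocks(x, y)) < 1e-12
```

For larger groups such as S4 the same mechanism, made systematic in [Gatermann–Parrilo 2004], replaces one big PSD matrix by several small blocks, which is where the computational savings come from.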

slide-114
SLIDE 114

Computational results for the Thomson problem

◮ In the Thomson problem we take V = S2, d(x, y) = ‖x − y‖2, and h(w) = 1/w

◮ The Thomson problem has been solved for 3 (1912), 4, 6 (1992), 12 (1996), and 5 (2010) particles

◮ E∗1 is sharp for 3, 4, 6, and 12 particles (Yudin's LP bound)

◮ We compute E∗2 using a semidefinite programming solver

◮ This is the first time a four-point bound has been computed for a continuous problem

◮ We show E∗2 is sharp for 5 particles on S2 (up to solver precision), which suggests we can use E∗2 to derive a small proof of optimality for this problem
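The configuration attaining the known optimum for 5 particles is the triangular bipyramid, and its Coulomb energy — the value a sharp bound must reproduce — is easy to evaluate. A quick check in pure Python (coordinates chosen by hand):

```python
import math
from itertools import combinations

def coulomb_energy(points):
    # Thomson energy: sum of 1 / ||xi - xj|| over all pairs
    return sum(1.0 / math.dist(p, q) for p, q in combinations(points, 2))

# triangular bipyramid: two poles and an equilateral triangle on the equator
tbp = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)] + [
    (math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3), 0.0)
    for k in range(3)
]

E = coulomb_energy(tbp)
print(E)  # approx 6.474691, i.e. 1/2 + 6/sqrt(2) + 3/sqrt(3)
```

Sharpness of E∗2 means the semidefinite lower bound matches this value to solver precision.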

slide-120
SLIDE 120

Phase transitions

◮ The Riesz s-energy of a configuration {x1, . . . , xN} ⊆ S2 is ∑_{1≤i<j≤N} ‖xi − xj‖2^{−s}

◮ It is believed that the system of 5 particles on S2 admits a phase transition at s ≈ 15.05

◮ For small s the triangular bipyramid is believed to be optimal

◮ For large s the square pyramid is believed to be optimal

◮ We show E∗2 is sharp for s = 1, 2, 3, 4 (up to solver precision)

◮ It would be very interesting if E∗2 were sharp for all s:

◮ a lower bound that stays sharp throughout a phase transition ◮ local-to-global behaviour in confined geometries
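The two competing configurations can be compared directly: the triangular bipyramid against a square pyramid whose base height is optimized for each s. A rough pure-Python scan (crude grid optimization over the base height; the bisection bracket is an assumption based on the s ≈ 15.05 figure above):

```python
import math
from itertools import combinations

def riesz_energy(points, s):
    # Riesz s-energy: sum of 1 / ||xi - xj||^s over all pairs
    return sum(1.0 / math.dist(p, q) ** s for p, q in combinations(points, 2))

TBP = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)] + [
    (math.cos(2 * math.pi * k / 3), math.sin(2 * math.pi * k / 3), 0.0)
    for k in range(3)]

def square_pyramid(z):
    # apex at the north pole, square base at height z on the sphere
    r = math.sqrt(1.0 - z * z)
    return [(0.0, 0.0, 1.0)] + [
        (r * math.cos(math.pi * k / 2), r * math.sin(math.pi * k / 2), z)
        for k in range(4)]

def sp_energy(s, steps=2000):
    # optimize the base height by a grid search over z in [-0.6, 0]
    return min(riesz_energy(square_pyramid(-0.6 * k / steps), s)
               for k in range(steps + 1))

# bisect on the sign of (optimized square pyramid) - (bipyramid) energy
lo, hi = 2.0, 16.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if sp_energy(mid) > riesz_energy(TBP, mid):
        lo = mid  # bipyramid still wins; the crossover lies above
    else:
        hi = mid

print(0.5 * (lo + hi))  # lands near the conjectured transition s ~ 15.05
```

This only locates where the two candidate families exchange roles; proving either one optimal on each side of the transition is the hard part, which is what a bound like E∗2 sharp for all s would settle.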

slide-121
SLIDE 121

Thank you!

◮ D. de Laat, Moment methods in energy minimization: New bounds for Riesz minimal energy problems, in preparation.

◮ D. de Laat, Moment methods in extremal geometry, PhD thesis, Delft University of Technology, 2016.

◮ D. de Laat, F. Vallentin, A semidefinite programming hierarchy for packing problems in discrete geometry, Math. Program., Ser. B 151 (2015), 529–553.

◮ D. de Laat, F.M. de Oliveira Filho, F. Vallentin, Upper bounds for packings of spheres of several radii, Forum Math. Sigma 2 (2014), e23 (42 pages).

Image credits: sphere packing: Grek L; elliptope: Philipp Rostalski; sodium chloride: Ben Mills