Polynomial Optimization in Quantum Information Theory

Sabine Burgdorf

University of Konstanz

ICERM - 2018 Real Algebraic Geometry and Optimization


Warm Up

◮ Entanglement is one of the key features in Quantum Information
◮ Bell ’64: the classical correlation set C vs. the quantum correlation set Q
◮ How to distinguish C and Q?
◮ What is the correct definition for Q? Does it matter?
◮ Can Polynomial Optimization help to understand these sets?


RAG and POP basics

Polynomial Optimization

◮ f ∈ R[X], a polynomial in commuting variables
◮ g0 = 1, g1, . . . , gr ∈ R[X] defining a semi-algebraic set:

K = {a ∈ Rⁿ | g0(a) ≥ 0, . . . , gr(a) ≥ 0}

◮ Want to minimize f over K:

f∗ = inf { f(a) : a ∈ K } = sup { a ∈ R : f − a ≥ 0 on K }

◮ NP-hard


RAG and POP basics

RAG helps

f∗ = sup { a ∈ R : f − a ≥ 0 on K }    NP-hard

◮ M(g) := { p = Σ_j h_j² g_{i_j} for some h_j ∈ R[X] }
◮ sos relaxation:

fsos = sup { a ∈ R : f − a ∈ M(g) }    “SDP”

◮ fsos is always a lower bound, but might be strict, e.g. for

x1⁴x2² + x1²x2⁴ − 3x1²x2² + 1

◮ If M(g) is archimedean: f∗ = fsos
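As a quick numeric sanity check (my illustration, not part of the talk): the Motzkin-type polynomial above is nonnegative on all of R² by the AM-GM inequality, with minimum value 0, yet it is not a sum of squares, so the unconstrained sos bound is strict.

```python
# Numeric sanity check of the Motzkin-type polynomial from the slide:
# f(x1, x2) = x1^4 x2^2 + x1^2 x2^4 - 3 x1^2 x2^2 + 1.
# It is nonnegative everywhere but not a sum of squares.

def motzkin(x1, x2):
    return x1**4 * x2**2 + x1**2 * x2**4 - 3 * x1**2 * x2**2 + 1

# Scan a grid: the minimum value 0 is attained at |x1| = |x2| = 1.
vals = [motzkin(i / 10, j / 10) for i in range(-30, 31) for j in range(-30, 31)]
assert min(vals) >= 0          # nonnegative at every grid point
assert motzkin(1, 1) == 0      # a global minimizer
```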


RAG and POP basics

SOS hierarchy

◮ M(g)t := { p = Σ_j h_j² g_{i_j} for some h_j ∈ R[X]t }
◮ sos hierarchy:

ft = sup { a ∈ R : f − a ∈ M(g)t }    SDP

◮ We have
  ◮ ft ≤ ft+1 ≤ f∗
  ◮ ft converges to fsos as t → ∞
  ◮ If M(g) is archimedean: fsos = f∗
◮ Certificate of exactness:
  ◮ flatness of the dual solution
  ◮ allows extraction of optimizers


NC-RAG and NC-POP

NC Polynomials

◮ Want to replace scalar variables by matrices/operators
◮ Free algebra R⟨X⟩ with noncommuting variables X1, . . . , Xn
◮ Polynomial

f = Σ_w f_w w    (sum over words w in X1, . . . , Xn)

◮ Let A ∈ (Sd)ⁿ, an n-tuple of symmetric d × d matrices:

f(A) = f_1 Id + f_{X1} A1 + f_{X2X1} A2A1 + · · ·

◮ Add an involution ∗ on R⟨X⟩
  ◮ fixes R and {X1, . . . , Xn} pointwise
  ◮ Xi∗ = Xi
◮ Consequence:

f∗f(A) = f(A)ᵀ f(A) ⪰ 0
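A small numerical illustration of the last point (my example, with an arbitrarily chosen polynomial, not one from the talk): evaluating an nc polynomial on a tuple of symmetric matrices and checking that f∗f(A) is positive semidefinite.

```python
import numpy as np

# Evaluate the nc polynomial f = 2*X1*X2 - X2 on a pair of symmetric
# matrices and check that f*f(A) = f(A)^T f(A) is positive semidefinite.

rng = np.random.default_rng(0)

def sym(d):
    """Random symmetric d x d matrix."""
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

A1, A2 = sym(4), sym(4)

fA = 2 * A1 @ A2 - A2          # f(A); note A1 @ A2 != A2 @ A1 in general
gram = fA.T @ fA               # f*f(A)

# All eigenvalues of a Gram matrix are nonnegative (up to rounding).
assert np.all(np.linalg.eigvalsh(gram) >= -1e-10)
# The variables genuinely do not commute here:
assert not np.allclose(A1 @ A2, A2 @ A1)
```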


NC-RAG and NC-POP

NC Polynomial Optimization

◮ Let f ∈ R⟨X⟩
◮ g0 = 1, g1, . . . , gr ∈ R⟨X⟩ defining a semi-algebraic set:

K = {A | g0(A) ⪰ 0, . . . , gr(A) ⪰ 0}

◮ Want to minimize f over K:

f∗ = sup { a ∈ R : f − a ⪰ 0 on K }


NC-RAG and NC-POP

Eigenvalue optimization

◮ Let f ∈ R⟨X⟩

fnc = sup { a ∈ R : f − a ⪰ 0 on K }    NP-hard

◮ Observation: checking whether f = Σ_i h_i∗ h_i is an SDP,
  as is checking f = Σ_j h_j∗ g_{i_j} h_j (with degree bounds)
◮ sos relaxation:

Mnc(g) := { p = Σ_j h_j∗ g_{i_j} h_j for some h_j ∈ R⟨X⟩ }

fsos = sup { a ∈ R : f − a ∈ Mnc(g) }

◮ Fact: fsos ≤ fnc
◮ Theorem (Helton et al.): If Mnc(g) is archimedean, then fsos = fnc.


NC-RAG and NC-POP

Eigenvalue optimization

◮ Let f ∈ R⟨X⟩

fnc = sup { a ∈ R : f − a ⪰ 0 on K }    NP-hard

◮ Mnc(g)t := { p = Σ_j h_j∗ g_{i_j} h_j for some h_j ∈ R⟨X⟩t }
◮ sos hierarchy:

ft = sup { a ∈ R : f − a ∈ Mnc(g)t }    SDP

◮ ft ≤ ft+1 ≤ fnc, but the inequalities might be strict
◮ ft converges to fsos as t → ∞
◮ If Mnc(g) is archimedean: fsos = fnc, hence ft → fnc as t → ∞


NC-RAG and NC-POP

Trace optimization

◮ Let f ∈ R⟨X⟩

ftr = sup { a ∈ R : Tr(f − a) ≥ 0 on K }    NP-hard

◮ K contains only operators for which a trace is defined
◮ If f = Σ_j h_j∗ g_{i_j} h_j + Σ_k [pk, qk], then Tr(f(A)) ≥ 0 for all A ∈ K
◮ sos relaxation:

Mtr(g) := { Σ_j h_j∗ g_{i_j} h_j for some h_j ∈ R⟨X⟩ } + [R⟨X⟩, R⟨X⟩]

fsos = sup { a ∈ R : f − a ∈ Mtr(g) }

◮ Fact: fsos ≤ ftr
◮ Theorem (B., Klep et al.): If Mtr(g) is archimedean, then fsos = ftr.
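The reason commutators come for free in the trace setting: on matrices Tr(PQ) = Tr(QP), so every commutator [P, Q] is traceless. A quick numerical check (my illustration, not from the talk):

```python
import numpy as np

# On matrices, Tr(PQ) = Tr(QP), so every commutator [P, Q] = PQ - QP
# has trace zero; this is why [R<X>, R<X>] can be added to Mtr(g).

rng = np.random.default_rng(1)
P = rng.standard_normal((5, 5))
Q = rng.standard_normal((5, 5))

commutator = P @ Q - Q @ P
assert abs(np.trace(commutator)) < 1e-10   # traceless up to rounding
```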


NC-RAG and NC-POP

Trace optimization

◮ Let f ∈ R⟨X⟩

ftr = sup { a ∈ R : Tr(f − a) ≥ 0 on K }    NP-hard

◮ Mtr(g)t := { Σ_j h_j∗ g_{i_j} h_j for some h_j ∈ R⟨X⟩t } + [R⟨X⟩, R⟨X⟩]
◮ sos hierarchy:

ft = sup { a ∈ R : f − a ∈ Mtr(g)t }    SDP

◮ ft ≤ ft+1 ≤ ftr, but the inequalities might be strict
◮ ft converges to fsos as t → ∞
◮ If Mtr(g) is archimedean: fsos = ftr, hence ft → ftr as t → ∞


Back to Quantum Information

◮ Entanglement is one of the key features in Quantum Information
◮ Bell ’64: the classical correlation set C vs. the quantum correlation set Q
◮ How to distinguish C and Q?
◮ What is the correct definition for Q? Does it matter?
◮ Can Polynomial Optimization help to understand these sets?


Basics of quantum theory

◮ A quantum system corresponds to a Hilbert space H
◮ Its states are unit vectors in H
◮ A state of a composite system is a unit vector ψ in a tensor product Hilbert space, e.g. HA ⊗ HB
◮ ψ is entangled if it is not a product state ψA ⊗ ψB with ψA ∈ HA, ψB ∈ HB
◮ A state ψ ∈ H can be measured
  ◮ outcomes a ∈ A
  ◮ POVM: a family {Ea}a∈A ⊆ B(H) with Ea ⪰ 0 and Σ_{a∈A} Ea = 1
  ◮ the probability of getting outcome a is p(a) = ψᵀEaψ
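A minimal worked instance of the measurement rule (my example, not from the slides): a qubit state measured with the computational-basis POVM, whose outcome probabilities sum to 1.

```python
import numpy as np

# Measure the qubit state psi = (|0> + |1>)/sqrt(2) with the POVM
# {E0, E1} = {|0><0|, |1><1|}; outcome probabilities p(a) = psi^T Ea psi.

psi = np.array([1.0, 1.0]) / np.sqrt(2)
E0 = np.array([[1.0, 0.0], [0.0, 0.0]])
E1 = np.array([[0.0, 0.0], [0.0, 1.0]])

assert np.allclose(E0 + E1, np.eye(2))   # POVM elements sum to the identity
p = [psi @ E @ psi for E in (E0, E1)]
assert np.allclose(p, [0.5, 0.5])        # uniform outcome probabilities
assert np.isclose(sum(p), 1.0)           # probabilities sum to 1
```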


Nonlocal bipartite correlations

◮ Question sets S, T; answer sets A, B
◮ No (classical) communication

[Diagram: a referee sends question s ∈ S to Alice and t ∈ T to Bob, who return answers a ∈ A and b ∈ B]

◮ Which correlations p(a, b | s, t) are possible?


Correlations

Classical strategy C

Independent probability distributions {p^a_s}a and {p^b_t}b:

p(a, b | s, t) = p^a_s · p^b_t

Shared randomness: allow convex combinations.

Quantum strategy Q

POVMs {E^a_s}a and {F^b_t}b on Hilbert spaces HA, HB, and ψ ∈ HA ⊗ HB:

p(a, b | s, t) = ψᵀ(E^a_s ⊗ F^b_t)ψ

◮ Nonlocality: (E^a_s ⊗ 1)(1 ⊗ F^b_t) = (1 ⊗ F^b_t)(E^a_s ⊗ 1)
◮ If ψ = ψA ⊗ ψB, then the correlation is classical
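The nonlocality commutation above holds automatically for tensor-product operators; a quick numerical check with Kronecker products (my illustration, not from the talk):

```python
import numpy as np

# Operators of the form E (x) 1 and 1 (x) F on a tensor product space
# always commute, and their product is E (x) F.

rng = np.random.default_rng(2)
E = rng.standard_normal((2, 2))
F = rng.standard_normal((3, 3))

lhs = np.kron(E, np.eye(3)) @ np.kron(np.eye(2), F)
rhs = np.kron(np.eye(2), F) @ np.kron(E, np.eye(3))
assert np.allclose(lhs, rhs)            # (E ⊗ 1)(1 ⊗ F) = (1 ⊗ F)(E ⊗ 1)
assert np.allclose(lhs, np.kron(E, F))  # both equal E ⊗ F
```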


More correlations

Quantum strategy Q

POVMs {E^a_s}a and {F^b_t}b on Hilbert spaces HA, HB, and ψ ∈ HA ⊗ HB:

p(a, b | s, t) = ψᵀ(E^a_s ⊗ F^b_t)ψ

Quantum strategy Qc

POVMs {E^a_s}a and {F^b_t}b on a joint Hilbert space, but [E^a_s, F^b_t] = 0:

p(a, b | s, t) = ψᵀ(E^a_s · F^b_t)ψ

Fact

C ⊆ Q ⊆ Q̄ ⊆ Qc


Tsirelson’s problem

Fact

C ⊆ Q ⊆ Q̄ ⊆ Qc

◮ Bell: C ≠ Q
◮ closure conjecture [Slofstra ’16]: Q ≠ Q̄
◮ weak Tsirelson [Slofstra ’16]: Q ≠ Qc
◮ Dykema et al. ’17: concrete example in a decent subset of Q
◮ strong Tsirelson (open): Is Q̄ = Qc?
◮ strong Tsirelson is equivalent to the Connes embedding problem


Nonlocal games

◮ Characterized by
  ◮ 2 sets of questions S, T, asked according to a probability distribution π
  ◮ 2 sets of answers A, B
  ◮ a winning predicate V : A × B × S × T → {0, 1}
◮ Winning probability (value of the game)

ω = sup_p Σ_{s∈S, t∈T} π(s, t) Σ_{a∈A, b∈B} V(a, b; s, t) p(a, b | s, t) = sup_p Σ_{a,b,s,t} f_{abst} p(a, b | s, t)

◮ optimize over correlations p ∈ {C, Q, Qc}


SOS relaxation over C

ωC = sup_p Σ_{a,b,s,t} f_{abst} p^a_s · p^b_t

◮ We can write this as a POP:
  ◮ f(p, q) := Σ_{a,b,s,t} f_{abst} p^a_s · q^b_t ∈ R[p, q]
  ◮ K = { (p, q) | p^a_s, q^b_t ≥ 0, Σ_a p^a_s = Σ_b q^b_t = 1 }
  ◮ M(g) is archimedean
◮ Hence

ωC = sup { f(p, q) : (p, q) ∈ K }
   = inf { a ∈ R : a − f ≥ 0 on K }
   = inf { a ∈ R : a − f ∈ M(g) }    (fsos)
   ≤ inf { a ∈ R : a − f ∈ M(g)t }   (ft)

◮ Converging hierarchy of SDP upper bounds


SOS relaxation over Qc

ωQc = sup Σ_{a,b,s,t} f_{abst} ψᵀ(E^a_s · F^b_t)ψ

◮ We can write this as an NC-POP:
  ◮ f(E, F) := Σ_{a,b,s,t} f_{abst} E^a_s · F^b_t ∈ R⟨E, F⟩
  ◮ K = { (E, F) | E^a_s, F^b_t ⪰ 0, Σ_a E^a_s = Σ_b F^b_t = 1, [E^a_s, F^b_t] = 0 }
  ◮ Mnc(g) is archimedean
◮ Hence

ωQc = sup { ψᵀf(E, F)ψ : (E, F) ∈ K }
    = inf { a ∈ R : a − f ⪰ 0 on K }
    = inf { a ∈ R : a − f ∈ Mnc(g) }    (fsos)
    ≤ inf { a ∈ R : a − f ∈ Mnc(g)t }   (ft)

◮ Converging hierarchy of SDP upper bounds


SOS relaxation over Q

ωQ = sup Σ_{a,b,s,t} f_{abst} ψᵀ(E^a_s ⊗ F^b_t)ψ

◮ Cameron et al.: For most games we have

p(a, b | s, t) = Tr(Ẽ^a_s · F̃^b_t)

with Ẽ^a_s, F̃^b_t ⪰ 0, Σ_a Ẽ^a_s = Σ_b F̃^b_t = D and Tr(D²) = 1
◮ We can write this as an NC-POP:
  ◮ f(E, F) := Σ_{a,b,s,t} f_{abst} E^a_s · F^b_t ∈ R⟨E, F⟩
  ◮ K = { (E, F, D) | E^a_s, F^b_t ⪰ 0, Σ_a E^a_s = Σ_b F^b_t = D, Tr(D²) = 1 }
◮ Hence

ωQ = sup { Tr f(E, F) : (E, F, D) ∈ K }
   ≤ inf { a ∈ R : a − f ∈ Mtr(g) }
   ≤ inf { a ∈ R : a − f ∈ Mtr(g)t }

◮ Converging sequence of SDP upper bounds


CHSH Game

◮ Questions S = T = {0, 1}, answers A = B = {0, 1}

[Diagram: referee sends s to Alice and t to Bob; they answer a and b]

◮ Alice & Bob win if a + b ≡ st mod 2
◮ ωC = 3/4
◮ ωQ = ωQc = 1/2 + 1/(2√2) ≈ 0.854
◮ The 1st level of the SOS hierarchies is already exact
◮ Alternative formulation:
  ◮ 2 measurements with 2 outcomes each: E^0_s, E^1_s, F^0_t, F^1_t
  ◮ Setting Es := E^0_s − E^1_s, Ft := F^0_t − F^1_t one obtains the CHSH inequality

fCHSH := E0F0 + E0F1 + E1F0 − E1F1

◮ Optimizing fCHSH over variants of C, Q gives ωC, ωQ
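Both CHSH values can be verified numerically. The sketch below brute-forces the classical value over deterministic strategies and evaluates the standard optimal quantum strategy (maximally entangled state with rotated qubit measurements); the quantum construction is the textbook one, not taken from the slides.

```python
import itertools
import numpy as np

# Classical value: by convexity it suffices to maximize over
# deterministic strategies a(s), b(t), with uniform pi(s, t) = 1/4.
def classical_value():
    best = 0.0
    for a in itertools.product((0, 1), repeat=2):      # Alice's answers a(s)
        for b in itertools.product((0, 1), repeat=2):  # Bob's answers b(t)
            wins = sum((a[s] + b[t]) % 2 == (s * t) % 2
                       for s in (0, 1) for t in (0, 1))
            best = max(best, wins / 4)
    return best

assert classical_value() == 0.75                       # omega_C = 3/4

# Quantum value: maximally entangled state + optimal qubit observables.
sz = np.array([[1, 0], [0, -1]], dtype=float)
sx = np.array([[0, 1], [1, 0]], dtype=float)
A = [sz, sx]                                           # Alice's observables
B = [(sz + sx) / np.sqrt(2), (sz - sx) / np.sqrt(2)]   # Bob's observables
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)              # (|00> + |11>)/sqrt(2)

def proj(obs, outcome):
    """Projector onto the (-1)^outcome eigenspace of a +-1 observable."""
    return (np.eye(2) + (-1) ** outcome * obs) / 2

omega_q = sum(0.25 * psi @ np.kron(proj(A[s], a), proj(B[t], b)) @ psi
              for s in (0, 1) for t in (0, 1)
              for a in (0, 1) for b in (0, 1)
              if (a + b) % 2 == (s * t) % 2)
assert np.isclose(omega_q, 0.5 + 1 / (2 * np.sqrt(2)))  # ~ 0.854
```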


I3322 inequality

◮ Questions S = T = {0, 1, 2}, answers A = B = {0, 1}

f := E0F0 + E0F1 + E0F2 + E1F0 + E1F1 − E1F2 + E2F0 − E2F1 − E0 − 2F0 − F1

◮ Maximizing over C: f∗ ≤ 0
◮ Best known lower bound over Q: 0.250875384
◮ NC-SOS upper bounds:

level | psd        | trace
1     | 0.375      | 0.375
2     | 0.25094006 | 0.2509397
3     | 0.25087556 | 0.2508754

◮ Pal & Vertesi computed (eigenvalue) SOS bounds for 240 Bell inequalities, of which 20 do not match (≥ 10⁻⁴) the lower bound. 4 of them become exact (≤ 10⁻⁸) using trace SOS bounds, and about half of them improve.
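The classical bound f∗ ≤ 0 for the I3322 functional can be brute-forced (my check, not from the talk): by convexity it is enough to scan the 64 deterministic {0, 1} assignments of the E's and F's.

```python
import itertools

# Brute-force the classical maximum of the I3322 functional above
# over deterministic assignments E_i, F_j in {0, 1}.

def i3322(e, f):
    return (e[0]*f[0] + e[0]*f[1] + e[0]*f[2]
            + e[1]*f[0] + e[1]*f[1] - e[1]*f[2]
            + e[2]*f[0] - e[2]*f[1]
            - e[0] - 2*f[0] - f[1])

best = max(i3322(e, f)
           for e in itertools.product((0, 1), repeat=3)
           for f in itertools.product((0, 1), repeat=3))
assert best == 0   # the classical bound f* <= 0 is tight
```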


Quantum coloring as feasibility problem

χ(G) = min t ∈ N s.t. ∃ x^i_u ∈ {0, 1}, u ∈ V(G), i ∈ [t]:

Σ_{i∈[t]} x^i_u = 1    ∀ u ∈ V(G),
x^i_u x^j_u = 0    ∀ i ≠ j, ∀ u ∈ V(G),
x^i_u x^i_v = 0    ∀ uv ∈ E(G)


Quantum coloring as feasibility problem

χq(G) = min t ∈ N s.t. ∃ x^i_u ⪰ 0, u ∈ V(G), i ∈ [t]:

Σ_{i∈[t]} x^i_u = 1    ∀ u ∈ V(G),
x^i_u x^j_u = 0    ∀ i ≠ j, ∀ u ∈ V(G),        (∗)
x^i_u x^i_v = 0    ∀ uv ∈ E(G),
(x^i_u)² = x^i_u    ∀ u ∈ V(G), i ∈ [t]

◮ We can write this as

min t ∈ N s.t. ∃ operator solution of (∗)
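In the scalar case the feasibility system just encodes proper colorings: x^i_u = 1 iff vertex u receives color i. A brute-force χ(G) for a small example (the 5-cycle; the graphs G13, G14 mentioned later are larger and not reproduced here):

```python
import itertools

# Scalar solutions of the system encode proper colorings, so chi(G)
# is the least number of colors in a proper coloring. Brute force
# works for tiny graphs.

def chromatic_number(n_vertices, edges):
    for t in range(1, n_vertices + 1):
        for coloring in itertools.product(range(t), repeat=n_vertices):
            if all(coloring[u] != coloring[v] for u, v in edges):
                return t

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
assert chromatic_number(5, c5) == 3   # odd cycles need 3 colors
```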


Nullstellensätze

Let g1, . . . , gr ∈ C[X].

Theorem (weak Nullstellensatz)
Let I = (g1, . . . , gr) and V(I) := {a ∈ Cⁿ | g1(a) = · · · = gr(a) = 0}. Then V(I) = ∅ ⇔ 1 ∈ I.

Let g1, . . . , gr ∈ C⟨X⟩.

Theorem (Amitsur Nullstellensatz)
Let Z(I) := {A ∈ Rⁿ | R a primitive ring, g1(A) = · · · = gr(A) = 0}. Then Z(I) = ∅ ⇔ 1 ∈ (g1, . . . , gr).

◮ We have an algorithm to compute NC Gröbner bases, but it might not terminate...
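In the commutative case the certificate is algorithmic: by the weak Nullstellensatz, V(I) = ∅ iff 1 ∈ I, which a reduced Gröbner basis detects. A small sketch using SymPy (my example; the slides' NC computations use noncommutative Gröbner bases instead):

```python
from sympy import groebner, symbols

# Weak Nullstellensatz in action: x^2 = 1 and x = 2 have no common
# zero, so 1 lies in the ideal and the reduced Groebner basis is {1}.

x = symbols('x')
gb = groebner([x**2 - 1, x - 2], x)
assert list(gb.exprs) == [1]   # basis reduces to {1}: V(I) is empty
```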


Against all odds...1

◮ Gröbner basis: 4 ≤ χq(G13) ≤ χ(G13) = 4
◮ Consequence: χq(G14) = 4 < 5 = χ(G14)

¹with Piovesan, Mancinska, Roberson


Final Remarks

◮ Quantum theory gives the archimedean property for NC-SOS relaxations
◮ The dual side (linear forms & moments) offers even more bounds (Laurent et al.)
◮ We can transfer the flatness machinery & might obtain concrete optimizers/strategies

Open problems

◮ What is the geometry of (quantum) correlations?
◮ Is there always a finite dimensional solution/strategy for a finite game?
◮ How can we detect optimality if there is no finite dimensional solution?

Thank you for your attention.


POEMA

Polynomial Optimization, Efficiency through Moments and Algebra

Marie Skłodowska-Curie Innovative Training Network, 2019-2022

The POEMA network's goal is to train scientists at the interplay of algebra, geometry and computer science for polynomial optimization problems, and to foster scientific and technological advances, stimulating interdisciplinary and intersectoral knowledge exchange between algebraists, geometers, computer scientists and industrial actors facing real-life optimization problems.

Partners:
1. Inria, Sophia Antipolis, France (Bernard Mourrain)
2. CNRS, LAAS, Toulouse, France (Didier Henrion)
3. Sorbonne Université, Paris, France (Mohab Safey el Din)
4. NWO-I/CWI, Amsterdam, the Netherlands (Monique Laurent)
5. Univ. Tilburg, the Netherlands (Etienne de Klerk)
6. Univ. Konstanz, Germany (Markus Schweighofer)
7. Univ. degli Studi di Firenze, Italy (Giorgio Ottaviani)
8. Univ. of Birmingham, UK (Michal Kočvara)
9. F.A. Univ. Erlangen-Nuremberg, Germany (Michael Stingl)
10. Univ. of Tromsoe, Norway (Cordian Riener)
11. Artelys SA, Paris, France (Arnaud Renaud)

Associate partners:
1. IBM Research, Ireland (Martin Mevissen)
2. NAG, UK (Mike Dewar)
3. RTE, France (Jean Maeght)

15 PhD positions available from Sep. 1st 2019.
Contact: bernard.mourrain@inria.fr, the partner leaders, www-sop.inria.fr/members/Bernard.Mourrain/announces/POEMA/