

slide-1
SLIDE 1

Quantum Causal Structures

Christian Majenz

University of Copenhagen Joint work with Rafael Chaves and David Gross, University of Freiburg (arXiv:1407.3800)

04.12.2014

slide-2
SLIDE 2

Motivation

slide-3
SLIDE 3

Question

slide-4
SLIDE 4

Question

Can we rule out certain causal relationships by just looking at statistical data?

slide-5
SLIDE 5

Question

Can we rule out certain causal relationships by just looking at statistical data? Example:

[Diagram: causal DAG with observed nodes Smoking and Lung Cancer and an unobserved node Gene]

slide-6
SLIDE 6

Question

Can we rule out certain causal relationships by just looking at statistical data? Example:

[Diagram: causal DAG with observed nodes Smoking and Lung Cancer and an unobserved node Gene]

slide-7
SLIDE 7

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

slide-8
SLIDE 8

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

◮ Introduced by Pearl in the 1980s

slide-9
SLIDE 9

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

◮ Introduced by Pearl in the 1980s

◮ Necessary conditions for probability distributions to come from a more restrictive causal model

slide-10
SLIDE 10

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

◮ Introduced by Pearl in the 1980s

◮ Necessary conditions for probability distributions to come from a more restrictive causal model

◮ Tools: Bayesian networks, entropies, convexity

slide-11
SLIDE 11

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

◮ Introduced by Pearl in the 1980s

◮ Necessary conditions for probability distributions to come from a more restrictive causal model

◮ Tools: Bayesian networks, entropies, convexity

[?] What if the underlying processes are quantum?

slide-12
SLIDE 12

Answer: Causal Inference

◮ Rigorous mathematical way to distinguish different causal relations

◮ Introduced by Pearl in the 1980s

◮ Necessary conditions for probability distributions to come from a more restrictive causal model

◮ Tools: Bayesian networks, entropies, convexity

[?] What if the underlying processes are quantum?

◮ Bell nonlocality etc. are special cases of this

slide-13
SLIDE 13

Structure

Motivation
Classical: Bayesian Networks, Entropic Description
Quantum: Quantum Causal Structure, Entropic Description
Application: Information Causality
Computational Techniques

slide-14
SLIDE 14

Classical

slide-15
SLIDE 15

DAGs

◮ Any directed acyclic graph (DAG) specifies a causal structure:

[Example DAG on nodes 1–6]

slide-16
SLIDE 16

DAGs

◮ Any directed acyclic graph (DAG) specifies a causal structure:

[Example DAG on nodes 1–6]

◮ Directed graph: G = (V, E) with E ⊂ V × V

slide-17
SLIDE 17

DAGs

◮ Any directed acyclic graph (DAG) specifies a causal structure:

[Example DAG on nodes 1–6]

◮ Directed graph: G = (V, E) with E ⊂ V × V

◮ Acyclic: no cycles (causality)

slide-18
SLIDE 18

DAGs

◮ Any directed acyclic graph (DAG) specifies a causal structure:

[Example DAG on nodes 1–6]

◮ Directed graph: G = (V, E) with E ⊂ V × V

◮ Acyclic: no cycles (causality)

◮ Parents of v: pa(v) = {w ∈ V | (w, v) ∈ E}

slide-19
SLIDE 19

DAGs

◮ Any directed acyclic graph (DAG) specifies a causal structure:

[Example DAG on nodes 1–6]

◮ Directed graph: G = (V, E) with E ⊂ V × V

◮ Acyclic: no cycles (causality)

◮ Parents of v: pa(v) = {w ∈ V | (w, v) ∈ E}

◮ Children, ancestors, descendants, non-descendants etc.

slide-20
SLIDE 20

Bayesian networks

Definition (Bayesian network)

Let G = (V, E) be a DAG and X = (Xv)v∈V a collection of random variables indexed by V with joint probability distribution p. Then p is called Markov w.r.t. G if

p(x1, ..., xn) = ∏i∈V p(xi | xpa(i)),

assuming WLOG V = {1, ..., n}. In that case G together with X is called a Bayesian network.
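The factorization in this definition is easy to check numerically. A minimal Python sketch (editor's illustration, not from the talk) for the chain X → Y → Z; the binary conditional tables are hypothetical values chosen for illustration:

```python
import itertools

# Markov factorization for the DAG X -> Y -> Z:
# p(x, y, z) = p(x) p(y|x) p(z|y), one factor p(x_i | x_pa(i)) per node.
# The tables below are hypothetical illustration values.
p_x = {0: 0.6, 1: 0.4}
p_y_given_x = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_z_given_y = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}

def joint(x, y, z):
    """p(x, y, z) as the product of one conditional per node."""
    return p_x[x] * p_y_given_x[x][y] * p_z_given_y[y][z]

total = sum(joint(x, y, z) for x, y, z in itertools.product([0, 1], repeat=3))
print(total)  # a valid joint distribution sums to 1
```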

slide-21
SLIDE 21

Conditional independence

◮ equivalent characterization: v⊥nd(v)|pa(v)

slide-22
SLIDE 22

Conditional independence

◮ equivalent characterization: v ⊥ nd(v) | pa(v)

◮ example:

X → Y → Z ⇒ Z⊥X|Y ⇔ p(x, y, z)p(y) = p(x, y)p(y, z)

slide-23
SLIDE 23

Conditional independence

◮ equivalent characterization: v ⊥ nd(v) | pa(v)

◮ example:

X → Y → Z ⇒ Z⊥X|Y ⇔ p(x, y, z)p(y) = p(x, y)p(y, z)

◮ formalism requires access to joint probability distribution

slide-24
SLIDE 24

Conditional independence

◮ equivalent characterization: v ⊥ nd(v) | pa(v)

◮ example:

X → Y → Z ⇒ Z⊥X|Y ⇔ p(x, y, z)p(y) = p(x, y)p(y, z)

◮ formalism requires access to the joint probability distribution

⇒ obstacle for quantum generalization

slide-25
SLIDE 25

Conditional independence

◮ equivalent characterization: v ⊥ nd(v) | pa(v)

◮ example:

X → Y → Z ⇒ Z⊥X|Y ⇔ p(x, y, z)p(y) = p(x, y)p(y, z)

◮ formalism requires access to the joint probability distribution

⇒ obstacle for quantum generalization

◮ hard algebraic equations
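The polynomial condition above can be verified pointwise for a chain. A short sketch (editor's illustration, with hypothetical conditional tables):

```python
import itertools

# Verify Z ⊥ X | Y for a chain X -> Y -> Z by checking the algebraic
# condition p(x,y,z) p(y) = p(x,y) p(y,z) at every point.
# The conditional tables are hypothetical illustration values.
p_x = {0: 0.6, 1: 0.4}
p_yx = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}
p_zy = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}
p = {(x, y, z): p_x[x] * p_yx[x][y] * p_zy[y][z]
     for x, y, z in itertools.product([0, 1], repeat=3)}

def marg(keep):
    """Marginalize the joint onto the coordinate positions in `keep`."""
    out = {}
    for xyz, v in p.items():
        key = tuple(xyz[i] for i in keep)
        out[key] = out.get(key, 0.0) + v
    return out

p_y, p_xy, p_yz = marg([1]), marg([0, 1]), marg([1, 2])
ok = all(abs(p[(x, y, z)] * p_y[(y,)] - p_xy[(x, y)] * p_yz[(y, z)]) < 1e-12
         for (x, y, z) in p)
print(ok)  # True: the chain satisfies the conditional-independence equation
```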

slide-26
SLIDE 26

Entropies

slide-27
SLIDE 27

Entropies

X random variable (RV) on a finite alphabet X with probability distribution p

slide-28
SLIDE 28

Entropies

X random variable (RV) on a finite alphabet X with probability distribution p

Shannon entropy: H(X) = −∑x∈X p(x) log p(x)

slide-29
SLIDE 29

Entropies

X random variable (RV) on a finite alphabet X with probability distribution p

Shannon entropy: H(X) = −∑x∈X p(x) log p(x)

ρ ∈ S(H) mixed state on a finite-dimensional Hilbert space H = C^d

slide-30
SLIDE 30

Entropies

X random variable (RV) on a finite alphabet X with probability distribution p

Shannon entropy: H(X) = −∑x∈X p(x) log p(x)

ρ ∈ S(H) mixed state on a finite-dimensional Hilbert space H = C^d

Von Neumann entropy: S(ρ) = −tr ρ log ρ
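Both entropies can be computed in a few lines; since S(ρ) equals the Shannon entropy of ρ's eigenvalue spectrum, one function serves both (editor's sketch, log base 2):

```python
import numpy as np

def shannon(p):
    """H(X) = -sum_x p(x) log2 p(x), skipping zero-probability entries."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def von_neumann(rho):
    """S(rho) = -tr(rho log rho) = Shannon entropy of the spectrum."""
    return shannon(np.linalg.eigvalsh(rho))

print(shannon([0.5, 0.5]))                # one bit
rho = np.diag([0.5, 0.5])                 # maximally mixed qubit
print(von_neumann(rho))                   # also one bit
```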

slide-31
SLIDE 31

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

slide-32
SLIDE 32

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

XI, I ⊂ {1, ..., n}

slide-33
SLIDE 33

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

XI, I ⊂ {1, ..., n}

◮ Shannon entropy vector: h(X) = (H(XI))I⊂{1,...,n} ∈ R^(2^n)

slide-34
SLIDE 34

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

XI, I ⊂ {1, ..., n}

◮ Shannon entropy vector: h(X) = (H(XI))I⊂{1,...,n} ∈ R^(2^n)

Quantum:

◮ Multipartite state ρ ∈ S(H) with H = H1 ⊗ ... ⊗ Hn

slide-35
SLIDE 35

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

XI, I ⊂ {1, ..., n}

◮ Shannon entropy vector: h(X) = (H(XI))I⊂{1,...,n} ∈ R^(2^n)

Quantum:

◮ Multipartite state ρ ∈ S(H) with H = H1 ⊗ ... ⊗ Hn

ρI = trIc ρ

slide-36
SLIDE 36

Multiparticle Entropies

Classical:

◮ Collection of RV’s X = (X1, ..., Xn)

XI, I ⊂ {1, ..., n}

◮ Shannon entropy vector: h(X) = (H(XI))I⊂{1,...,n} ∈ R^(2^n)

Quantum:

◮ Multipartite state ρ ∈ S(H) with H = H1 ⊗ ... ⊗ Hn

ρI = trIc ρ

◮ Von Neumann entropy vector: s(ρ) = (S(ρI))I⊂{1,...,n} ∈ R^(2^n)
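The 2^n-component entropy vector can be tabulated directly by marginalizing over every subset (editor's sketch for the classical case, using two perfectly correlated bits):

```python
import itertools
import numpy as np

def subset_entropy(p, keep, n):
    """Shannon entropy H(X_I) of the marginal on coordinate set `keep`."""
    drop = tuple(i for i in range(n) if i not in keep)
    q = np.atleast_1d(p.sum(axis=drop)).ravel()   # axis=() leaves p untouched
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

n = 2
p = np.array([[0.5, 0.0], [0.0, 0.5]])  # two perfectly correlated bits
h = {I: subset_entropy(p, I, n)
     for r in range(n + 1) for I in itertools.combinations(range(n), r)}
print(h)  # entries H(X_I) for I = (), (0,), (1,), (0, 1)
```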

slide-37
SLIDE 37

Convex cones

slide-38
SLIDE 38

Convex cones

What is a convex cone?

slide-39
SLIDE 39

Convex cones

What is a convex cone?

slide-40
SLIDE 40

Convex cones

What is a convex cone? (A set closed under addition and under multiplication by nonnegative scalars.)

slide-41
SLIDE 41

Entropy Cones

Sets of all entropy vectors:

slide-42
SLIDE 42

Entropy Cones

Sets of all entropy vectors:

Classical: Σn = {h(X) | X n-party random variable}

slide-43
SLIDE 43

Entropy Cones

Sets of all entropy vectors:

Classical: Σn = {h(X) | X n-party random variable}

Quantum: Γn = {s(ρ) | ρ n-party quantum state}

slide-44
SLIDE 44

Entropy Cones

Sets of all entropy vectors:

Classical: Σn = {h(X) | X n-party random variable}

Quantum: Γn = {s(ρ) | ρ n-party quantum state}

Σn as well as Γn are convex cones (Zhang & Yeung, 1997; Pippenger, 2003)

slide-45
SLIDE 45

Information Inequalities

Describe convex cone via linear inequalities

slide-46
SLIDE 46

Information Inequalities

Describe convex cone via linear inequalities

slide-47
SLIDE 47

Information Inequalities

Describe convex cone via linear inequalities

Inequalities for Σn: information inequalities

slide-48
SLIDE 48

Information Inequalities

Describe convex cone via linear inequalities

Inequalities for Σn: information inequalities

Example: monotonicity H(X1X2) − H(X2) ≥ 0

slide-49
SLIDE 49

Information Inequalities

Describe convex cone via linear inequalities

Inequalities for Σn: information inequalities

Example: monotonicity H(X1X2) − H(X2) ≥ 0

Inequalities for Γn: quantum information inequalities

slide-50
SLIDE 50

Information Inequalities

Describe convex cone via linear inequalities

Inequalities for Σn: information inequalities

Example: monotonicity H(X1X2) − H(X2) ≥ 0

Inequalities for Γn: quantum information inequalities

Example: weak monotonicity S(ρ12) + S(ρ13) − S(ρ2) − S(ρ3) ≥ 0

slide-51
SLIDE 51

Information Inequalities

Describe convex cone via linear inequalities

Inequalities for Σn: information inequalities

Example: monotonicity H(X1X2) − H(X2) ≥ 0

Inequalities for Γn: quantum information inequalities

Example: weak monotonicity S(ρ12) + S(ρ13) − S(ρ2) − S(ρ3) ≥ 0

Known (quantum) information inequalities provide an outer approximation of the entropy cones
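These inequalities are easy to spot-check on random states. An editor's sketch that tests weak monotonicity S(ρ12) + S(ρ13) − S(ρ2) − S(ρ3) ≥ 0 on a random three-qubit mixed state; the partial trace is the standard reshape-and-trace construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def S(rho):
    """Von Neumann entropy in bits from the eigenvalue spectrum."""
    ev = np.linalg.eigvalsh(rho)
    ev = ev[ev > 1e-12]
    return float(-(ev * np.log2(ev)).sum())

def ptrace(rho, keep, dims):
    """Partial trace keeping the subsystems listed in `keep`."""
    n = len(dims)
    t = rho.reshape(dims + dims)
    traced = 0
    for ax in sorted(set(range(n)) - set(keep), reverse=True):
        k = n - traced                      # subsystems currently present
        t = np.trace(t, axis1=ax, axis2=ax + k)
        traced += 1
    d = int(np.prod([dims[i] for i in keep]))
    return t.reshape(d, d)

# random mixed state on three qubits: rho = G G† / tr(G G†)
G = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
rho = G @ G.conj().T
rho /= np.trace(rho).real
dims = (2, 2, 2)

wm = (S(ptrace(rho, (0, 1), dims)) + S(ptrace(rho, (0, 2), dims))
      - S(ptrace(rho, (1,), dims)) - S(ptrace(rho, (2,), dims)))
print(wm >= -1e-9)  # weak monotonicity holds for every state
```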

slide-52
SLIDE 52

Entropy and Bayesian networks

◮ Consider entropy cone of RVs from Bayesian network

slide-53
SLIDE 53

Entropy and Bayesian networks

◮ Consider entropy cone of RVs from a Bayesian network

◮ Conditional independence: Z⊥X|Y ⇔ I(X : Z|Y) = 0

slide-54
SLIDE 54

Entropy and Bayesian networks

◮ Consider entropy cone of RVs from a Bayesian network

◮ Conditional independence: Z⊥X|Y ⇔ I(X : Z|Y) = 0

◮ Conditional mutual information:

I(X : Z|Y) = H(XY) + H(YZ) − H(XYZ) − H(Y)

slide-55
SLIDE 55

Entropy and Bayesian networks

◮ Consider entropy cone of RVs from a Bayesian network

◮ Conditional independence: Z⊥X|Y ⇔ I(X : Z|Y) = 0

◮ Conditional mutual information:

I(X : Z|Y) = H(XY) + H(YZ) − H(XYZ) − H(Y)

◮ Linear equation

slide-56
SLIDE 56

Entropy and Bayesian networks

◮ Consider entropy cone of RVs from a Bayesian network

◮ Conditional independence: Z⊥X|Y ⇔ I(X : Z|Y) = 0

◮ Conditional mutual information:

I(X : Z|Y) = H(XY) + H(YZ) − H(XYZ) − H(Y)

◮ Linear equation

◮ Intersection with the entropy cone is a convex cone
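The linearity of the constraint is visible when I(X : Z|Y) is evaluated from subset entropies; for a chain X → Y → Z it vanishes, matching Z⊥X|Y (editor's sketch with hypothetical tables):

```python
import numpy as np

# Joint p(x,y,z) = p(x) p(y|x) p(z|y) for a chain, via einsum.
# The tables are hypothetical illustration values.
p_x = np.array([0.6, 0.4])
p_yx = np.array([[0.9, 0.1], [0.2, 0.8]])   # rows: x, cols: y
p_zy = np.array([[0.7, 0.3], [0.5, 0.5]])   # rows: y, cols: z
p = np.einsum('x,xy,yz->xyz', p_x, p_yx, p_zy)

def H(q):
    """Shannon entropy of an arbitrary-shape probability table."""
    q = q.ravel()
    q = q[q > 0]
    return float(-(q * np.log2(q)).sum())

# I(X:Z|Y) = H(XY) + H(YZ) - H(XYZ) - H(Y), a linear expression in entropies
I_cond = H(p.sum(2)) + H(p.sum(0)) - H(p) - H(p.sum((0, 2)))
print(abs(I_cond) < 1e-9)  # True: zero for the Markov chain
```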

slide-57
SLIDE 57

Marginal scenario I

◮ Only some RVs are observed

slide-58
SLIDE 58

Marginal scenario I

◮ Only some RVs are observed

◮ Example:

slide-59
SLIDE 59

Marginal scenario I

◮ Only some RVs are observed

◮ Example:

→ Marginal scenario: {A, B, C}

slide-60
SLIDE 60

Marginal scenario I

◮ Only some RVs are observed

◮ Example:

→ Marginal scenario: {A, B, C}

◮ Plan: Remove unobserved variables from the inequality description of the entropy cone

slide-61
SLIDE 61

Marginal scenario II

slide-62
SLIDE 62

Marginal scenario II

Definition (Marginal Scenario)

Let n ∈ N. A subset M ⊂ 2^{1,...,n} such that I ∈ M and J ⊂ I imply J ∈ M is called a marginal scenario.

slide-63
SLIDE 63

Marginal scenario II

Definition (Marginal Scenario)

Let n ∈ N. A subset M ⊂ 2^{1,...,n} such that I ∈ M and J ⊂ I imply J ∈ M is called a marginal scenario.

Projection of the entropy cone to a marginal scenario ⇒ observable entropy cone.
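The downward-closure condition in the definition can be checked mechanically (editor's sketch; the integer labels standing in for A, B, C are hypothetical):

```python
from itertools import combinations

def is_marginal_scenario(M):
    """Check I ∈ M and J ⊆ I imply J ∈ M (downward closure)."""
    M = {frozenset(I) for I in M}
    return all(frozenset(J) in M
               for I in M for r in range(len(I) + 1)
               for J in combinations(sorted(I), r))

# downward-closed: all subsets of {1,2,3} are present
good = [(), (1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
# not downward-closed: (1, 2) is present but (2,) and () are missing
bad = [(1, 2), (1,)]
print(is_marginal_scenario(good), is_marginal_scenario(bad))  # True False
```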

slide-64
SLIDE 64

Quantum

slide-65
SLIDE 65

Quantum Causal Structures I

◮ DAG G = (V , E)

slide-66
SLIDE 66

Quantum Causal Structures I

◮ DAG G = (V, E)

◮ Sink nodes s ∈ V: Hilbert space Hs

slide-67
SLIDE 67

Quantum Causal Structures I

◮ DAG G = (V, E)

◮ Sink nodes s ∈ V: Hilbert space Hs

◮ Nodes v ∈ V with children: Hilbert space

Hv = ⊗_{w∈V, (v,w)∈E} Hv,w

slide-68
SLIDE 68

Quantum Causal Structures I

◮ DAG G = (V, E)

◮ Sink nodes s ∈ V: Hilbert space Hs

◮ Nodes v ∈ V with children: Hilbert space

Hv = ⊗_{w∈V, (v,w)∈E} Hv,w

H = ⊗_{v∈V} Hv

slide-69
SLIDE 69

Quantum Causal Structures I

◮ DAG G = (V, E)

◮ Sink nodes s ∈ V: Hilbert space Hs

◮ Nodes v ∈ V with children: Hilbert space

Hv = ⊗_{w∈V, (v,w)∈E} Hv,w

H = ⊗_{v∈V} Hv

◮ Parent Hilbert space

Hpa(v) = ⊗_{w∈V, (w,v)∈E} Hw,v

slide-70
SLIDE 70

Quantum Causal Structures II

◮ Initial product state ρ0 = ⊗q ρq on the Hilbert spaces of the source nodes q

slide-71
SLIDE 71

Quantum Causal Structures II

◮ Initial product state ρ0 = ⊗q ρq on the Hilbert spaces of the source nodes q

◮ CPTP maps Φv for each non-source node:

Φv : L(Hpa(v)) → L(Hv)
slide-72
SLIDE 72

Quantum Causal Structures II

◮ Initial product state ρ0 = ⊗q ρq on the Hilbert spaces of the source nodes q

◮ CPTP maps Φv for each non-source node:

Φv : L(Hpa(v)) → L(Hv)

ρv = Φv(ρpa(v))

slide-73
SLIDE 73

Quantum Causal Structures II

◮ Initial product state ρ0 = ⊗q ρq on the Hilbert spaces of the source nodes q

◮ CPTP maps Φv for each non-source node:

Φv : L(Hpa(v)) → L(Hv)

ρv = Φv(ρpa(v))

[!] no global state

slide-74
SLIDE 74

Quantum Causal Structures II

◮ Initial product state ρ0 = ⊗q ρq on the Hilbert spaces of the source nodes q

◮ CPTP maps Φv for each non-source node:

Φv : L(Hpa(v)) → L(Hv)

ρv = Φv(ρpa(v))

[!] no global state

◮ Want classical nodes: pick the right Φv

slide-75
SLIDE 75

Example

[Diagram: DAG with unobserved source node 1 and observed nodes 2, 3]

◮ H = H1,2 ⊗ H1,3 ⊗ H2 ⊗ H3

slide-76
SLIDE 76

Example

[Diagram: DAG with unobserved source node 1 and observed nodes 2, 3]

◮ H = H1,2 ⊗ H1,3 ⊗ H2 ⊗ H3

◮ States on the coexisting subsets of systems:

ρ0 = ρ(1,2),(1,3)
ρ(1,3),2 = (Φ2 ⊗ 1) ρ(1,2),(1,3)
ρ(1,2),3 = (1 ⊗ Φ3) ρ(1,2),(1,3)
ρ2,3 = (Φ2 ⊗ Φ3) ρ(1,2),(1,3)

slide-77
SLIDE 77

The entropy cone

◮ Look at entropy cone of states constructed like this

slide-78
SLIDE 78

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks
slide-79
SLIDE 79

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks

◮ Formally: Entropy vector v ∈ R^(2^n)

slide-80
SLIDE 80

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks

◮ Formally: Entropy vector v ∈ R^(2^n)

◮ vI = S(ρI) if the state ρI exists

slide-81
SLIDE 81

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks

◮ Formally: Entropy vector v ∈ R^(2^n)

◮ vI = S(ρI) if the state ρI exists

◮ vI arbitrary if ρI doesn’t exist

slide-82
SLIDE 82

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks

◮ Formally: Entropy vector v ∈ R^(2^n)

◮ vI = S(ρI) if the state ρI exists

◮ vI arbitrary if ρI doesn’t exist

◮ For each subset J ⊂ E ∪ Vs of coexisting systems: quantum entropy cone

slide-83
SLIDE 83

The entropy cone

◮ Look at entropy cone of states constructed like this

◮ H is a tensor product of n = |E| + |Vs| Hilbert spaces, Vs: set of sinks

◮ Formally: Entropy vector v ∈ R^(2^n)

◮ vI = S(ρI) if the state ρI exists

◮ vI arbitrary if ρI doesn’t exist

◮ For each subset J ⊂ E ∪ Vs of coexisting systems: quantum entropy cone

◮ Extra monotonicities for classical systems

slide-84
SLIDE 84

Data processing inequality

◮ replace conditional independence relations?

slide-85
SLIDE 85

Data processing inequality

◮ replace conditional independence relations?

→ Data processing inequality: for a CPTP map Φ : L(HA) → L(HB) taking system A to B, I(A : C|D) ≥ I(B : C|D) for any other systems C and D.

slide-86
SLIDE 86

Data processing inequality

◮ replace conditional independence relations?

→ Data processing inequality: for a CPTP map Φ : L(HA) → L(HB) taking system A to B, I(A : C|D) ≥ I(B : C|D) for any other systems C and D.

◮ relates entropies of noncoexisting systems

slide-87
SLIDE 87

Data processing inequality

◮ replace conditional independence relations?

→ Data processing inequality: for a CPTP map Φ : L(HA) → L(HB) taking system A to B, I(A : C|D) ≥ I(B : C|D) for any other systems C and D.

◮ relates entropies of noncoexisting systems

◮ replaces conditional independence

slide-88
SLIDE 88

Marginal Scenario

◮ Again, only some systems are observed

slide-89
SLIDE 89

Marginal Scenario

◮ Again, only some systems are observed

◮ Example:

slide-90
SLIDE 90

Marginal Scenario

◮ Again, only some systems are observed

◮ Example:

slide-91
SLIDE 91

Marginal Scenario

◮ Again, only some systems are observed

◮ Example:

slide-92
SLIDE 92

Marginal Scenario

◮ Again, only some systems are observed

◮ Example:

◮ Most interesting marginal scenario: {A, B, C} & assume these nodes classical

slide-93
SLIDE 93

Application

slide-94
SLIDE 94

Information Causality

The IC principle is defined by a game:

slide-95
SLIDE 95

Information Causality

The IC principle is defined by a game:

slide-96
SLIDE 96

Information Causality

The IC principle is defined by a game:

slide-97
SLIDE 97

Information Causality

The IC principle is defined by a game:

ρAB

slide-98
SLIDE 98

Information Causality

The IC principle is defined by a game:

X1,X2

ρAB

slide-99
SLIDE 99

Information Causality

The IC principle is defined by a game:

M

X1,X2

ρAB

slide-100
SLIDE 100

Information Causality

The IC principle is defined by a game:

M

X1,X2

i

ρAB

slide-101
SLIDE 101

Information Causality

The IC principle is defined by a game:

M

X1,X2

i

ρAB Yi

slide-102
SLIDE 102

Information Causality

The IC principle is defined by a game:

M

X1,X2

i

ρAB Yi

slide-103
SLIDE 103

Information Causality

◮ Corresponding DAG: [nodes X1, S, X2, Y, M, AB]

slide-104
SLIDE 104

Information Causality

◮ Corresponding DAG: [nodes X1, S, X2, Y, M, AB]

◮ Nodes Xi, Y, M, S are classical; only the AB node is quantum

slide-105
SLIDE 105

Information Causality

◮ Corresponding DAG: [nodes X1, S, X2, Y, M, AB]

◮ Nodes Xi, Y, M, S are classical; only the AB node is quantum

◮ Counterfactual reformulation: Yi = (Y |S = i)

slide-106
SLIDE 106

Information Causality

◮ Marginal scenario {X1, Y1}, {X2, Y2}, {M} yields the original information causality inequality:

I(X1 : Y1) + I(X2 : Y2) ≤ H(M)

slide-107
SLIDE 107

Information Causality

◮ Marginal scenario {X1, Y1}, {X2, Y2}, {M} yields the original information causality inequality:

I(X1 : Y1) + I(X2 : Y2) ≤ H(M)

→ More general marginal scenario {X1, X2, Y1, M}, {X1, X2, Y2, M}:

slide-108
SLIDE 108

Information Causality

◮ Marginal scenario {X1, Y1}, {X2, Y2}, {M} yields the original information causality inequality:

I(X1 : Y1) + I(X2 : Y2) ≤ H(M)

→ More general marginal scenario {X1, X2, Y1, M}, {X1, X2, Y2, M}:

◮ Corresponding DAG: [nodes X1, S, X2, Y, M, AB]

slide-109
SLIDE 109

Information Causality

◮ Marginal scenario {X1, Y1}, {X2, Y2}, {M} yields the original information causality inequality:

I(X1 : Y1) + I(X2 : Y2) ≤ H(M)

→ More general marginal scenario {X1, X2, Y1, M}, {X1, X2, Y2, M} yields the strengthened information causality inequality:

I(X1 : Y1, M) + I(X2 : Y2, M) + I(X1 : X2|M) ≤ H(M) + I(X1 : X2)

slide-110
SLIDE 110

Information Causality

slide-111
SLIDE 111

Triangle scenario

[Figure: triangle causal structure]

slide-112
SLIDE 112

Triangle scenario

◮ Classical: e.g.

I(A : B) + I(A : C) ≤ H(A)


slide-113
SLIDE 113

Triangle scenario

◮ Classical: e.g.

I(A : B) + I(A : C) ≤ H(A)

◮ New framework ⇒ also holds for quantum


slide-114
SLIDE 114

Triangle scenario

◮ Classical: e.g.

I(A : B) + I(A : C) ≤ H(A)

◮ New framework ⇒ also holds for quantum

◮ Independently proven by Henson et al., 2014 for non-signalling resources¹

¹ Henson, Joe, Raymond Lal, and Matthew F. Pusey. “Theory-independent limits on correlations from generalised Bayesian networks.” arXiv:1405.2572 (2014).

slide-115
SLIDE 115

Triangle scenario

◮ Classical: e.g.

I(A : B) + I(A : C) ≤ H(A)

◮ New framework ⇒ also holds for quantum

◮ Independently proven by Henson et al., 2014 for non-signalling resources¹

◮ Using their techniques ⇒ causes connecting at most m of n nodes yield an entropic inequality

¹ Henson, Joe, Raymond Lal, and Matthew F. Pusey. “Theory-independent limits on correlations from generalised Bayesian networks.” arXiv:1405.2572 (2014).

slide-116
SLIDE 116

Computational Techniques

slide-117
SLIDE 117

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

slide-118
SLIDE 118

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

◮ Example: x ≤ 2y, z ≤ x + y, y ≤ x; remove y

slide-119
SLIDE 119

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

◮ Example: x ≤ 2y, z ≤ x + y, y ≤ x; remove y

⇒ (1/2)x ≤ x, z − x ≤ x

slide-120
SLIDE 120

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

◮ Example: x ≤ 2y, z ≤ x + y, y ≤ x; remove y

⇒ (1/2)x ≤ x, z − x ≤ x

◮ Problem: double exponential in the number of variables...

slide-121
SLIDE 121

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

◮ Example: x ≤ 2y, z ≤ x + y, y ≤ x; remove y

⇒ (1/2)x ≤ x, z − x ≤ x

◮ Problem: double exponential in the number of variables... i.e. triple exponential in the number of nodes

slide-122
SLIDE 122

Fourier-Motzkin Elimination

◮ Fancy name for removing variables from inequalities

◮ Example: x ≤ 2y, z ≤ x + y, y ≤ x; remove y

⇒ (1/2)x ≤ x, z − x ≤ x

◮ Problem: double exponential in the number of variables... i.e. triple exponential in the number of nodes

◮ Works for small instances (triangle, IC)
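One elimination step is a few lines of Python (editor's sketch): encode each inequality as the coefficients of ∑ c_i v_i ≤ 0 and pair every lower bound on the eliminated variable with every upper bound. The pairwise combination is exactly what drives the doubly exponential blow-up.

```python
from itertools import product

def fm_eliminate(ineqs, var):
    """One Fourier-Motzkin step on inequalities sum_i c_i * v_i <= 0."""
    lower = [q for q in ineqs if q.get(var, 0) < 0]   # lower bounds on var
    upper = [q for q in ineqs if q.get(var, 0) > 0]   # upper bounds on var
    keep = [q for q in ineqs if q.get(var, 0) == 0]
    for lo, up in product(lower, upper):
        a, b = -lo[var], up[var]                       # positive multipliers
        new = {v: b * lo.get(v, 0) + a * up.get(v, 0)  # var's coefficient cancels
               for v in set(lo) | set(up) if v != var}
        keep.append({v: c for v, c in new.items() if c != 0})
    return keep

# the slides' example: x <= 2y, z <= x + y, y <= x; eliminate y
ineqs = [{'x': 1, 'y': -2}, {'z': 1, 'x': -1, 'y': -1}, {'y': 1, 'x': -1}]
print(fm_eliminate(ineqs, 'y'))  # the slides' result, up to positive scaling
```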

slide-123
SLIDE 123

Linear Programming

◮ Easier: check candidate inequality

slide-124
SLIDE 124

Linear Programming

◮ Easier: check a candidate inequality

◮ Formulation as an LP:

minimize ∑_{I⊂{1,...,n}} αI vI subject to SSA, WM, DPIs

slide-125
SLIDE 125

Linear Programming

◮ Easier: check a candidate inequality

◮ Formulation as an LP:

minimize ∑_{I⊂{1,...,n}} αI vI subject to SSA, WM, DPIs

◮ Minimum is 0 if the inequality holds

slide-126
SLIDE 126

Linear Programming

◮ Easier: check a candidate inequality

◮ Formulation as an LP:

minimize ∑_{I⊂{1,...,n}} αI vI subject to SSA, WM, DPIs

◮ Minimum is 0 if the inequality holds

◮ −∞ otherwise

slide-127
SLIDE 127

Linear Programming

◮ Easier: check a candidate inequality

◮ Formulation as an LP:

minimize ∑_{I⊂{1,...,n}} αI vI subject to SSA, WM, DPIs

◮ Minimum is 0 if the inequality holds

◮ −∞ otherwise

[!] still exponential in the number of nodes
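By LP duality (Farkas' lemma), the same check can be run without a solver: α · v ≥ 0 holds on the constraint cone iff α is a nonnegative combination of the constraint rows. An editor's sketch for n = 2 with entropy-vector components (h1, h2, h12), where the three basic Shannon inequalities happen to form an invertible system, so the multipliers come from a single linear solve:

```python
import numpy as np

# Constraint rows r_i, each meaning r_i . (h1, h2, h12) >= 0.
R = np.array([
    [-1, 0, 1],   # monotonicity: h12 - h1 >= 0
    [0, -1, 1],   # monotonicity: h12 - h2 >= 0
    [1, 1, -1],   # submodularity: h1 + h2 - h12 >= 0
], dtype=float)

def implied(alpha):
    """Is alpha . v >= 0 implied? Yes iff alpha = R^T lam with lam >= 0."""
    lam = np.linalg.solve(R.T, np.asarray(alpha, dtype=float))
    return bool((lam >= -1e-12).all())

print(implied([1, 0, 0]))    # h1 >= 0 is implied by the three rows
print(implied([1, 0, -1]))   # h1 - h12 >= 0 is not
```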

slide-128
SLIDE 128

Summary

◮ Defined quantum causal structures

slide-129
SLIDE 129

Summary

◮ Defined quantum causal structures

◮ Algorithm for characterization

slide-130
SLIDE 130

Summary

◮ Defined quantum causal structures

◮ Algorithm for characterization

◮ Applications: strengthening of information causality, quantum networks

slide-131
SLIDE 131

Thank you for your attention!