OPTIMAL QUANTUM LEARNING AND MULTIROUND REFERENCE FRAME ALIGNMENT


slide-1
SLIDE 1

OPTIMAL QUANTUM LEARNING AND MULTIROUND REFERENCE FRAME ALIGNMENT

Giulio Chiribella. Joint works with G M D’Ariano, P Perinotti, A Bisio, and S Facchini. Quantum Information Theory Group, Pavia University. Work supported by the EC project CORNER.

DEX-SMI Workshop on Quantum Statistical Inference, National Institute of Informatics, Tokyo, 2-4 March 2009

slide-2
SLIDE 2
OUTLINE

  • Optimal quantum learning of a unitary transformation from finite examples (arXiv:0903.0543)
  • Optimal correction of an unknown rotation (a little variation on the theme of quantum learning)
  • Multi-round and adaptive alignment of reference frames: equivalence of backward communication with forward communication of charge-conjugate particles

slide-3
SLIDE 3

OPTIMAL QUANTUM LEARNING: WHAT IS IT ABOUT

slide-4
SLIDE 4

LEARNING AN UNKNOWN FUNCTION

Problem: a black box computes an unknown function y = f(x). We can evaluate f on a finite set of points x1, . . . , xN, getting outcomes y1, . . . , yN.

Subsequently, we are asked to compute f on a new point x, without using the black box: f(x) = ?

[Diagram: black box f queried on x1, . . . , xN, yielding y1, . . . , yN.]

In classical computer science, statistical learning provides several efficient solutions to this problem.
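As a minimal classical sketch of this setting (the specific function and the nearest-neighbour learner are illustrative choices, not from the talk):

```python
import numpy as np

# Hypothetical black box computing an unknown function y = f(x).
def black_box(x):
    return np.sin(2.0 * x)   # hidden from the learner

# Evaluate f on N = 20 points, recording the outcomes.
xs = np.linspace(0.0, np.pi, 20)
ys = black_box(xs)

# Simple statistical learner: predict f at a new point x from the
# outcome observed at the nearest queried point, without calling f again.
def predict(x):
    return ys[np.argmin(np.abs(xs - x))]

# Prediction error at a fresh point; small because f is smooth
# and the query grid is dense.
error = abs(predict(1.0) - black_box(1.0))
```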

slide-10
SLIDE 10

CLASSICAL NETWORKS FOR LEARNING

Comparing xi with f(xi) for N times is not the only possibility: this just corresponds to the parallel configuration.

To learn better, one could use a sequential network, interleaving the N calls to f with known functions g1, g2, . . . , gN.

slide-13
SLIDE 13

OPTIMIZATION PROBLEM

Find the optimal strategy to learn an unknown function. This means:

  • find the best network F = gN ◦ f ◦ · · · ◦ g2 ◦ f ◦ g1 ◦ f
  • find the best input X
  • for the outcome Y = F(X), find the optimal guess Y → f̂

Difference with estimation of the function f: estimation corresponds to the special case f̂ ∈ F0, where F0 is the set of possible functions. In general, the optimal guess f̂ does not have to be in F0.
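The sequential network F = gN ◦ f ◦ · · · ◦ g1 ◦ f can be sketched as a loop; the particular f and gi below are hypothetical placeholders:

```python
# Sequential learning network: interleave calls to the black box f
# with known post-processing functions g1, ..., gN.
def sequential_network(f, gs, x):
    for g in gs:
        x = g(f(x))   # one evaluation of f, then a known function
    return x

# Toy example: f(x) = x + 1 with two known post-processing steps.
f = lambda x: x + 1
gs = [lambda x: 2 * x, lambda x: x - 3]
Y = sequential_network(f, gs, 0)   # g2(f(g1(f(0)))) = (2*(0+1)+1) - 3 = 0
```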

slide-14
SLIDE 14

FROM CLASSICAL TO QUANTUM LEARNING

  • Unknown function f → unknown quantum channel E
  • Classical network → quantum network
  • Input X → quantum state ρin
  • Output Y → quantum state ρout

[Diagram: quantum network with input ρin, N uses of E interleaved with channels C1, . . . , CN, producing ρout.]

slide-15
SLIDE 15

GUESSING A CHANNEL FROM A STATE

  • Classical guess Y → f̂; quantum “guess” ρout → Ê

Physical implementation of the quantum guess: a retrieving channel R. It retrieves the unknown transformation from the output state ρout and performs it on a new state ρ:

R(ρ ⊗ ρout) = Ê(ρ)

Target: implementing the unknown channel E with maximum fidelity.

slide-17
SLIDE 17

OPTIMAL QUANTUM LEARNING

Find the optimal strategy to learn an unknown channel E ∈ E0. This means:

  • find the best network N = CN ◦ E ◦ · · · ◦ C2 ◦ E ◦ C1 ◦ E
  • find the best input ρin, giving ρout = N(ρin)
  • find the optimal retrieving channel R: Ê(ρ) = R(ρ ⊗ ρout)

Figure of merit: input-output fidelity

F(E, Ê) = ∫ dϕ F(E(ϕ), Ê(ϕ)),  F(ρ, σ) = Tr[(σ^{1/2} ρ σ^{1/2})^{1/2}]

slide-18
SLIDE 18

“MEASURE-AND-PREPARE” SCHEMES

Particular scheme to retrieve the unknown transformation:

  • perform a measurement {PY} on the output state,
  • for outcome Y perform the channel ÊY

In this case, the retrieving channel is:

Rmeas(ρ ⊗ ρout) = ΣY Tr[PY ρout] ÊY(ρ)

A particular measure-and-prepare scheme: estimation of the channel E ∈ E0. In this case, one has ÊY ∈ E0.

Estimation ∈ {measure-and-prepare schemes} ⊂ {retrieving channels}

slide-19
SLIDE 19

LEARNING AN UNKNOWN UNITARY

Consider the case where the set of channels E0 is a group of unitary transformations.

[Diagram: learning network with input ρin, channels C1, . . . , CN interleaved with N uses of U, followed by the retrieving channel R, which realizes the channel CU.]

Assuming a uniform prior for the unknown unitaries, we have the average fidelity

F = ∫ dU F(U, CU)

slide-20
SLIDE 20

HOW TO OPTIMIZE A QUANTUM NETWORK: QUANTUM COMBS

slide-21
SLIDE 21

CHOI-JAMIOLKOWSKI OPERATORS

Convenient representation of linear maps: the Choi-Jamiolkowski-Belavkin-Staszewski (CJBS) operator

C = (C ⊗ I)(|I⟩⟨I|),  |I⟩ = Σn |n⟩|n⟩

For a unitary channel:

(U ⊗ I)(|I⟩⟨I|) = |U⟩⟨U|,  |U⟩ = (U ⊗ I)|I⟩
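As a quick numerical check of these definitions (a NumPy sketch; the qubit dimension d = 2 and the Pauli-X example are illustrative choices):

```python
import numpy as np

d = 2
I_vec = np.eye(d).reshape(d * d)          # |I> = Σ_n |n>|n> (unnormalized)

U = np.array([[0, 1], [1, 0]])            # example: Pauli X (bit flip)
U_vec = np.kron(U, np.eye(d)) @ I_vec     # |U> = (U ⊗ I)|I>
C = np.outer(U_vec, U_vec.conj())         # Choi operator |U><U|

# For a unitary channel the Choi operator is rank one with Tr C = d.
assert np.isclose(np.trace(C), d)
```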
slide-25
SLIDE 25

LINK PRODUCT

Convenient representation of composition of linear maps: the link product

F ◦ E ⟺ Fcb ∗ Eba := Trb[(Fcb ⊗ Ia)(Ic ⊗ Eba^τb)]

[Diagram: channel E from Hilbert space a to b, composed with channel F from b to c.]

Fcb ∗ Eba = Eba ∗ Fcb up to permutation of Hilbert spaces

GC, G M D’Ariano, and P Perinotti, Phys. Rev. Lett. 101, 060401 (2008)
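A numerical sketch of the link product for qubit spaces (all dimensions set to 2, and Hadamard/phase gates chosen purely for illustration), checking that composition of channels corresponds to the link product of their Choi operators:

```python
import numpy as np

d = 2
I_vec = np.eye(d).reshape(d * d)              # |I> = Σ_n |n>|n>

def choi(U):
    """Choi operator of the unitary channel U, on H_out ⊗ H_in."""
    v = np.kron(U, np.eye(d)) @ I_vec         # |U> = (U ⊗ I)|I>
    return np.outer(v, v.conj())

def link(F, E):
    """Link product F_cb * E_ba = Tr_b[(F_cb ⊗ I_a)(I_c ⊗ E_ba^{τb})]."""
    # partial transpose of E over space b (its first tensor factor)
    E_tb = E.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d*d, d*d)
    M = np.kron(F, np.eye(d)) @ np.kron(np.eye(d), E_tb)  # on c ⊗ b ⊗ a
    # partial trace over the middle space b
    M6 = M.reshape(d, d, d, d, d, d)
    return np.einsum('mbnpbq->mnpq', M6).reshape(d*d, d*d)

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard
V = np.array([[1, 0], [0, 1j]])                # phase gate
# Composition of channels = link product of Choi operators:
assert np.allclose(link(choi(V), choi(U)), choi(V @ U))
```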

slide-31
SLIDE 31

KNOWN FORMULAS IN TERMS OF LINK PRODUCT

  • Tensor product of states: ρa ⊗ σb = ρa ∗ σb
  • Born statistical formula: Tr[ρP] = ρa ∗ Pa^τ
  • Transformation of states: E(ρ) = Eout,in ∗ ρin

States and transformations are treated on an equal footing. Is this a state or a transformation?
slide-34
SLIDE 34

QUANTUM COMBS

Quantum comb = sequential network of quantum operations C1, C2, . . . , CN.

The quantum comb is represented by the Choi operator

S(N) = CN ∗ · · · ∗ C2 ∗ C1
slide-35
SLIDE 35

NORMALIZATION OF COMBS

  • Deterministic comb = network of channels

Recursive normalization of deterministic combs:

Tr2N−1[S(N)] = I2N−2 ⊗ S(N−1)

GC, G M D’Ariano, and P Perinotti, Phys. Rev. Lett. 101, 060401 (2008)

C1

C2

CN

CN−1 Optimize a network = optimize a positive operator under this constraint
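The N = 1 case of this recursion reduces to the familiar trace-preservation condition Trout[S(1)] = Iin, which is easy to check numerically (NumPy sketch; the Hadamard channel is an illustrative choice):

```python
import numpy as np

# N = 1 deterministic comb = a channel, with Choi operator S on H_out ⊗ H_in.
d = 2
I_vec = np.eye(d).reshape(d * d)               # |I> = Σ_n |n>|n>
U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard channel
v = np.kron(U, np.eye(d)) @ I_vec
S = np.outer(v, v.conj())

# Partial trace over the output (first) factor: must give the identity on H_in.
Tr_out = S.reshape(d, d, d, d).trace(axis1=0, axis2=2)
assert np.allclose(Tr_out, np.eye(d))
```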

slide-36
SLIDE 36

ROTATION OF COMBS

  • Rotation of the input/output of a channel = rotation of the Choi operator:

V ◦ C ◦ U† ⟺ (V ⊗ U∗) C (V† ⊗ Uτ)

  • Rotation of the inputs/outputs of a network = rotation of the comb:

S(N) → (V ⊗ U∗)⊗N S(N) (V† ⊗ Uτ)⊗N

slide-37
SLIDE 37

OPTIMIZATION OF LEARNING

[Diagram: learning network with input ρin, channels C1, . . . , CN interleaved with N uses of U, followed by the retrieving channel R.]

Comb of the learning network:

L = R ∗ CN ∗ · · · ∗ C2 ∗ C1 ∗ ρin

Fidelity:

F = (1/d²) ∫ dU (⟨U| ⊗ ⟨U∗|⊗N) L (|U⟩ ⊗ |U∗⟩⊗N)

We can always optimize over covariant combs:

[L, U ⊗ V∗ ⊗ U∗⊗N ⊗ V⊗N] = 0  ∀U, V

slide-41
SLIDE 41

OPTIMALITY OF PARALLEL STRATEGIES

[Diagrams: the sequential network with input ρin and channels C1, . . . , CN interleaved with the N uses of U is rewritten, step by step, as a single parallel application of U⊗N ⊗ IA on an input state ρin of the N systems plus an ancilla A.]

Any covariant network is equivalent to a parallel scheme with ancilla! Learning can be parallelized, in the same way as estimation (cf. previous talk).

slide-44
SLIDE 44

OPTIMAL INPUT STATES

Decomposing the unitaries as

U⊗N ⊗ IA = ⊕J (UJ ⊗ ImJ)

one can prove that the optimal input states have the form

|ψ⟩ = ΣJ (aJ/√dJ) |IJ⟩,  aJ ≥ 0

where |IJ⟩ ∈ HJ ⊗ HJ is a maximally entangled state.

This is the same form as the optimal states for estimation of the unknown unitary U with N copies.

GC, G M D’Ariano, and M F Sacchi, Phys. Rev. A 72, 043448 (2005).

slide-45
SLIDE 45

OPTIMAL RETRIEVING CHANNEL

Theorem: for any group of unitaries, and for an input state of the optimal form

|ψ⟩ = ΣJ (aJ/√dJ) |IJ⟩,  aJ ≥ 0

the optimal retrieving channel to extract U from the states

(U⊗N ⊗ IA)|ψ⟩ = ΣJ (aJ/√dJ) |UJ⟩

is achieved by a “measure-and-prepare” scheme. Precisely, it is achieved by estimation of the unknown unitary U: for outcome Û, just perform the unitary Û.

For the optimal POVM, see GC, G M D’Ariano, and M F Sacchi, Phys. Rev. A 72, 043448 (2005).

slide-46
SLIDE 46

QUANTUM MEMORY DOES NOT IMPROVE LEARNING

[Diagram: the retrieving channel R acting on ρout ⊗ ρ equals a measurement P on ρout, with outcome Û, followed by the unitary Û applied to ρ.]

Optimal retrieving is “measure-and-prepare”: no need to wait for the input state ρ. We can measure immediately after having applied U, and store the outcome Û in a classical memory. What’s more, once we have measured, we can make as many copies as we want. On the contrary, a quantum memory would be degraded every time we access it.

slide-54
SLIDE 54

STABILITY AND INSTABILITY OF OUR RESULT

Our result is stable under the following variations:

  • learning from N to M copies with global fidelity: target U⊗M (optimality for single-copy fidelity is trivial)
  • N non-identical input unitaries and/or non-identical target unitaries
  • performing the inverse of U: target U†
  • any combination of the above

Our result is not stable under the following variations:

  • learning general channels
  • learning unitaries that do not form a group
  • learning with restrictions on the available input states (entanglement)

slide-57
SLIDE 57

ERROR CORRECTION WITH CORRELATED NOISE

Consider the following correlated error model:

DN(ρ) = ∫G dg Ug⊗N ρ Ug†⊗N

Possible coding strategy:

  • use k particles to detect the unitary error
  • use the remaining (N − k) particles to carry the message

Problem: find the best decoding R to maximize the fidelity between R ◦ DN(|e⟩⟨e|⊗k ⊗ ρ(N−k)) and ρ(N−k).

slide-58
SLIDE 58

OPTIMAL CORRECTION SCHEME

The correction problem is equivalent to learning the target U†⊗(N−k) from k examples of U. We know that the optimal scheme is just estimation and preparation. In particular, the optimal states for error correction are the optimal states for estimation.

The optimality of measure-and-prepare retrieving has also been observed for SU(2) and U(1) in arXiv:0812.5040, for k = 1 and for a maximum-likelihood input state

|ψ⟩ ∝ ΣJ |IJ⟩  (i.e. aJ ∝ √dJ in the optimal form)

The state assumed in arXiv:0812.5040 allows probabilistically perfect correction, with success probability

psucc = 1 − α/N
slide-60
SLIDE 60

PRO AND CONTRA

The max-likelihood state is not optimal for the fidelity. The optimal state is

|ψ⟩ ∝ Σn=0..N sin(nπ/N) |n⟩  for U(1)

|ψ⟩ ∝ Σj=0..N/2 sin(2jπ/N) |IJ⟩/√(2j + 1)  for SU(2)

and gives fidelity

Fopt = 1 − β/N²

whereas the max-likelihood state gives

F = 1 − γ/N

On the other hand, the optimal state for fidelity does not allow probabilistically perfect error correction.

slide-61
SLIDE 61

OPTIMAL MULTIROUND PROTOCOLS FOR REFERENCE FRAME ALIGNMENT

slide-62
SLIDE 62

QUANTUM GYROSCOPES

N qubits: spin-1/2 particles. A rotation g ∈ SO(3), g = (n, ϕ), acts on each particle as

Ug = e^{iϕ n·σ/2} = cos(ϕ/2) I + i sin(ϕ/2) n·σ

State change: |A⟩ ∈ H⊗N → |Ag⟩ = Ug⊗N |A⟩

The state |A⟩ can encode a spatial direction, or encode a Cartesian frame.
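These single-qubit rotations are easy to check numerically (NumPy sketch; the z-axis rotation is an illustrative choice):

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def U(n, phi):
    """U_g = cos(ϕ/2) I + i sin(ϕ/2) n·σ for a rotation g = (n, ϕ)."""
    ns = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(phi / 2) * np.eye(2) + 1j * np.sin(phi / 2) * ns

n = np.array([0.0, 0.0, 1.0])           # rotation about the z axis
Ug = U(n, np.pi / 3)
assert np.allclose(Ug @ Ug.conj().T, np.eye(2))   # unitarity
# A 2π rotation gives -I on a spin-1/2 particle:
assert np.allclose(U(n, 2 * np.pi), -np.eye(2))
```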

slide-65
SLIDE 65

ALIGNING AXES WITH QUANTUM GYROSCOPES

Suppose Alice and Bob have different Cartesian frames (different axes): a state that is |A⟩ for Alice is Ug|A⟩ for Bob. However, using quantum communication they can try to establish a shared reference frame.

Problem: find the optimal quantum state and the optimal estimation strategy for aligning Cartesian frames.

slide-69
SLIDE 69

ULTIMATE PRECISION LIMITS FOR N PARTICLES

For a quantum gyroscope made of N identical spin-1/2 particles:

c ≈ Σi=x,y,z ⟨Δθi²⟩ = 3⟨Δθx²⟩ ≈ 2π²/N²

GC, D’Ariano, Perinotti, Sacchi, PRL 93, 180503 (2004); Bagan, Baig, Muñoz-Tapia, PRA 70, 030301 (2004); Hayashi, PLA 354, 183 (2006)

However, this result is provenly optimal only if we assume that Alice sends all the particles in a single shot. In other words, it concerns protocols with a single round of forward quantum communication. What about multi-round protocols?

slide-70
SLIDE 70

MULTI-ROUND ALIGNMENT PROTOCOLS

For a quantum gyroscope made of N identical spin-1/2 particles, allow

  • an unlimited amount of classical communication
  • k rounds of quantum communication, in which batches of spin-1/2 particles are sent

Then find the best way of estimating the mismatch of alignment.

slide-72
SLIDE 72

QUANTUM COMB FORMULATION

Alice’s moves, in her description, are given by the comb S. In Bob’s description:

Sg = (Ug⊗NA→B ⊗ Ug∗⊗NB→A ⊗ IC) S (Ug†⊗NA→B ⊗ Ugτ⊗NB→A ⊗ IC)

Bob’s estimation strategy: a tester {Tĝ}.

slide-74
SLIDE 74

QUANTUM TESTERS

Quantum tester = a network beginning with a state preparation ρ0 and ending with a measurement {Pi} = a collection of positive operators {Ti}, Ti ≥ 0, with the suitable normalization

Σi Ti = deterministic comb

Born rule for quantum networks:

pi = S ∗ Ti = Tr[S Tiτ]
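A minimal numerical sketch of this Born rule: for the simplest tester (prepare ρ0, send it through the tested channel, measure {Pi}), the tester operators are Ti = Piτ ⊗ ρ0 in the output ⊗ input ordering of the Choi operator. The bit-flip channel and the computational-basis POVM below are illustrative choices:

```python
import numpy as np

d = 2
I_vec = np.eye(d).reshape(d * d)

def choi(U):
    """Choi operator of the unitary channel U, on H_out ⊗ H_in."""
    v = np.kron(U, np.eye(d)) @ I_vec
    return np.outer(v, v.conj())

# Simplest tester: prepare |0><0|, apply the tested channel, measure {P0, P1}.
rho0 = np.array([[1, 0], [0, 0]])
P = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]])]
T = [np.kron(Pi.T, rho0) for Pi in P]     # T_i = P_i^τ ⊗ ρ0

X = np.array([[0, 1], [1, 0]])            # test the bit-flip channel
S = choi(X)

# Born rule for networks: p_i = Tr[S T_i^τ]
p = [np.trace(S @ Ti.T).real for Ti in T]
# X|0> = |1>, so outcome 1 occurs with certainty.
assert np.allclose(p, [0.0, 1.0])
```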

slide-75
SLIDE 75

OPTIMALITY OF COVARIANT TESTERS

Given an invariant family of quantum combs {Sg = Wg S0 Wg† | g ∈ G} with uniform prior dg, and a left-invariant cost function c(ĝ, g), i.e. c(kĝ, kg) = c(ĝ, g) ∀k ∈ G, the optimal tester for

  • minimizing the average cost ⟨c⟩ = ∫ dg Σĝ c(ĝ, g) p(ĝ|g)
  • minimizing the worst-case cost cwc = maxg Σĝ c(ĝ, g) p(ĝ|g)

is covariant, Tĝ = (Wĝ T0 Wĝ†)τ, and the optimal average cost equals the optimal worst-case cost.

GC, G M D’Ariano, and P Perinotti, Phys. Rev. Lett. 101, 180501 (2008)

slide-76
SLIDE 76

DECOMPOSITION OF QUANTUM TESTERS

Theorem: any tester {Ti} can be split into two parts:

  • a deterministic supermap T transforming quantum combs into states: T(S) = T^{1/2} S T^{1/2}, with T = Σi Tiτ
  • an ordinary quantum measurement {Pi} on the output states

pi = S ∗ Ti = T(S) ∗ Pi = Tr[T(S) Piτ]

slide-77
SLIDE 77

OPTIMALITY PROOF FOR ONE-WAY STRATEGIES

Decomposition of the tester: a measurement on the quantum state T(S) = T^{1/2} S T^{1/2}, with T = Σĝ Tĝτ and Tĝ = (Wĝ T0 Wĝ†)τ, where

Wg = Ug⊗NA→B ⊗ Ug∗⊗NB→A ⊗ IC

Since [T, Wg] = 0 ∀g ∈ G, the output state is of the form

ρg = (Ug⊗NA→B ⊗ Ug∗⊗NB→A ⊗ IC) ρ0 (Ug⊗NA→B ⊗ Ug∗⊗NB→A ⊗ IC)†

But a state like this can be obtained in a single round!

slide-78
SLIDE 78

OPTIMALITY PROOF FOR ONE-WAY STRATEGIES

Theorem: For any multi-round protocol with Ntot = NA→B + NB→A transmitted particles, there is a protocol with a single round of forward quantum communication from Alice to Bob, using NA→B particles and NB→A charge-conjugate particles, that achieves the same average (or worst-case) cost.

GC, G M D’Ariano, and P Perinotti, Proc. QCMC 2008 (arXiv:0812.3922)

In particular,

  • for quantum clocks G = U(1)
  • for quantum gyroscopes G = SU(2)

the only thing that matters is the total number of transmitted particles.

slide-79
SLIDE 79

CONCLUSIONS

  • the optimal learning of a group transformation is “measure-and-prepare”
  • the optimal alignment of reference frames can be achieved with a single round of quantum communication
  • the proper way to solve these problems is the formalism of quantum combs and testers.