SLIDE 1
Measuring Entanglement Negativity with Neural Network Estimators

Johnnie Gray†, Leonardo Banchi∗,†, Abolfazl Bayat†, Sougato Bose†

6 Nov 2017

∗ Imperial College, London, UK † University College London, London, UK

SLIDE 2

Machine Learning Techniques in Quantum...

SLIDE 4

Quick summary

Entanglement negativity is exceptional amongst entanglement measures
  • Once the state is known, it is “easy” to compute

Entanglement negativity is difficult to measure experimentally
  • It requires full state tomography: the number of measurements grows exponentially with system size

We show that neural networks can be trained to accurately estimate the entanglement negativity with a polynomial number of measurements, using a few copies of the original system.

SLIDE 6

Entanglement

  • Entanglement is a form of quantum correlation with no classical analogue
  • Entanglement is responsible for the huge dimensionality of the quantum state space
  • It is the resource for quantum information protocols
  • But... it is very fragile!

How much entanglement do we have in our system?

SLIDE 7

Entanglement

[Figure: a tripartite system A, B, C. The global state $|\psi_{ABC}\rangle$ is pure; tracing out C leaves a mixed state $\rho_{AB}$.]

SLIDE 8

Definitions

The logarithmic negativity of a generic mixed state $\rho_{AB}$ quantifies the quantum entanglement between subsystems A and B:

$$E = \log_2 \lVert \rho_{AB}^{T_A} \rVert_1 = \log_2 \lVert \rho_{AB}^{T_B} \rVert_1 = \log_2 \sum_k |\lambda_k|,$$

where $\{\lambda_k\}$ are the eigenvalues of the partially transposed density matrix $\rho_{AB}^{T_X}$.

[Figure: the tripartite system A, B, C.]
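The definition can be checked directly on a small example. Below is a minimal NumPy sketch (not from the talk; the Werner-state example and helper names are illustrative) that computes E for a two-qubit state via the partial transpose:

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    """Partial transpose on subsystem B of a (dA*dB) x (dA*dB) density matrix."""
    return (rho.reshape(dA, dB, dA, dB)   # indices (a, b, a', b')
               .transpose(0, 3, 2, 1)     # swap b <-> b'
               .reshape(dA * dB, dA * dB))

def log_negativity(rho, dA, dB):
    """E = log2 sum_k |lambda_k| over eigenvalues of rho^{T_B}."""
    lam = np.linalg.eigvalsh(partial_transpose_B(rho, dA, dB))
    return np.log2(np.abs(lam).sum())

# Werner state: p |Phi+><Phi+| + (1 - p) I/4
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
p = 0.8
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4

# PT eigenvalues are (1+p)/4 (three times) and (1-3p)/4,
# so for p = 0.8 the trace norm is 1.7 and E = log2(1.7)
print(round(log_negativity(rho, 2, 2), 4))  # -> 0.7655
```

For a product state (e.g. the maximally mixed state) the PT eigenvalues are all non-negative, the trace norm is 1, and E = 0.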

SLIDE 10

Eigenvalues of partially transposed density matrices

  • The $\{\lambda_k\}$ are the roots of the characteristic polynomial $P(\lambda) = \det(\lambda \mathbb{1} - \rho_{AB}^{T_B}) = \sum_n c_n \lambda^n$, where each coefficient $c_n$ is a polynomial function of the partially transposed (PT) moments:

$$\mu_m = \mathrm{Tr}\big[(\rho_{AB}^{T_B})^m\big] = \sum_k \lambda_k^m.$$

Full information about the spectrum $\{\lambda_k\}$ is contained in $\{\mu_m\}$.

  • Since $-\frac{1}{2} \le \lambda_k \le 1$, the magnitude of the moments quickly decreases with m.

The first few moments carry the most significance, but... the relationship between moments and spectrum is unknown.
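The moment identity $\mu_m = \sum_k \lambda_k^m$ and the decay with m can be verified numerically; a minimal sketch, assuming a random two-qubit mixed state (the Wishart construction here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# random two-qubit mixed state (Wishart construction)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)

# partial transpose on B
rho_t = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
lam = np.linalg.eigvalsh(rho_t)

# mu_m = Tr[(rho^{T_B})^m] agrees with sum_k lambda_k^m for each m
mus = [np.trace(np.linalg.matrix_power(rho_t, m)).real for m in range(1, 7)]
for m, mu in zip(range(1, 7), mus):
    assert abs(mu - (lam ** m).sum()) < 1e-12
    print(f"mu_{m} = {mu:+.6f}")

# since |lambda_k| <= 1, higher moments shrink: sum lambda^6 <= sum lambda^2
assert mus[5] <= mus[1] + 1e-12
```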

SLIDE 12

Schematics of the protocol

$$\mu_m = \mathrm{Tr}\big[(\rho_{AB}^{T_B})^m\big] = \mathrm{Tr}\Big[\Big(\bigotimes_{c=1}^{m} \rho_{A_c B_c}\Big)\,(P_m)^{T_B}\Big],$$

where $P_m$ is any linear combination of cyclic permutation operators of order m.

[Figure: (a) example set-up for the measurement of the moments (m = 3); (b) equivalence between the moments $\mu_m$ and the expectation of two opposite permutations on A and B.]
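This replica identity can be checked numerically for m = 3: taking $P_m$ to be the cyclic permutation, the partial transpose turns the forward cycle on the B copies into a backward cycle, so $\mu_3$ equals the expectation of opposite cyclic shifts on the A and B copies. A NumPy sketch (illustrative, not the talk's code):

```python
import numpy as np

def cyclic_shift(n, d, forward=True):
    """Permutation matrix on n qudits: |i1 i2 ... in> -> |i2 ... in i1> (forward)."""
    dims = [d] * n
    P = np.zeros((d ** n, d ** n))
    for idx in range(d ** n):
        digits = np.unravel_index(idx, dims)
        out = digits[1:] + digits[:1] if forward else digits[-1:] + digits[:-1]
        P[np.ravel_multi_index(out, dims), idx] = 1.0
    return P

rng = np.random.default_rng(7)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)

# direct computation of mu_3 from the partial transpose
rho_t = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
mu3_direct = np.trace(np.linalg.matrix_power(rho_t, 3)).real

# replica computation: three copies ordered (A1 B1 A2 B2 A3 B3),
# regrouped to (A1 A2 A3 B1 B2 B3)
rho3 = np.kron(np.kron(rho, rho), rho).reshape([2] * 12)
rho3 = rho3.transpose(0, 2, 4, 1, 3, 5, 6, 8, 10, 7, 9, 11).reshape(64, 64)

# opposite cyclic permutations on the A copies and the B copies
W = np.kron(cyclic_shift(3, 2, forward=True), cyclic_shift(3, 2, forward=False))
mu3_replica = np.trace(rho3 @ W).real

assert abs(mu3_direct - mu3_replica) < 1e-10
print("mu_3 =", round(mu3_direct, 6))
```

Using the same-direction cycle on both A and B would instead give the ordinary moment $\mathrm{Tr}[\rho_{AB}^m]$; the opposite directions are what implement the partial transpose.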
SLIDE 16

Measuring the moments in spin lattices

(i) Prepare m copies of the state $\rho_{AB}$;
(ii) Sequentially measure a ‘forward’ sequence of adjacent swaps $S^{A}_{c,c+1}$ between neighbouring copies of system A, from c = 1 to m − 1;
(iii) Sequentially measure a ‘backward’ sequence of adjacent swaps $S^{B}_{c,c-1}$ between neighbouring copies of system B, from c = m to 2;
(iv) Repeat these steps in order to yield an expectation value.

[Figure: (a) set-up with m copies of A, B, C; (b) the resulting pattern of swap measurements.]

SLIDE 17

Measuring the moments in bosonic lattices

(i) Prepare m copies of the state $\rho_{AB}$;
(ii) Perform ‘forward’ Fourier transforms between modes in different copies for each site in A; this can be achieved using a series of beam splitters;
(iii) Perform ‘backward’ (reverse) Fourier transforms between modes in different copies for each site in B, via reverse beam-splitter transformations;
(iv) Measure the boson occupation numbers $n_{j,c}$ on all sites $j \in \{A, B\}$ and all copies c to compute $\phi = e^{i \sum_{j \in \{A,B\},\,c} 2\pi c\, n_{j,c}/m}$;
(v) Repeat these steps to obtain the expectation value $\mu_m$ as an average of $\phi$.
SLIDE 20

Estimating the negativity

Problem: we can measure the moments $\mu_m$ using a polynomial number of measurements, $O[m(N_A + N_B)]$. But how do we accurately estimate the negativity?

Analytical theory: Chebyshev functional approximation

$$E = \log_2 \mathrm{Tr}\, f(\rho_{AB}^{T_B}), \qquad f(x) = |x|.$$

Idea: if we can find a polynomial expansion $f(x) \approx \sum_{m=0}^{M} \alpha_m x^m$, then

$$E = \log_2 \sum_{m=0}^{M} \alpha_m \mu_m.$$

Chebyshev expansion: $f(x) \approx \sum_{m=0}^{M} t_m T_m(x)$, where the $t_m$ are known via orthogonality. The quality increases with M and becomes exact in the limit $M \to \infty$.
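The whole pipeline (interpolate |x|, convert to power-series coefficients $\alpha_m$, contract with the moments) can be sketched with NumPy's Chebyshev utilities. The Werner-state example is illustrative, not from the talk:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

M = 20

# Chebyshev interpolant of f(x) = |x| on [-1, 1], then its power-series
# coefficients alpha_m (the conversion is well behaved at this moderate degree)
t = cheb.Chebyshev.interpolate(np.abs, deg=M, domain=[-1, 1])
alpha = cheb.cheb2poly(t.coef)

# Werner state p|Phi+><Phi+| + (1-p) I/4 and its partial transpose on B
p = 0.8
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = p * np.outer(phi, phi) + (1 - p) * np.eye(4) / 4
rho_t = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

# PT moments mu_m = Tr[(rho^{T_B})^m], m = 0..M  (mu_0 = Hilbert-space dim)
mu = np.array([np.trace(np.linalg.matrix_power(rho_t, m)).real
               for m in range(M + 1)])

E_cheb = np.log2(alpha @ mu)   # estimate using only the moments
E_exact = np.log2(np.abs(np.linalg.eigvalsh(rho_t)).sum())
print(f"E_cheb = {E_cheb:.4f}, E_exact = {E_exact:.4f}")
```

Here the PT eigenvalues sit well away from the kink of |x| at zero, so the degree-20 estimate is already close; states with PT eigenvalues near zero need larger M, consistent with the numerical results on the next slide.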

SLIDE 21

Numerical results with Chebyshev approximation

[Figure: Chebyshev estimate $E^{\mathrm{Cheb}}_M$ versus the exact negativity E, and the error $E^{\mathrm{Cheb}}_M - E$, for (a) M = 10 and (b) M = 20; legend: D = 2, 4, 8, 16, 32, 64, and Random.]

SLIDE 23

Estimating the negativity

The (unknown) relationship between PT moments and entanglement is inherently non-linear.

Universal approximation theorem: a feed-forward neural network (even with a single hidden layer) can approximate continuous functions on compact subsets of $\mathbb{R}^n$.

[Figure: network schematic with an input layer of moments $\mu_0, \mu_1, \mu_2, \mu_3$, hidden layers, and an output layer producing the negativity.]

SLIDE 25

Neural network training

Training is performed by generating a large set of random states for which $\mu_m$ and E can be computed on a classical computer.

Random states used for training:
  • (Haar) random states: typically volume-law entanglement
  • random matrix product states: area-law entanglement by construction

No prior knowledge of the underlying physics is assumed.

Numerical simulations implemented in Keras, using Hyperopt to optimize over the network structure (choice of hidden layers and activation functions).
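The training-set generation described above can be sketched as follows; this is a minimal NumPy version in which the subsystem sizes, sample count, and helper names are illustrative, and the matrix-product-state branch is omitted. The network fit itself would follow with Keras, as stated on the slide:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB, dC, M = 2, 2, 4, 4   # subsystem dims; features are mu_1..mu_M

def random_rho_AB():
    """Haar-random pure state on ABC, traced over C -> mixed rho_AB."""
    psi = rng.normal(size=dA * dB * dC) + 1j * rng.normal(size=dA * dB * dC)
    psi /= np.linalg.norm(psi)
    W = psi.reshape(dA * dB, dC)
    return W @ W.conj().T

def features_and_label(rho):
    rho_t = (rho.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1)
                .reshape(dA * dB, dA * dB))
    lam = np.linalg.eigvalsh(rho_t)
    mus = [(lam ** m).sum() for m in range(1, M + 1)]  # mu_m = sum_k lambda_k^m
    return mus, np.log2(np.abs(lam).sum())             # label: exact negativity

data = [features_and_label(random_rho_AB()) for _ in range(500)]
X = np.array([d[0] for d in data])
y = np.array([d[1] for d in data])
print(X.shape, y.shape)   # (500, 4) (500,)
```

A regressor mapping X to y (e.g. a small Keras feed-forward network) can then be trained, with Hyperopt searching over depth and activations as on the slide.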

SLIDE 26

Numerical results

[Figure: neural-network estimate $E^{\mathrm{ML}}_M$ versus the exact negativity E, and the error $E^{\mathrm{ML}}_M - E$, for (a) M = 3 and (b) M = 10; legend: D = 2, 4, 8, 16, 32, 64, and Random.]

  • M = 3: two hidden layers, ReLU activation, 100 and 56 hidden neurons
  • M = 10: three hidden layers, ELU and ReLU activations, 61, 87 and 47 neurons

SLIDE 27

Numerical results

Quench dynamics: $|\Psi(t)\rangle = e^{-iHt}|\Psi(0)\rangle$, where $H = J \sum_{i=1}^{N-1} \boldsymbol{\sigma}_i \cdot \boldsymbol{\sigma}_{i+1}$ and $|\Psi(0)\rangle = |{\uparrow\downarrow\uparrow\ldots}\rangle$.

[Figure: exact E, neural-network estimate $E^{\mathrm{ML}}_{M=3}$, and Chebyshev estimates $E^{\mathrm{Cheb}}_{M=10}$, $E^{\mathrm{Cheb}}_{M=20}$ versus Jt, for (a) $N_A = 1, N_B = 1, N_C = 3$; (b) $N_A = 2, N_B = 2, N_C = 4$; (c) $N_A = 3, N_B = 5, N_C = 3$; (d) $N_A = 5, N_B = 5, N_C = 10$.]
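Panel (a) of this quench ($N_A = N_B = 1$, $N_C = 3$, so N = 5 spins) is small enough to reproduce by exact diagonalization; a minimal NumPy sketch (time points and helper names are illustrative):

```python
import numpy as np

# Pauli matrices and Heisenberg chain H = J sum_i sigma_i . sigma_{i+1}
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def site_op(op, i, N):
    """op acting on site i of an N-site chain (identity elsewhere)."""
    ops = [I2] * N
    ops[i] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

N, J = 5, 1.0
H = sum(J * site_op(s, i, N) @ site_op(s, i + 1, N)
        for i in range(N - 1) for s in (sx, sy, sz))

# Neel initial state |up down up down up>
psi0 = np.zeros(2 ** N, dtype=complex)
psi0[int("01010", 2)] = 1.0   # bit 0 = up, 1 = down, site 0 most significant

# evolve by exact diagonalization; E of the first two sites (A = 1, B = 1 spin)
w, V = np.linalg.eigh(H)
def negativity_at(t):
    psi = V @ (np.exp(-1j * w * t) * (V.conj().T @ psi0))
    Wm = psi.reshape(4, 2 ** (N - 2))   # split (AB | C)
    rho = Wm @ Wm.conj().T
    rho_t = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return np.log2(np.abs(np.linalg.eigvalsh(rho_t)).sum())

for t in (0.0, 1.0, 2.0):
    print(f"Jt = {t:.1f}: E = {negativity_at(t):.4f}")
```

At t = 0 the Neel state is a product state, so E = 0; entanglement between A and B then builds up under the quench, as in panel (a).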

SLIDE 28

Conclusions

  • Accurate estimation of entanglement negativity using a few PT moments
  • Estimation with a polynomial number of measurements (compared with the exponential complexity of full tomography)
  • Two different experimental schemes to measure these moments with current technology:
    • optical lattices
    • quantum dots
    • ...
  • Neural networks provide the most accurate and efficient estimation: few-% errors with just three PT moments

J. Gray, L. Banchi, A. Bayat, S. Bose, arXiv:1709.04923