
Measuring Entanglement Negativity with Neural Network Estimators - PowerPoint PPT Presentation



  1. Measuring Entanglement Negativity with Neural Network Estimators
  Johnnie Gray†, Leonardo Banchi∗,†, Abolfazl Bayat†, Sougato Bose†
  6 Nov 2017
  ∗ Imperial College, London, UK
  † University College London, London, UK

  2. Machine Learning Techniques in Quantum...

  4. Quick summary
  Entanglement negativity is exceptional amongst entanglement measures:
  • Once the state is known, it is "easy" to compute
  Entanglement negativity is difficult to measure experimentally:
  • It requires full state tomography; the number of measurements grows exponentially with system size
  We show that neural networks can be trained to accurately estimate the entanglement negativity with a polynomial number of measurements, using a few copies of the original system.

  6. Entanglement
  • Entanglement is a form of quantum correlation with no classical analogue
  • Entanglement is responsible for the huge dimensionality of the quantum state space
  • It is the resource for quantum information protocols
  • But... it is very fragile!
  How much entanglement do we have in our system?

  7. Entanglement
  [Diagram: a pure tripartite state |ψ_ABC⟩; tracing out subsystem C leaves the mixed state ρ_AB shared between A and B]

  8. Definitions
  Logarithmic negativity for a generic mixed state ρ_AB quantifies the quantum entanglement between subsystems A and B:
      E = \log_2 \| \rho_{AB}^{T_A} \|_1 = \log_2 \| \rho_{AB}^{T_B} \|_1 = \log_2 \sum_k |\lambda_k|
  where {λ_k} are the eigenvalues of ρ_AB^{T_X}.
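
As a concrete illustration (ours, not from the deck), a few lines of NumPy compute E exactly from the eigenvalues of the partial transpose; partial_transpose_B and log_negativity are our own helper names:

```python
import numpy as np

def partial_transpose_B(rho, dA, dB):
    """Partial transpose of rho on H_A (x) H_B with respect to B."""
    t = rho.reshape(dA, dB, dA, dB)                            # indices (a, b, a', b')
    return t.transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)   # swap b <-> b'

def log_negativity(rho, dA, dB):
    """E = log2 sum_k |lambda_k|, with {lambda_k} the eigenvalues of rho^{T_B}."""
    lam = np.linalg.eigvalsh(partial_transpose_B(rho, dA, dB))
    return np.log2(np.abs(lam).sum())

# Example: a two-qubit Bell state carries one ebit of entanglement, E = 1
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
print(log_negativity(np.outer(bell, bell), 2, 2))   # 1.0
```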

  10. Eigenvalues of partially transposed density matrices
  • The {λ_k} are the roots of the characteristic polynomial
      P(\lambda) = \det(\lambda \mathbb{1} - \rho_{AB}^{T_B}) = \sum_n c_n \lambda^n,
  where each coefficient c_n is a polynomial function of the partially transposed (PT) moments:
      \mu_m = \mathrm{Tr}[(\rho_{AB}^{T_B})^m] = \sum_k \lambda_k^m
  Full information about the spectrum {λ_k} is contained in {μ_m}.
  • Since −1/2 ≤ λ_k ≤ 1, the magnitude of the moments quickly decreases with m.
  The first few moments carry the most significance, but... the relationship between moments and spectrum is unknown.
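
A short self-contained sketch (again ours) confirms numerically that the two expressions for μ_m agree, and that the moments shrink with m:

```python
import numpy as np

# random two-qubit mixed state
rng = np.random.default_rng(0)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)

# partial transpose on B (as in the earlier sketch), then its spectrum
rho_tb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
lam = np.linalg.eigvalsh(rho_tb)
for m in range(1, 6):
    mu = np.trace(np.linalg.matrix_power(rho_tb, m)).real
    print(m, mu, (lam ** m).sum())    # Tr[(rho^{T_B})^m] == sum_k lambda_k^m
# mu_1 = Tr[rho] = 1; since -1/2 <= lambda_k <= 1, |mu_m| shrinks as m grows
```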

  12. Schematics of the protocol
      \mu_m = \mathrm{Tr}[(\rho_{AB}^{T_B})^m] = \mathrm{Tr}\Big[\Big(\bigotimes_{c=1}^{m} \rho_{A_c B_c}\Big) (P_m)^{T_B}\Big]
  P_m is any linear combination of cyclic permutation operators of order m.
  [Figure: (a) example set-up for the measurement of the moments (m = 3); (b) equivalence between the moments μ_m and the expectation value of two opposite cyclic permutations acting on the copies of A and of B]
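
Below is a minimal numerical check of this identity (our own construction, for one-qubit A and B and m = 3, using the "two opposite cycles" form of panel (b)); the copy ordering and cycle directions are assumptions consistent with that panel:

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = G @ G.conj().T
rho /= np.trace(rho)

m = 3
rho_tb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)   # T_B
lhs = np.trace(np.linalg.matrix_power(rho_tb, m)).real                 # Tr[(rho^{T_B})^m]

def cycle_op(n, shift):
    """Permutation matrix on n qubits sending the content of slot c to slot c + shift (mod n)."""
    d = 2 ** n
    P = np.zeros((d, d))
    for idx in range(d):
        bits = [(idx >> (n - 1 - k)) & 1 for k in range(n)]
        new = [bits[(k - shift) % n] for k in range(n)]
        P[sum(b << (n - 1 - k) for k, b in enumerate(new)), idx] = 1.0
    return P

# rho^{(x)3} is built on qubit order (A1,B1,A2,B2,A3,B3), then reordered to (A1,A2,A3,B1,B2,B3)
rho3 = np.kron(np.kron(rho, rho), rho).reshape((2,) * 12)
perm = [0, 2, 4, 1, 3, 5]
rho3 = rho3.transpose(perm + [p + 6 for p in perm]).reshape(64, 64)

# opposite cyclic permutations: forward on the copies of A, backward on the copies of B
P3 = np.kron(cycle_op(m, +1), cycle_op(m, -1))
rhs = np.trace(rho3 @ P3).real
print(np.isclose(lhs, rhs))   # True
```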

  16. Measuring the moments in spin lattices
  (i) Prepare m copies of the state ρ_AB;
  (ii) Sequentially measure a 'forward' sequence of adjacent swaps S_A^{c,c+1} between neighbouring copies of system A, from c = 1 to m − 1;
  (iii) Sequentially measure a 'backward' sequence of adjacent swaps S_B^{c,c−1} between neighbouring copies of system B, from c = m to 2;
  (iv) Repeat these steps in order to yield an expectation value.

  17. Measuring the moments in bosonic lattices
  (i) Prepare m copies of the state ρ_AB;
  (ii) Perform 'forward' Fourier transforms between modes in different copies for each site in A; this can be achieved using a series of beam splitters;
  (iii) Perform 'backward' (reverse) Fourier transforms between modes in different copies for each site in B, via reverse beam-splitter transformations;
  (iv) Measure the boson occupation numbers n_{j,c} on all sites j ∈ {A, B} and all copies c to compute
      \varphi = \exp\Big(i \sum_{j \in \{A,B\},\, c} 2\pi c\, n_{j,c} / m\Big);
  (v) Repeat these steps to obtain the expectation value μ_m as an average of φ.
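
For step (iv), computing φ from one shot of measured occupation numbers is a one-liner; the occupations below are made-up sample data:

```python
import numpy as np

m = 3
# hypothetical occupations n[j, c-1] from one shot: rows are sites j in A and B, columns are copies c
n = np.array([[0, 1, 0],
              [2, 0, 1],
              [1, 1, 0]])
c = np.arange(1, m + 1)
phi = np.exp(2j * np.pi * (n * c).sum() / m)
# averaging phi over many repetitions of steps (i)-(iv) estimates mu_m
```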

  20. Estimating the negativity
  Problem: OK, we can measure the moments μ_m using a polynomial number of measurements, O[m(N_A + N_B)]. But how do we accurately estimate the negativity?
  Analytical theory: Chebyshev functional approximation
      E = \log_2 \mathrm{Tr}\, f(\rho_{AB}^{T_B}), \qquad f(x) = |x|
  Idea: if we can find a polynomial expansion f(x) ≈ \sum_{m=0}^{M} \alpha_m x^m, then
      E = \log_2 \sum_{m=0}^{M} \alpha_m \mu_m
  Chebyshev expansion: f(x) ≈ \sum_{m=0}^{M} t_m T_m(x), where the coefficients t_m are known via orthogonality. The quality of the approximation increases with M and becomes exact in the limit M → ∞.
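
The coefficients are easy to generate numerically. The sketch below (ours, using NumPy's Chebyshev utilities) interpolates |x| at degree M, converts to power-series coefficients α_m, and recovers E ≈ 1 for a Bell state from its first M + 1 moments; note that μ_0 = Tr[(ρ^{T_B})^0] is the dimension of ρ_AB:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

M = 20
t = cheb.chebinterpolate(np.abs, M)   # t_m with f(x) ~ sum_m t_m T_m(x) on [-1, 1]
alpha = cheb.cheb2poly(t)             # power-series coefficients alpha_m

# moments of a two-qubit Bell state (exact E = 1); mu_0 = Tr[identity] = 4
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
rho_tb = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
mu = np.array([np.trace(np.linalg.matrix_power(rho_tb, k)).real for k in range(M + 1)])

E_cheb = np.log2(alpha @ mu)
print(E_cheb)   # close to 1, improving as M grows
```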

  21. Numerical results with Chebyshev approximation
  [Figure: Chebyshev estimate E_M^{Cheb} against the exact negativity E for random states of local dimension D = 2, 4, 8, 16, 32, 64; panels (a) M = 10 and (b) M = 20, with insets showing the error E_M^{Cheb} − E]

  23. Estimating the negativity
  The (unknown) relationship between PT moments and entanglement is inherently non-linear.
  Universal approximation theorem: a feed-forward neural network (even with a single hidden layer) can approximate continuous functions on compact subsets of R^n.
  [Diagram: feed-forward network taking the moments μ_0, μ_1, μ_2, μ_3 as the input layer, passing them through two hidden layers, and outputting the negativity]
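
A minimal Keras sketch of such an estimator (our own illustration; the layer sizes copy the tuned M = 3 architecture quoted on slide 26, everything else is assumed):

```python
from tensorflow import keras

# inputs: the measured moments mu_0 ... mu_3 (M = 3); output: the estimated negativity E
model = keras.Sequential([
    keras.layers.Input(shape=(4,)),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(56, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X, y, epochs=..., validation_split=...) with the (moments, negativity)
# training pairs generated as on slide 25
```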

  25. Neural network training
  Training is performed by generating a large set of random states for which μ_m and E can be computed on a classical computer.
  Random states used for training:
  • (Haar) random states: typically volume-law entanglement
  • Random matrix product states: area-law entanglement by construction
  No prior knowledge of the underlying physics is used.
  Numerical simulations are implemented in Keras, using Hyperopt to optimize over the network structure (choice of hidden layers and activation functions).
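
The Haar-random half of this pipeline can be sketched in a few lines (our own code; the dimensions and sample count are illustrative, and the random-MPS half is omitted):

```python
import numpy as np

def haar_state(dim, rng):
    """Haar-random pure state: a normalised complex Gaussian vector."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

rng = np.random.default_rng(42)
dA = dB = dC = 2          # illustrative local dimensions
M = 3                     # highest moment fed to the network (mu_0 .. mu_3 -> 4 inputs)
X, y = [], []
for _ in range(10_000):
    psi = haar_state(dA * dB * dC, rng)
    rho_full = np.outer(psi, psi.conj())
    # trace out C to obtain the mixed state rho_AB
    rho_AB = np.trace(rho_full.reshape(dA * dB, dC, dA * dB, dC), axis1=1, axis2=3)
    # partial transpose on B, then its spectrum
    rho_tb = rho_AB.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    lam = np.linalg.eigvalsh(rho_tb)
    X.append([(lam ** k).sum() for k in range(M + 1)])   # features: PT moments mu_0 .. mu_M
    y.append(np.log2(np.abs(lam).sum()))                 # label: exact negativity E
X, y = np.array(X), np.array(y)
```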

  26. Numerical results
  [Figure: neural-network estimate E_M^{ML} against the exact negativity E for random states of local dimension D = 2, 4, 8, 16, 32, 64; panels (a) M = 3 and (b) M = 10, with insets showing the error E_M^{ML} − E]
  • M = 3: two hidden layers, ReLU activations, 100 and 56 hidden neurons
  • M = 10: three hidden layers, ELU and ReLU activations, 61, 87 and 47 neurons

  27. Numerical results
  Quench dynamics:
      |\Psi(t)\rangle = e^{-iHt} |\Psi(0)\rangle, \qquad H = J \sum_{i=1}^{N-1} \vec{\sigma}_i \cdot \vec{\sigma}_{i+1}, \qquad |\Psi(0)\rangle = |{\uparrow\downarrow\uparrow\ldots}\rangle
  [Figure: negativity against Jt, comparing E_{M=3}^{ML}, E_{M=10}^{Cheb}, E_{M=20}^{Cheb} and the exact E for the partitions (a) N_A = 1, N_B = 1, N_C = 3; (b) N_A = 2, N_B = 2, N_C = 4; (c) N_A = 3, N_B = 5, N_C = 3; (d) N_A = 5, N_B = 5, N_C = 10]
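
A small exact-diagonalisation sketch of this quench (ours; it assumes A and B are the leading contiguous blocks of the chain, matching the partitions shown) reproduces the exact E(t) curve that the estimators are compared against:

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)

def op_on(site_ops, N):
    """Tensor product placing the given single-site operators, identities elsewhere."""
    return reduce(np.kron, [site_ops.get(i, I2) for i in range(N)])

N_A, N_B, N_C = 1, 1, 3                      # panel (a); N = 5 spins in total
N = N_A + N_B + N_C
J = 1.0
H = J * sum(op_on({i: s, i + 1: s}, N) for i in range(N - 1) for s in (sx, sy, sz))

up, dn = np.array([1.0, 0]), np.array([0, 1.0])
psi0 = reduce(np.kron, [up if i % 2 == 0 else dn for i in range(N)])   # Neel state

dA, dB, dC = 2 ** N_A, 2 ** N_B, 2 ** N_C
for Jt in np.linspace(0.0, 5.0, 11):
    psi = expm(-1j * Jt * H) @ psi0
    rho = np.outer(psi, psi.conj())
    # trace out C, partial-transpose on B, read off the exact negativity
    rho_AB = np.trace(rho.reshape(dA * dB, dC, dA * dB, dC), axis1=1, axis2=3)
    rho_tb = rho_AB.reshape(dA, dB, dA, dB).transpose(0, 3, 2, 1).reshape(dA * dB, dA * dB)
    E = np.log2(np.abs(np.linalg.eigvalsh(rho_tb)).sum())
    print(f"Jt = {Jt:.1f}  E = {E:.4f}")
```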
