Faithful squashed entanglement
Aram Harrow, group meeting, Apr 28, 2011


Pure-state entanglement

A bipartite pure state can be written as
$|\psi\rangle_{AB} = \sum_{i=1}^{d_A} \sum_{j=1}^{d_B} c_{i,j}\, |i\rangle \otimes |j\rangle$.
The SVD gives us orthonormal bases $\{|a_i\rangle\}_i$, $\{|b_i\rangle\}_i$ and a probability distribution $\{p_i\}$ such that
$|\psi\rangle_{AB} = \sum_i \sqrt{p_i}\, |a_i\rangle \otimes |b_i\rangle$.
The reduced states are
$\psi_A = \sum_i p_i |a_i\rangle\langle a_i|$ and $\psi_B = \sum_i p_i |b_i\rangle\langle b_i|$.
The entropy of entanglement of $|\psi\rangle$ is
$E(\psi) := S(\psi_A) = -\mathrm{tr}\,\psi_A \log \psi_A = \sum_i p_i \log \frac{1}{p_i}$.

Entropy of entanglement

Let $E(\psi) = S(\psi_A)$. Then
- LOCC (local operations and classical communication) can on average only decrease $E(\psi)$.
- $E(\psi_1 \otimes \psi_2) = E(\psi_1) + E(\psi_2)$.
- $E(\psi) > 0$ if and only if $\psi$ is entangled.
- Using LOCC we can asymptotically and approximately convert $\psi^{\otimes n} \leftrightarrow \Phi_2^{\otimes n E(\psi)}$, where $|\Phi_2\rangle = (|0,0\rangle + |1,1\rangle)/\sqrt{2}$.

An analogy
- Pandora's Box: mixed-state entanglement.
- Hope: the fact that pure-state entanglement can be described by the entropy of entanglement.
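
A minimal NumPy sketch of these formulas (my own illustration, not from the slides): the singular values of the coefficient matrix $c$ are the Schmidt coefficients $\sqrt{p_i}$, and $E(\psi)$ is the Shannon entropy of $\{p_i\}$. The test states are arbitrary small examples.

```python
import numpy as np

def entropy_of_entanglement(c):
    """E(psi) for |psi> = sum_ij c[i,j] |i>|j>.

    The singular values of the coefficient matrix are the Schmidt
    coefficients sqrt(p_i); E(psi) is the entropy of {p_i} in bits.
    """
    c = c / np.linalg.norm(c)                       # normalize the state
    p = np.linalg.svd(c, compute_uv=False) ** 2     # Schmidt probabilities p_i
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

# |Phi_2> = (|0,0> + |1,1>)/sqrt(2): one ebit.
phi2 = np.array([[1.0, 0.0], [0.0, 1.0]]) / np.sqrt(2)
print(entropy_of_entanglement(phi2))                # -> 1.0

# A product state |0> ⊗ |+> has no entanglement.
prod = np.outer([1.0, 0.0], [1.0, 1.0]) / np.sqrt(2)
print(entropy_of_entanglement(prod))                # -> 0.0

# Additivity: the coefficient matrix of psi1 ⊗ psi2 is kron(c1, c2).
print(entropy_of_entanglement(np.kron(phi2, phi2))) # -> 2.0
```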

Mixed-state measures of entanglement

Entanglement cost:
$E_c(\rho) := \inf\{E : \Phi_2^{\otimes nE} \to_{\approx} \rho^{\otimes n}$ for sufficiently large $n\}$.

Distillable entanglement:
$D(\rho) := \sup\{E : \rho^{\otimes n} \to_{\approx} \Phi_2^{\otimes nE}$ for sufficiently large $n\}$.

Neither is known to be computable in finite time. Bound-entangled states exist with $D(\rho) = 0$ and $E_c(\rho) > 0$. Since 2005, we at least know that $E_c(\rho) = 0$ iff $\rho \in \mathrm{Sep}$, where
$\mathrm{Sep} = \mathrm{conv}\{\alpha \otimes \beta : \alpha, \beta \text{ are density matrices}\}$.

Squashed entanglement

Introduced by Matthias Christandl and Andreas Winter in quant-ph/0308088.

Definition (squashed entanglement):
$E_{sq}(\rho_{A:B}) = \inf\{\tfrac{1}{2} I(A;B|E)_\sigma : \mathrm{tr}_E\, \sigma_{ABE} = \rho\}$.

Definition (conditional mutual information):
$I(A;B|E) = S(A|E) + S(B|E) - S(AB|E) = S(AE) + S(BE) - S(ABE) - S(E) \ge 0$ (by strong subadditivity).

Why 1/2? Because if $\rho = |\psi\rangle\langle\psi|$, then $I(A;B)_\rho = S(\psi_A) + S(\psi_B) - S(\psi_{AB}) = 2 S(\psi_A)$.

Properties of E_sq

Desirable:
- Additive: $E_{sq}(\rho_1 \otimes \rho_2) = E_{sq}(\rho_1) + E_{sq}(\rho_2)$.
- Monogamous: $E_{sq}(\rho_{A:B_1 B_2}) \ge E_{sq}(\rho_{A:B_1}) + E_{sq}(\rho_{A:B_2})$.
- Bounded: $E_{sq}(\rho_{A:B}) \le \log \dim A$.
- $I(A;B|E) = 0$ implies the "quantum Markov property", which implies membership in Sep.
- LOCC monotone, asymptotically continuous.

Less desirable:
- Not known to be computable (the optimization over extensions may involve unbounded dimension).
- Not known to be faithful: we may have $E_{sq}(\rho) = 0$ for some $\rho \notin \mathrm{Sep}$.

Squashed examples

Example: pure entangled state
$\rho_1 = \frac{|00\rangle + |11\rangle}{\sqrt{2}} \cdot \frac{\langle 00| + \langle 11|}{\sqrt{2}}$.
Any extension $\sigma$ is of the form $\sigma = \rho_{AB} \otimes \omega_E$. Therefore $E_{sq}(\rho_1) = 1$.

Example: correlated state
$\rho_2 = \frac{|00\rangle\langle 00| + |11\rangle\langle 11|}{2}$.
Choose $\sigma = (|000\rangle\langle 000| + |111\rangle\langle 111|)/2$ to obtain $E_{sq}(\rho_2) = 0$.
(Note: $\sigma = (|000\rangle + |111\rangle)(\langle 000| + \langle 111|)/2$ doesn't work.)
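
To make the two examples concrete, here is a small NumPy sketch (my own illustration) that evaluates $\tfrac{1}{2} I(A;B|E)$ for explicit extensions of $\rho_1$ and $\rho_2$, using $I(A;B|E) = S(AE) + S(BE) - S(ABE) - S(E)$.

```python
import numpy as np

def von_neumann_entropy(rho):
    """Von Neumann entropy S(rho) in bits."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, dims, keep):
    """Reduced density matrix on the subsystems listed in `keep`."""
    n = len(dims)
    keep = sorted(keep)
    traced = [i for i in range(n) if i not in keep]
    rho = rho.reshape(dims + dims)
    # Reorder axes: kept rows, kept columns, traced rows, traced columns.
    perm = keep + [n + k for k in keep] + traced + [n + i for i in traced]
    rho = rho.transpose(perm)
    d_keep = int(np.prod([dims[k] for k in keep]))
    d_traced = int(np.prod([dims[i] for i in traced]))
    rho = rho.reshape(d_keep, d_keep, d_traced, d_traced)
    return np.trace(rho, axis1=2, axis2=3)

def cmi(sigma, dims):
    """Conditional mutual information I(A;B|E) for a state on A⊗B⊗E."""
    S = von_neumann_entropy
    return (S(partial_trace(sigma, dims, [0, 2])) + S(partial_trace(sigma, dims, [1, 2]))
            - S(sigma) - S(partial_trace(sigma, dims, [2])))

# Example 1: Bell state rho1; any extension is rho1 ⊗ omega_E, e.g. omega_E = |0><0|.
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
sigma1 = np.kron(np.outer(bell, bell), np.diag([1.0, 0.0]))
print(0.5 * cmi(sigma1, [2, 2, 2]))               # -> 1.0, so E_sq(rho1) = 1

# Example 2: correlated state rho2 with extension (|000><000| + |111><111|)/2.
sigma2 = np.zeros((8, 8)); sigma2[0, 0] = sigma2[7, 7] = 0.5
print(0.5 * cmi(sigma2, [2, 2, 2]))               # -> 0.0, so E_sq(rho2) = 0

# The GHZ extension (|000> + |111>)(<000| + <111|)/2 "doesn't work": it gives 0.5, not 0.
ghz = np.zeros(8); ghz[0] = ghz[7] = 1 / np.sqrt(2)
print(0.5 * cmi(np.outer(ghz, ghz), [2, 2, 2]))   # -> 0.5
```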

How faithful, exactly?

If $d$ is a distance measure, then define
$d(\rho, \mathrm{Sep}) := \min\{d(\rho, \sigma) : \sigma \in \mathrm{Sep}\}$.

Trace distance? Everyone's favorite distance measure is $d_{tr}(\rho, \sigma) = \tfrac{1}{2}\|\rho - \sigma\|_1$, the maximum bias with which any measurement distinguishes $\rho$ from $\sigma$. But the antisymmetric state
$\rho = \binom{n}{2}^{-1} \sum_{1 \le i < j \le n} \frac{|i,j\rangle - |j,i\rangle}{\sqrt{2}} \cdot \frac{\langle i,j| - \langle j,i|}{\sqrt{2}}$
has $d_{tr}(\rho, \mathrm{Sep}) \ge 1/2$, but $E_{sq}(\rho) \le \mathrm{const}/n$. [0910.4151]

LOCC distance! Define $d_{LOCC}(\rho, \sigma)$ to be the maximum bias with which any LOCC measurement distinguishes $\rho$ from $\sigma$. Now [1010.1750] proves
$E_{sq}(\rho) \ge \frac{1}{4 \ln 2}\, d_{LOCC}(\rho, \mathrm{Sep})^2$.

Faithfulness and monogamy go well together

Goal: optimize $\mathrm{tr}\, M\rho$ over $\rho_{A:B} \in \mathrm{Sep}$ for $M$ an LOCC measurement.

Relaxation: instead optimize over states $\rho_{AB_1}$ that can be extended to a state $\rho_{AB_1\cdots B_k}$ that is symmetric under permutation of $B_1, \ldots, B_k$ (see the sketch after this slide).

Claim: this gives error $O(\sqrt{\log \dim A / k})$.

Proof:
$\log \dim A \ge E_{sq}(\rho_{A:B_1\cdots B_k})$  (boundedness)
$\ge \sum_{i=1}^k E_{sq}(\rho_{A:B_i})$  (monogamy)
$= k \cdot E_{sq}(\rho_{A:B_1})$  (symmetry)
$\ge \frac{k}{4 \ln 2}\, d_{LOCC}(\rho_{A:B_1}, \mathrm{Sep})^2$  (faithfulness)

Note: the optimization can be performed in time $\exp(\mathrm{const} \cdot \log^2 \dim A / \epsilon^2)$.

Additional definitions needed for the proof

1. Relative entropy of entanglement:
   $E_R(\rho) = \min_{\sigma \in \mathrm{Sep}} S(\rho\|\sigma) = \min_{\sigma \in \mathrm{Sep}} \mathrm{tr}\, \rho(\log \rho - \log \sigma)$.
2. Regularized relative entropy of entanglement:
   $E_R^\infty(\rho) = \lim_{n \to \infty} \tfrac{1}{n} E_R(\rho^{\otimes n})$.
3. Hypothesis testing: we are given $\rho^{\otimes n}$ or an arbitrary separable state. In the former case we want to accept with probability $\ge 1/2$; in the latter, with probability $\le 2^{-nD}$.
4. Rate function for hypothesis testing: if $\mathcal{M}$ is a class of measurements (e.g. LOCC, LOCC$\rightarrow$, ALL), then $D_{\mathcal{M}}(\rho)$ is the largest $D$ achievable above.

Hypothesis testing (classical)

Setting: we have $n$ samples from classical distribution $p$ or from $q$, and want to accept $p$ and reject $q$.

The test: we choose a test that depends only on $p$, and that is guaranteed to accept $p$ with probability $\ge 0.99$. More concretely: if our samples are $i_1, \ldots, i_n$ and they have type $t_1, \ldots, t_d$, then we demand that $t_i \approx n p_i$.

The rate function: the probability of accepting $q^{\otimes n}$ is
$\binom{n}{np_1, \ldots, np_d} \prod_{i=1}^d q_i^{np_i} \approx \exp(-n D(p\|q))$.
Thus the rate function is $D(p\|q)$.

Example (Chernoff bound): Pinsker's inequality states that
$D(p\|q) \ge \frac{1}{2 \ln 2} \|p - q\|_1^2$.
Therefore, if we are sampling from $q$, the probability of observing a distribution $p$ with $\|p - q\|_1 = \delta$ is $\exp(-\mathrm{const} \cdot n \delta^2)$.
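
The relaxation above is a semidefinite program at each level $k$. Below is a sketch of the $k = 2$ level under my own assumptions (CVXPY with the SCS solver, real symmetric matrices for simplicity, and a toy observable $M$ standing in for an LOCC measurement): we maximize $\mathrm{tr}[(M \otimes I_{B_2})\sigma]$ over states $\sigma$ on $A \otimes B_1 \otimes B_2$ that are invariant under swapping $B_1$ and $B_2$.

```python
import numpy as np
import cvxpy as cp

dA = dB = 2
np.random.seed(0)

# A toy observable M on A⊗B1 with 0 <= M <= I (illustrative choice only).
G = np.random.randn(dA * dB, dA * dB)
M = G @ G.T
M = M / np.linalg.norm(M, 2)              # scale so the largest eigenvalue is 1

# Swap operator on B1⊗B2, extended by the identity on A.
swap = np.zeros((dB * dB, dB * dB))
for i in range(dB):
    for j in range(dB):
        swap[i * dB + j, j * dB + i] = 1.0
P = np.kron(np.eye(dA), swap)

# Variable: a permutation-symmetric extension sigma on A⊗B1⊗B2 (real for simplicity).
d = dA * dB * dB
sigma = cp.Variable((d, d), symmetric=True)
constraints = [sigma >> 0, cp.trace(sigma) == 1, P @ sigma @ P == sigma]

# Objective: tr[(M ⊗ I_{B2}) sigma] = tr[M rho_{AB1}].
objective = cp.Maximize(cp.trace(np.kron(M, np.eye(dB)) @ sigma))

prob = cp.Problem(objective, constraints)
prob.solve(solver=cp.SCS)
# The k = 2 relaxation value; it upper bounds the separable optimum in this toy setting.
print("k = 2 relaxation value:", prob.value)
```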

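A self-contained check of the classical rate function (my own illustration, with an arbitrary pair $p$, $q$): it compares the exact multinomial probability that $n$ samples from $q$ have type $\approx np$ against $2^{-nD(p\|q)}$, and evaluates the Pinsker lower bound.

```python
import numpy as np
from math import lgamma, log

def kl_bits(p, q):
    """Relative entropy D(p||q) in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def log2_type_prob(counts, q):
    """log2 of the probability that n iid samples from q have exactly these counts."""
    n = sum(counts)
    log_multinom = lgamma(n + 1) - sum(lgamma(c + 1) for c in counts)
    return (log_multinom + sum(c * log(qi) for c, qi in zip(counts, q))) / log(2)

p, q, n = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2], 1000
counts = [int(round(n * pi)) for pi in p]         # the type t_i ≈ n*p_i

print("D(p||q)                 :", kl_bits(p, q))
print("-(1/n) log2 Pr_q[type p]:", -log2_type_prob(counts, q) / n)  # ~ D(p||q) for large n
print("Pinsker bound ||p-q||_1^2 / (2 ln 2):",
      sum(abs(a - b) for a, b in zip(p, q)) ** 2 / (2 * log(2)))
```
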
Quantum hypothesis testing

Quantum relative entropy: for states $\rho, \sigma$ define
$S(\rho\|\sigma) := \mathrm{tr}\, \rho(\log \rho - \log \sigma)$.

Still equals the optimal rate function: for any $\rho$, we can design a test that accepts $\rho^{\otimes n}$ with probability $\ge 0.99$ and, for any $\sigma$, accepts $\sigma^{\otimes n}$ with probability $\approx \exp(-n S(\rho\|\sigma))$. And this is optimal.

Distinguishing against convex sets: if $K$ is a convex set, then our rate function for distinguishing $\rho$ against arbitrary $\sigma \in K$ is $\min_{\sigma \in K} S(\rho\|\sigma)$.

Application to entanglement testing: $E_R^\infty(\rho) = D_{\mathrm{ALL}}(\rho)$.

Proof outline

1. $I(A;B|E)_\rho \ge E_R^\infty(\rho_{A:BE}) - E_R^\infty(\rho_{A:E})$
2. $E_R^\infty(\rho_{A:BE}) - E_R^\infty(\rho_{A:E}) \ge D_{\mathrm{LOCC}\leftarrow}(\rho_{A:B})$
3. If $\mathcal{M}$ is a class of measurements that preserves Sep, then
   $D_{\mathcal{M}}(\rho_{A:B}) \ge \frac{1}{2 \ln 2} \min_{\sigma \in \mathrm{Sep}} \max_{M \in \mathcal{M}} d_M(\rho, \sigma)^2$.

1. Proof that $I(A;B|E)_\rho \ge E_R^\infty(\rho_{A:BE}) - E_R^\infty(\rho_{A:E})$

Lemma [H^3O]: $E_R^\infty$ is not lockable; i.e. tracing out $Q$ qubits decreases $E_R$ by at most $2Q$.

Lemma [state redistribution; Yard-Devetak]: $B$ can be sent from $A$ to $E$ using $\tfrac{1}{2} I(A;B|E)$ qubits (cf. Anup's work).

Proof: purify $\rho_{ABE}$ to get $\psi_{ABEE'}$. Tracing out $B$ means sending it to $E'$, which requires $\tfrac{1}{2} I(A;B|E')$ qubits, and which therefore reduces $E_R^\infty$ by at most $I(A;B|E') = I(A;B|E)$.

2. Proof that $E_R^\infty(\rho_{A:BE}) \ge E_R^\infty(\rho_{A:E}) + D_{\mathrm{LOCC}\leftarrow}(\rho_{A:B})$

Lemma: $E_R^\infty(\rho) = D_{\mathrm{ALL}}(\rho)$.

Proof: construct an optimal unrestricted measurement $M_1$ distinguishing $\rho_{A:E}$ from Sep and an optimal LOCC$\leftarrow$ measurement $M_2$ distinguishing $\rho_{A:B}$ from Sep. Apply $M_2$ and then $M_1$. By gentle measurement, we are still likely to accept $\rho$. And since $M_2$ doesn't create entanglement between $A$ and $E$, passing $M_2$ doesn't increase the probability of passing $M_1$.
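
The quantum relative entropy can be evaluated directly from eigendecompositions whenever $\mathrm{supp}(\rho) \subseteq \mathrm{supp}(\sigma)$. A minimal NumPy sketch (my own; the example pair is chosen so the answer is known): for the two-qubit Bell state and the separable state $\sigma = (|00\rangle\langle 00| + |11\rangle\langle 11|)/2$ it returns $S(\rho\|\sigma) = 1$, the known value of $E_R$ for a Bell state.

```python
import numpy as np

def relative_entropy(rho, sigma, tol=1e-12):
    """S(rho||sigma) = tr rho (log2 rho - log2 sigma); assumes supp(rho) ⊆ supp(sigma)."""
    def log2_on_support(m):
        evals, evecs = np.linalg.eigh(m)
        logs = np.where(evals > tol, np.log2(np.maximum(evals, tol)), 0.0)
        return (evecs * logs) @ evecs.conj().T      # sum_j log2(lambda_j) |v_j><v_j|
    diff = log2_on_support(rho) - log2_on_support(sigma)
    return float(np.real(np.trace(rho @ diff)))

# Bell state rho vs. the separable state sigma = (|00><00| + |11><11|)/2.
bell = np.zeros(4); bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)
sigma = np.diag([0.5, 0.0, 0.0, 0.5])
print(relative_entropy(rho, sigma))   # -> 1.0, the relative entropy of entanglement of rho
```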

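The "gentle measurement" step in proof 2 refers to the standard gentle measurement lemma: if a two-outcome measurement $\{M, I - M\}$ accepts $\rho$ with probability $1 - \epsilon$, then the conditional post-measurement state $\sqrt{M}\rho\sqrt{M}/\mathrm{tr}(M\rho)$ is within trace distance roughly $2\sqrt{\epsilon}$ of $\rho$. A quick numerical sanity check (my own sketch; $\rho$ and $M$ are arbitrary toy choices):

```python
import numpy as np

np.random.seed(1)
d = 4

def psd_sqrt(m):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    evals, evecs = np.linalg.eigh(m)
    return (evecs * np.sqrt(np.clip(evals, 0, None))) @ evecs.conj().T

# A random density matrix rho (toy example).
G = np.random.randn(d, d) + 1j * np.random.randn(d, d)
rho = G @ G.conj().T
rho /= np.trace(rho).real

# A measurement operator 0 <= M <= I that accepts rho with high probability:
# damp only the direction of rho's smallest eigenvector.
evals, evecs = np.linalg.eigh(rho)
v = evecs[:, [0]]                               # eigenvector with the smallest eigenvalue
M = np.eye(d) - 0.2 * (v @ v.conj().T)

eps = 1 - np.trace(M @ rho).real                # probability of rejecting rho

# Post-measurement state conditioned on accepting.
sqrtM = psd_sqrt(M)
post = sqrtM @ rho @ sqrtM
post /= np.trace(post).real

trace_dist = 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - post)))
print("epsilon:", eps)
print("trace distance:", trace_dist)
print("gentle measurement bound 2*sqrt(eps):", 2 * np.sqrt(eps))
```
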
3. Proof that $D_{\mathcal{M}}(\rho_{A:B}) \ge \frac{1}{2 \ln 2} \min_{\sigma \in \mathrm{Sep}} \max_{M \in \mathcal{M}} d_M(\rho, \sigma)^2$

Von Neumann's minimax theorem means that
$\delta := \min_{\sigma \in \mathrm{Sep}} \max_{M \in \mathcal{M}} d_M(\rho, \sigma) = \max_{M \in \mathcal{M}} \min_{\sigma \in \mathrm{Sep}} d_M(\rho, \sigma)$.
Thus, there exists a separable $M$ with
$\mathrm{tr}\, M\rho = p$ and $\mathrm{tr}\, M\sigma \le p - \delta$ for all $\sigma \in \mathrm{Sep}$.
Since $M$ is separable, it preserves Sep. Thus, applying $M$ $n$ times will distinguish $\rho^{\otimes n}$ from Sep with error $\le \exp(-\mathrm{const} \cdot n \delta^2)$.

Implications

Let $M$ be an LOCC measurement on $k$ systems, each with $n$ qubits. Estimating the optimal acceptance probability on product-state inputs is complete for $\mathrm{QMA}^{\mathrm{LOCC}}_n(k)$. This class can be reduced to $\mathrm{QMA}_{n^2 k/\epsilon^2}(1)$ by introducing error $\epsilon$.

The class $\mathrm{QMA}^{\mathrm{SEP}}_{\sqrt{n}}(2)$ (with constant error) contains 3-SAT. [1001.0017] If this result could be improved to apply to $\mathrm{QMA}^{\mathrm{LOCC}}_{\sqrt{n}}(2)$, or the simulation were improved to apply to SEP measurements, then we would have a matching bound.

This problem is useful not only for simulating multiple unentangled Merlins, but also for calculating tensor norms, such as
$\|A\|_{\mathrm{tensor}} := \max_{x,y,z \in \mathbb{C}^d} \frac{\left|\sum_{i=1}^d \sum_{j=1}^d \sum_{k=1}^d A_{i,j,k}\, x_i y_j z_k\right|}{\|x\| \, \|y\| \, \|z\|}$.

Details at MSR.
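
Computing this tensor norm exactly is hard in general, which is the point of the connection above, but alternating maximization gives cheap lower bounds: fixing two of the vectors, the optimal third is the normalized conjugate of the contraction of $A$ against them, by Cauchy-Schwarz. A heuristic sketch under my own assumptions (random complex tensor, fixed iteration and restart counts):

```python
import numpy as np

def tensor_norm_lower_bound(A, iters=200, restarts=10, seed=0):
    """Alternating maximization for max_{unit x,y,z} |sum_ijk A[i,j,k] x_i y_j z_k|.

    Returns a lower bound on ||A||_tensor (the iteration is monotone but may
    converge to a local optimum, hence the random restarts)."""
    rng = np.random.default_rng(seed)
    d1, d2, d3 = A.shape
    best = 0.0
    for _ in range(restarts):
        def rand_unit(dim):
            v = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
            return v / np.linalg.norm(v)
        x, y, z = rand_unit(d1), rand_unit(d2), rand_unit(d3)
        for _ in range(iters):
            # Each update maximizes the objective in one argument (Cauchy-Schwarz).
            x = np.conj(np.einsum('ijk,j,k->i', A, y, z)); x /= np.linalg.norm(x)
            y = np.conj(np.einsum('ijk,i,k->j', A, x, z)); y /= np.linalg.norm(y)
            z = np.conj(np.einsum('ijk,i,j->k', A, x, y)); z /= np.linalg.norm(z)
        val = abs(np.einsum('ijk,i,j,k->', A, x, y, z))
        best = max(best, float(val))
    return best

d = 3
rng = np.random.default_rng(1)
A = rng.standard_normal((d, d, d)) + 1j * rng.standard_normal((d, d, d))
print("heuristic lower bound on ||A||_tensor:", tensor_norm_lower_bound(A))
```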
