SLIDE 3 How faithful, exactly?
If $d$ is a distance measure, define $d(\rho, \mathrm{Sep}) := \min\{d(\rho, \sigma) : \sigma \in \mathrm{Sep}\}$.

Trace distance? Everyone's favorite distance measure is
$$d_{\mathrm{tr}}(\rho, \sigma) = \tfrac{1}{2}\|\rho - \sigma\|_1 = \text{maximum } \rho\text{-vs-}\sigma \text{ bias of any measurement}.$$
But the antisymmetric state
$$\rho = \frac{1}{\binom{n}{2}} \sum_{i<j} \frac{|i,j\rangle - |j,i\rangle}{\sqrt{2}} \cdot \frac{\langle i,j| - \langle j,i|}{\sqrt{2}}$$
has $d_{\mathrm{tr}}(\rho, \mathrm{Sep}) \geq 1/2$, but $E_{\mathrm{sq}}(\rho) \leq \mathrm{const}/n$. [0910.4151]

LOCC distance! Define $d_{\mathrm{LOCC}}(\rho, \sigma)$ to be the maximum $\rho$-vs-$\sigma$ bias of any LOCC measurement. Now [1010.1750] proves
$$E_{\mathrm{sq}}(\rho) \geq \frac{1}{4\ln 2}\, d_{\mathrm{LOCC}}(\rho, \mathrm{Sep})^2.$$
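As a numerical sanity check on the trace-distance definition (a sketch with hypothetical helper names, not code from the talk): $d_{\mathrm{tr}}(\rho,\sigma) = \tfrac{1}{2}\|\rho-\sigma\|_1$ equals the bias $\mathrm{tr}\,M(\rho-\sigma)$ of the optimal measurement operator $0 \leq M \leq I$, which is achieved by the projector onto the positive eigenspace of $\rho - \sigma$.

```python
# Sketch: trace distance equals the optimal measurement bias.
# Helper names (trace_distance, optimal_bias) are illustrative.
import numpy as np

def trace_distance(rho, sigma):
    """(1/2) * sum of absolute eigenvalues of rho - sigma."""
    eigs = np.linalg.eigvalsh(rho - sigma)
    return 0.5 * np.sum(np.abs(eigs))

def optimal_bias(rho, sigma):
    """Bias of the projector onto the positive part of rho - sigma."""
    vals, vecs = np.linalg.eigh(rho - sigma)
    pos = vecs[:, vals > 0]
    P = pos @ pos.conj().T
    return np.real(np.trace(P @ (rho - sigma)))

# Example: a pure qubit state vs. the maximally mixed state.
rho = np.array([[1, 0], [0, 0]], dtype=complex)
sigma = np.eye(2) / 2

print(trace_distance(rho, sigma))  # 0.5
print(optimal_bias(rho, sigma))    # 0.5
```

The two quantities agree for any pair of states; the LOCC distance below is the same optimization restricted to measurements implementable by LOCC.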
Faithfulness and Monogamy go well together
Goal: optimize $\mathrm{tr}\, M\rho$ over $\rho_{A:B} \in \mathrm{Sep}$, for $M$ a LOCC measurement.

Relaxation: Instead optimize over states $\rho_{AB_1}$ that can be extended to a state $\rho_{AB_1\cdots B_k}$ that is symmetric under permutation of $B_1, \ldots, B_k$.

Claim: This gives error $O\bigl(\sqrt{\log \dim A / k}\bigr)$.

Proof:
$$\log \dim A \;\overset{\text{boundedness}}{\geq}\; E_{\mathrm{sq}}(\rho_{A:B_1\cdots B_k}) \;\overset{\text{monogamy}}{\geq}\; \sum_{i=1}^{k} E_{\mathrm{sq}}(\rho_{A:B_i}) \;\overset{\text{symmetry}}{=}\; k \cdot E_{\mathrm{sq}}(\rho_{A:B_1}) \;\overset{\text{faithfulness}}{\geq}\; \frac{k}{4 \ln 2}\, d_{\mathrm{LOCC}}(\rho_{A:B_1}, \mathrm{Sep})^2.$$

Note: the optimization can be performed in time $\exp\bigl(O(\epsilon^{-2} \log|A| \log|B|)\bigr)$.
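Rearranging the chain above gives the quantitative tradeoff $d_{\mathrm{LOCC}}(\rho_{A:B_1}, \mathrm{Sep}) \leq \sqrt{4 \ln 2 \cdot \log \dim A / k}$. A small sketch of the resulting arithmetic (function names are ours, purely illustrative):

```python
# Illustrative arithmetic for the error-vs-k tradeoff implied by
# boundedness + monogamy + symmetry + faithfulness.
import math

def error_bound(dim_A, k):
    """Upper bound on d_LOCC(rho_{A:B1}, Sep) for a k-extendable state."""
    return math.sqrt(4 * math.log(2) * math.log2(dim_A) / k)

def extensions_needed(dim_A, eps):
    """Smallest k guaranteeing error at most eps."""
    return math.ceil(4 * math.log(2) * math.log2(dim_A) / eps**2)

print(error_bound(dim_A=2**10, k=1000))        # ~0.167
print(extensions_needed(dim_A=2**10, eps=0.1))  # 2773
```

Note that $k$ grows only as $\log \dim A / \epsilon^2$, which is what makes the relaxation quasipolynomial rather than exponential.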
Additional definitions needed for proof
1. Relative entropy of entanglement:
   $E_R(\rho) = \min_{\sigma \in \mathrm{Sep}} S(\rho\|\sigma) = \min_{\sigma \in \mathrm{Sep}} \mathrm{tr}\,\rho(\log\rho - \log\sigma)$.
2. Regularized relative entropy of entanglement:
   $E_R^{\infty}(\rho) = \lim_{n\to\infty} \frac{1}{n} E_R(\rho^{\otimes n})$.
3. Hypothesis testing: We are given $\rho^{\otimes n}$ or an arbitrary separable state. In the former case, we want to accept with probability $\geq 1/2$; in the latter, with probability $\leq 2^{-nD}$.
4. Rate function for hypothesis testing: If $M$ is a class of measurements (e.g. LOCC, LOCC$_\to$, ALL), then $D_M(\rho)$ is the largest $D$ achievable above.
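The inner quantity in definition 1 can be evaluated numerically for full-rank states via eigendecomposition (a sketch; the helper names are ours, and the minimization over $\mathrm{Sep}$ is not attempted here, only $S(\rho\|\sigma)$ for a fixed $\sigma$):

```python
# Sketch: quantum relative entropy S(rho||sigma) in bits,
# for Hermitian, full-rank density matrices.
import numpy as np

def log2m(rho):
    """Matrix base-2 logarithm via eigendecomposition (Hermitian input)."""
    vals, vecs = np.linalg.eigh(rho)
    return vecs @ np.diag(np.log2(vals)) @ vecs.conj().T

def relative_entropy(rho, sigma):
    """S(rho||sigma) = tr rho (log rho - log sigma)."""
    return np.real(np.trace(rho @ (log2m(rho) - log2m(sigma))))

# Commuting (diagonal) example: reduces to classical KL divergence.
rho = np.diag([0.9, 0.1])
sigma = np.eye(2) / 2
print(relative_entropy(rho, sigma))  # ~0.531
```

For commuting states this reduces to the classical $D(p\|q)$ of the next slide, which is why hypothesis testing enters the proof.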
Hypothesis testing
Setting: We have $n$ samples from classical distribution $p$ or from $q$, and want to accept $p$ and reject $q$.

The test: We choose a test that depends only on $p$ and is guaranteed to accept $p$ with probability $\geq 0.99$. More concretely: if our samples are $i_1, \ldots, i_n$ and they have type $(t_1, \ldots, t_d)$, then we demand that $t_i \approx np_i$.

The rate function: The probability of accepting $q^{\otimes n}$ is
$$\approx \binom{n}{np_1, \ldots, np_d} \prod_{i=1}^{d} q_i^{np_i} \approx \exp(-nD(p\|q)).$$
Thus the rate function is $D(p\|q)$.

Example: Pinsker's inequality states that $D(p\|q) \geq \frac{1}{2\ln 2}\|p - q\|_1^2$. Therefore, if we are sampling from $q$, the probability of observing a distribution $p$ with $\|p - q\|_1 = \delta$ is $\exp(-\mathrm{const}\cdot n\delta^2)$.
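The rate function and Pinsker's bound can be checked numerically on a small example (a sketch; function names are ours):

```python
# Sketch: KL divergence in bits and the Pinsker lower bound.
import numpy as np

def kl_bits(p, q):
    """D(p||q) = sum_i p_i log2(p_i / q_i), in bits."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

def pinsker_lower_bound(p, q):
    """(1 / (2 ln 2)) * ||p - q||_1^2."""
    l1 = np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float)))
    return l1**2 / (2 * np.log(2))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]
print(kl_bits(p, q))              # ~0.265
print(pinsker_lower_bound(p, q))  # ~0.260
```

Here $\|p - q\|_1 = 0.6$, and the bound $\approx 0.26$ sits just below $D(p\|q) \approx 0.27$, so the $\exp(-\mathrm{const}\cdot n\delta^2)$ rejection rate follows.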