Quantum state discrimination with the Pretty Good Measurement



  1. Quantum state discrimination with the “Pretty Good Measurement”. Proudly presented by Jonathan Lavoie, Theory of Quantum Communication, 27 July 2010.

  2. Problem statement: discrimination of quantum states. The PGM, $|\mu_k\rangle = \Phi^{-1/2}|s_k\rangle$, as a decoding observable. Derivation of an upper bound, $P_E \le \frac{2}{N}\sum_i \big[(1 - n_i) + \tfrac{1}{2}\sum_{j \ne i} |\langle s_i|s_j\rangle|^2\big]$, on the probability of error.

  3. Discrimination of quantum states. Fundamental to the theory of quantum communication. The question is: how can we best discriminate between a known set of states $|\psi_i\rangle$, each having been prepared with a known probability $p_i$? A general measurement (POVM) is generally the best approach; however, the “optimality” of a measurement is relative to the problem. Unambiguous state discrimination allows the measurement to have inconclusive results, and when the result is not inconclusive it is always correct. For minimum-error discrimination there exists a necessary and sufficient condition for the optimal measurement. For the optimization of mutual information there is only a necessary condition. In general, optimality for one criterion does not imply optimality for the others! (e.g. the tetrahedron states of Assignment 2)

  4. The pretty good measurement has desirable properties. It has a second name: the “square-root measurement”. Its construction is simple: one only needs to know the ensemble of signal states. It minimizes the probability of detection error for symmetric states of the form $|\psi_i\rangle = U|\psi_{i-1}\rangle = U^i|\psi_0\rangle$, $i = 0, \dots, N-1$, with $U^N = \mathbb{1}$ (M. Ban et al., International Journal of Theoretical Physics, Vol. 36, No. 6, 1997). It is “pretty good” at distinguishing states that are almost orthogonal and equally likely (Hausladen & Wootters, Journal of Modern Optics, 1994, Vol. 41, No. 12, 2385-2390). It is asymptotically optimal (hopefully what I can prove today...). And it is optimal for the Hidden Subgroup Problem (arXiv: quant-ph/0501044).
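As a small illustration of the symmetric-state property (my own sketch, not part of the slides; the cyclic-shift unitary, dimension, and seed are arbitrary choices), the snippet below builds states $|\psi_i\rangle = U^i|\psi_0\rangle$ and checks numerically that the PGM vectors inherit the same symmetry, $|\mu_i\rangle = U^i|\mu_0\rangle$, which is the covariant structure behind the Ban et al. optimality result.

```python
# Sketch (not from the slides): for symmetric states |psi_i> = U^i |psi_0> with U^N = 1,
# Phi = sum_k |psi_k><psi_k| commutes with U, so the PGM vectors satisfy |mu_i> = U^i |mu_0>.
import numpy as np

N = 5
U = np.roll(np.eye(N), 1, axis=0)                       # cyclic-shift unitary, U^N = identity
rng = np.random.default_rng(1)
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
psi0 /= np.linalg.norm(psi0)
psis = [np.linalg.matrix_power(U, i) @ psi0 for i in range(N)]

Phi = sum(np.outer(p, p.conj()) for p in psis)          # Phi = sum_k |psi_k><psi_k|
w, V = np.linalg.eigh(Phi)
w_inv_sqrt = np.array([1 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w])
Phi_inv_sqrt = V @ np.diag(w_inv_sqrt) @ V.conj().T     # Phi^(-1/2) on its support

mus = [Phi_inv_sqrt @ p for p in psis]                  # PGM vectors |mu_i> = Phi^(-1/2)|psi_i>
print(all(np.allclose(mus[i], np.linalg.matrix_power(U, i) @ mus[0]) for i in range(N)))  # True
```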

  5. The problem: Alice has an ensemble of N code words $\{|s_i\rangle\}$, each used with equal frequency 1/N and each consisting of l letters, s.t. $|s_i\rangle = |a_1 a_2 a_3 \dots a_l\rangle$. Bob constructs a general measurement (POVM) to deduce the message with the minimum probability of error.

  6. Bob constructs a general measurement (the PGM) to distinguish the states. Given the signal states $|s_k\rangle$, the measurement vectors are $|\mu_k\rangle = \Phi^{-1/2}|s_k\rangle$, with corresponding positive operators $|\mu_k\rangle\langle\mu_k|$, where $\Phi = \sum_k |s_k\rangle\langle s_k|$. This is a legitimate POVM, since $\sum_k |\mu_k\rangle\langle\mu_k| = \Phi^{-1/2}\sum_k |s_k\rangle\langle s_k|\,\Phi^{-1/2} = \Phi^{-1/2}\,\Phi\,\Phi^{-1/2} = \mathbb{1}$. Other properties of the measurement vectors: construct the Gram matrix $S_{jk} = \langle s_j|s_k\rangle$ (a Hermitian $N \times N$ matrix with non-negative eigenvalues); the $|\mu_k\rangle$ vectors are then related to the square root of that matrix by $(\sqrt{S})_{jk} = \langle\mu_j|s_k\rangle$.
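To make the construction concrete, here is a small numerical sketch (my own, using an arbitrary random ensemble; not from the slides) that builds the PGM and checks the two facts quoted above: the operators $|\mu_k\rangle\langle\mu_k|$ sum to the identity, and $(\sqrt{S})_{jk} = \langle\mu_j|s_k\rangle$.

```python
# Sketch: construct the PGM for a random ensemble and verify
#   sum_k |mu_k><mu_k| = 1   and   (sqrt(S))_jk = <mu_j|s_k>.
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 4                                              # number of signals, Hilbert-space dimension
raw = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
signals = [v / np.linalg.norm(v) for v in raw]           # normalized signal states |s_k>

Phi = sum(np.outer(s, s.conj()) for s in signals)        # Phi = sum_k |s_k><s_k|
w, V = np.linalg.eigh(Phi)
Phi_inv_sqrt = V @ np.diag(1 / np.sqrt(w)) @ V.conj().T  # Phi^(-1/2) (generically full rank here)
mu = [Phi_inv_sqrt @ s for s in signals]                 # |mu_k> = Phi^(-1/2)|s_k>

# POVM completeness
E = sum(np.outer(m, m.conj()) for m in mu)
print(np.allclose(E, np.eye(d)))                         # True

# Gram matrix and its square root vs. the overlaps <mu_j|s_k>
S = np.array([[np.vdot(sj, sk) for sk in signals] for sj in signals])
ws, Vs = np.linalg.eigh(S)
sqrt_S = Vs @ np.diag(np.sqrt(np.clip(ws, 0, None))) @ Vs.conj().T
overlaps = np.array([[np.vdot(mj, sk) for sk in signals] for mj in mu])
print(np.allclose(sqrt_S, overlaps))                     # True
```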

  7. Given the ensemble of input signals, we derive the average probability of error using the PGM. Alice sends signal $|s_i\rangle$ with probability 1/N. The probability that Bob obtains the correct outcome is $P(\mu_i|s_i) = \mathrm{tr}\big(|\mu_i\rangle\langle\mu_i|\,|s_i\rangle\langle s_i|\big) = |\langle\mu_i|s_i\rangle|^2$. The average probability of error is therefore $P_E = 1 - \frac{1}{N}\sum_i |\langle\mu_i|s_i\rangle|^2 = \frac{1}{N}\sum_i (1 - \langle\mu_i|s_i\rangle)(1 + \langle\mu_i|s_i\rangle)$, and we look for an upper bound. My interpretation: since $(1-b)(1+b) = 1 - b^2 \le 2 - 2b$ (equivalently $2b \le 1 + b^2$), we get $P_E \le \frac{2}{N}\sum_i \big(1 - \langle\mu_i|s_i\rangle\big)$. In terms of the Gram matrix, with $(\sqrt{S})_{jk} = \langle\mu_j|s_k\rangle$, this reads $P_E \le \frac{2}{N}\sum_i \big(1 - (\sqrt{S})_{ii}\big)$. We can simplify a bit more...
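The following sketch (again my own illustration, with random normalized states and equal priors) evaluates $P_E$ for the PGM directly and confirms the intermediate bound $P_E \le \frac{2}{N}\sum_i (1 - (\sqrt{S})_{ii})$.

```python
# Sketch: exact PGM error probability vs. the bound (2/N) sum_i (1 - (sqrt(S))_ii).
import numpy as np

rng = np.random.default_rng(2)
N, d = 4, 4
raw = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
signals = [v / np.linalg.norm(v) for v in raw]

Phi = sum(np.outer(s, s.conj()) for s in signals)
w, V = np.linalg.eigh(Phi)
Phi_inv_sqrt = V @ np.diag(1 / np.sqrt(w)) @ V.conj().T
mu = [Phi_inv_sqrt @ s for s in signals]

# P_E = 1 - (1/N) sum_i |<mu_i|s_i>|^2
P_E = 1 - np.mean([abs(np.vdot(m, s)) ** 2 for m, s in zip(mu, signals)])

# (sqrt(S))_ii = <mu_i|s_i> is real and lies in [0, 1], so the bound reads:
bound = (2 / N) * sum(1 - np.vdot(m, s).real for m, s in zip(mu, signals))
print(P_E, bound, P_E <= bound + 1e-12)                  # the bound holds
```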

  8. The square-root function is bounded below by a parabola: $\sqrt{x} \ge \tfrac{3}{2}x - \tfrac{1}{2}x^2$ for $x \ge 0$ (indeed $\tfrac{1}{2}x^2 + \sqrt{x} - \tfrac{3}{2}x = \tfrac{\sqrt{x}}{2}(\sqrt{x}-1)^2(\sqrt{x}+2) \ge 0$). This inequality can be applied to the Gram matrix $S_{jk} = \langle s_j|s_k\rangle$: $\sqrt{S} \ge \tfrac{3}{2}S - \tfrac{1}{2}S^2$. Why? The matrix S is Hermitian with non-negative eigenvalues, therefore diagonalizable, $S = UDU^\dagger$. The operator inequality $\tfrac{1}{2}S^2 + \sqrt{S} - \tfrac{3}{2}S \ge 0$ holds if $\tfrac{1}{2}UD^2U^\dagger + U\sqrt{D}U^\dagger - \tfrac{3}{2}UDU^\dagger \ge 0$, i.e. $U\big[\tfrac{1}{2}D^2 + \sqrt{D} - \tfrac{3}{2}D\big]U^\dagger \ge 0$. The matrix in brackets is diagonal with entries $\tfrac{1}{2}d^2 + \sqrt{d} - \tfrac{3}{2}d$ for each eigenvalue $d \ge 0$, and by the scalar inequality all of these entries are non-negative.
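A quick numerical check of both forms of the inequality (my own sketch; the matrix below is just a generic Hermitian positive-semidefinite stand-in for the Gram matrix):

```python
# Sketch: check sqrt(x) >= (3/2)x - (1/2)x^2 on a grid, and the operator version
# sqrt(S) - (3/2)S + (1/2)S^2 >= 0 by inspecting eigenvalues.
import numpy as np

x = np.linspace(0, 5, 501)
print(np.all(np.sqrt(x) >= 1.5 * x - 0.5 * x ** 2))      # True

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4)) + 1j * rng.normal(size=(6, 4))
S = A @ A.conj().T                                       # Hermitian, non-negative eigenvalues (rank <= 4)
w, V = np.linalg.eigh(S)
sqrt_S = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T
diff = sqrt_S - 1.5 * S + 0.5 * S @ S
print(np.linalg.eigvalsh(diff).min() >= -1e-9)           # True: the operator inequality holds
```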

  9. Almost there... We have $P_E \le \frac{2}{N}\sum_i \big(1 - (\sqrt{S})_{ii}\big)$, the Gram matrix $S_{jk} = \langle s_j|s_k\rangle$, and the operator inequality $\sqrt{S} \ge \tfrac{3}{2}S - \tfrac{1}{2}S^2$. Recall what an operator inequality means: $M \ge 0$ means $\langle\psi|M|\psi\rangle \ge 0$ for all $|\psi\rangle$, so $M \ge N + P$ means $M - N - P \ge 0$, i.e. $\langle\psi|M - N - P|\psi\rangle \ge 0$ for all $|\psi\rangle$, and hence $\langle\psi|M|\psi\rangle \ge \langle\psi|N|\psi\rangle + \langle\psi|P|\psi\rangle$. Therefore $\sqrt{S} \ge \tfrac{3}{2}S - \tfrac{1}{2}S^2$ means that, for a complex vector $|N\rangle$ with components $z_k$, $\langle N|\sqrt{S}|N\rangle \ge \tfrac{3}{2}\langle N|S|N\rangle - \tfrac{1}{2}\langle N|S^2|N\rangle$, which is the same as $\sum_{k,l} z_k^* (\sqrt{S})_{kl}\, z_l \ge \tfrac{3}{2}\sum_{k,l} z_k^* S_{kl}\, z_l - \tfrac{1}{2}\sum_{k,l,j} z_k^* S_{kj} S_{jl}\, z_l$. This is very general, but for a given i we can choose $z_i = 1$ and $z_k = 0$ for $k \ne i$. This yields $(\sqrt{S})_{ii} \ge \tfrac{3}{2}S_{ii} - \tfrac{1}{2}\sum_j S_{ij}S_{ji}$, and using $S_{ii} = n_i$, $(\sqrt{S})_{ii} \ge \tfrac{3}{2}n_i - \tfrac{1}{2}n_i^2 - \tfrac{1}{2}\sum_{j \ne i} S_{ij}S_{ji}$.
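A sketch of the component form (my own check, with randomly chosen subnormalized states so that $n_i < 1$; the ensemble is arbitrary), picking out the i-th diagonal element exactly as described above:

```python
# Sketch: verify (sqrt(S))_ii >= (3/2) n_i - (1/2) n_i^2 - (1/2) sum_{j != i} S_ij S_ji
# for every i of a random ensemble.
import numpy as np

rng = np.random.default_rng(4)
N, d = 5, 3
raw = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
signals = [v / np.linalg.norm(v) * rng.uniform(0.8, 1.0) for v in raw]   # n_i = <s_i|s_i> < 1

S = np.array([[np.vdot(sj, sk) for sk in signals] for sj in signals])
w, V = np.linalg.eigh(S)
sqrt_S = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.conj().T

for i in range(N):
    n_i = S[i, i].real
    cross = sum((S[i, j] * S[j, i]).real for j in range(N) if j != i)    # sum_{j != i} |<s_i|s_j>|^2
    assert sqrt_S[i, i].real >= 1.5 * n_i - 0.5 * n_i ** 2 - 0.5 * cross - 1e-9
print("diagonal bound holds for every i")
```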

  10. Finally... the upper bound on decoding errors. Given that $P_E \le \frac{2}{N}\sum_i \big(1 - (\sqrt{S})_{ii}\big)$ and $(\sqrt{S})_{ii} \ge \tfrac{3}{2}n_i - \tfrac{1}{2}n_i^2 - \tfrac{1}{2}\sum_{j \ne i} S_{ij}S_{ji}$, then $P_E \le \frac{2}{N}\sum_i \big(1 - \tfrac{3}{2}n_i + \tfrac{1}{2}n_i^2 + \tfrac{1}{2}\sum_{j \ne i} S_{ij}S_{ji}\big) = \frac{2}{N}\sum_i \big[(1 - n_i)(1 - n_i/2) + \tfrac{1}{2}\sum_{j \ne i} S_{ij}S_{ji}\big] \le \frac{2}{N}\sum_i \big[(1 - n_i) + \tfrac{1}{2}\sum_{j \ne i} S_{ij}S_{ji}\big]$, since $0 \le n_i \le 1$. With $S_{ii} = n_i$ and $S_{ij}S_{ji} = |\langle s_i|s_j\rangle|^2$, this is $P_E \le \frac{2}{N}\sum_i \big[(1 - n_i) + \tfrac{1}{2}\sum_{j \ne i} |\langle s_i|s_j\rangle|^2\big]$, where the $|s_i\rangle$ are the signal states.
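To close, a sketch (my own, random ensemble with equal priors and slightly subnormalized states) comparing the actual PGM error probability with the final bound; for nearly orthogonal, nearly normalized states both quantities become small, which is the regime of interest in the coding application.

```python
# Sketch: actual PGM error probability vs. the final bound
#   P_E <= (2/N) sum_i [ (1 - n_i) + (1/2) sum_{j != i} |<s_i|s_j>|^2 ].
import numpy as np

rng = np.random.default_rng(5)
N, d = 4, 6
raw = rng.normal(size=(N, d)) + 1j * rng.normal(size=(N, d))
signals = [v / np.linalg.norm(v) * rng.uniform(0.9, 1.0) for v in raw]   # n_i slightly below 1

Phi = sum(np.outer(s, s.conj()) for s in signals)
w, V = np.linalg.eigh(Phi)
w_inv_sqrt = np.array([1 / np.sqrt(x) if x > 1e-12 else 0.0 for x in w])  # pseudo-inverse sqrt
Phi_inv_sqrt = V @ np.diag(w_inv_sqrt) @ V.conj().T
mu = [Phi_inv_sqrt @ s for s in signals]

P_E = 1 - np.mean([abs(np.vdot(m, s)) ** 2 for m, s in zip(mu, signals)])

bound = (2 / N) * sum(
    (1 - np.vdot(si, si).real)
    + 0.5 * sum(abs(np.vdot(si, sj)) ** 2 for j, sj in enumerate(signals) if j != i)
    for i, si in enumerate(signals)
)
print(P_E, bound, P_E <= bound)                          # the bound holds
```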
