  1. Collège de France abroad Lectures Quantum information with real or artificial atoms and photons in cavities Lecture 3: Estimation and reconstruction of quantum states in Cavity QED experiments: Fock and Schrödinger cat states

  2. Introduction Ideally, the direct reconstruction of a quantum state is straightforward: one accumulates statistics from the measurement of a complete set of observables, performed on a large number of realizations of the system's state. The results fully constrain the system's density operator, which is found by solving a set of equations equating the observed statistics with the theoretical ones. This ideal procedure may lead to difficulties. If the data are noisy, the directly constrained density operator may turn out to be non-physical, e.g. with negative eigenvalues. The number of available copies of the system might be small, so that the data present large fluctuations. Sometimes the set of measured observables is incomplete, making it impossible to fully constrain the parameters defining the system's state. In these realistic situations, state reconstruction is an estimation problem: how can we optimally guess the parameters defining the state from the information provided by incomplete measurements, performed on a finite set of copies and suffering limitations from noise? Inspired by classical estimation theory, recalled in §III-A, we will analyze the general methods of maximum likelihood (§III-B) and maximum entropy (§III-C), before describing their application in Cavity QED (§III-D), illustrated by the reconstruction of Schrödinger cat and Fock states of a field (§III-E).

  3. III-A. Reminder about classical estimation theory

  4. Fisher Information & Cramér-Rao bound Measuring a random variable X (possibly a multi-component vector) yields a result x. The known probability law p(x|θ) of X depends upon an unknown parameter θ, which can also be a vector. For reasons made clear below, the function p(x|θ) is called the likelihood of θ corresponding to result x. The measurement of X brings information about θ that we want to quantify. We call estimator θ̂(x) a function which associates to each result x an estimation of the true θ. Many choices of estimator are possible. For a given one, different results x_i generally yield different estimations θ̂(x_i). The variance of θ̂(x), averaged over measurements, defines the estimator precision. This variance has a lower limit independent of the estimator, called the Cramér-Rao bound. This limit is related to a function of the likelihood, called the Fisher information. Let us briefly recall how these results are obtained. We consider here unbiased estimators, whose average over a large number of measurements yields the true value θ_t of θ. If X is a continuous variable, this condition writes (all results can be extended to discrete variables by replacing integrals by sums):

  \[ \left\langle \hat{\theta}(x) - \theta_t \right\rangle = \int \left[ \hat{\theta}(x) - \theta_t \right] p(x|\theta_t)\, dx = 0 \qquad (4\text{-}1) \]

  Hence, by derivation with respect to θ_t (θ is supposed here continuous and having a single component):

  \[ \int \left[ \hat{\theta}(x) - \theta_t \right] \frac{\partial p(x|\theta_t)}{\partial \theta_t}\, dx - \int p(x|\theta_t)\, dx = 0 \quad \Rightarrow \quad \int \left[ \hat{\theta}(x) - \theta_t \right] \frac{\partial p(x|\theta_t)}{\partial \theta_t}\, dx = 1 \qquad (4\text{-}2) \]
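  A minimal numerical sketch of the unbiasedness condition (4-1), not part of the original slides: it assumes NumPy, a unit-variance Gaussian law whose mean plays the role of θ, and the single-measurement estimator θ̂(x) = x; all parameter values are illustrative.

```python
# Sketch (assumed example): check that the average of theta_hat(x) - theta_t
# over many realizations vanishes, i.e. the estimator is unbiased (Eq. 4-1).
import numpy as np

rng = np.random.default_rng(0)
theta_t = 1.3                 # true parameter: mean of a unit-variance Gaussian
n_repetitions = 200_000       # number of independent realizations

# one measurement x per realization; the estimator is theta_hat(x) = x
x = rng.normal(loc=theta_t, scale=1.0, size=n_repetitions)
print(np.mean(x - theta_t))   # ~ 0 up to statistical noise
```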

  5. Cramér-Rao bound (cont'd) We then use the identity:

  \[ \frac{\partial p}{\partial \theta} = p\, \frac{\partial \log p}{\partial \theta} \qquad (4\text{-}3) \]

  which leads to:

  \[ \int \left[ \hat{\theta}(x) - \theta_t \right] p\, \frac{\partial \log p}{\partial \theta_t}\, dx = \int \left[ \hat{\theta}(x) - \theta_t \right] \sqrt{p}\, \left[ \sqrt{p}\, \frac{\partial \log p}{\partial \theta_t} \right] dx = 1 \qquad (4\text{-}4) \]

  We then square the integral and use the Cauchy-Schwarz inequality \( \left[ \int f(x)\, g(x)\, dx \right]^2 \le \int f^2(x)\, dx \int g^2(x)\, dx \):

  \[ \int \left[ \hat{\theta}(x) - \theta_t \right]^2 p(x|\theta_t)\, dx \;\; \int \left[ \frac{\partial \log p(x|\theta_t)}{\partial \theta_t} \right]^2 p(x|\theta_t)\, dx \;\ge\; 1 \qquad (4\text{-}5) \]

  6. Cramér-Rao bound (cont'd) We then introduce the variance of θ̂:

  \[ \sigma_\theta^2 = \int \left[ \hat{\theta}(x) - \theta_t \right]^2 p(x|\theta_t)\, dx \qquad (4\text{-}6) \]

  and we get the Cramér-Rao inequality:

  \[ \sigma_\theta^2 \ge \frac{1}{I(\theta_t)} \qquad (4\text{-}7) \]

  where we have defined the Fisher information (a function of θ):

  \[ I(\theta) = \int \left[ \frac{\partial \log p(x|\theta)}{\partial \theta} \right]^2 p(x|\theta)\, dx \qquad (4\text{-}8) \]

  I(θ) is the expectation value of the square of the logarithmic derivative of the likelihood function with respect to θ (*). The larger I(θ) is, the more information the statistical law contains allowing us to pin down θ, and the smaller the variance bound is. An estimation is said to be optimal if its standard deviation (square root of the variance) reaches the Cramér-Rao bound, i.e. σ_θ = I^{-1/2}(θ_t).

  (*) The Fisher information definition is generalized to situations where θ is a multicomponent vector by introducing a Fisher matrix involving the expectation values of second-order partial derivatives of log p(x|θ) with respect to the components of θ. This generalization is beyond the scope of this lecture.
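  A minimal numerical sketch of definitions (4-6) to (4-8), not part of the original slides: it assumes NumPy and again takes the mean of a unit-variance Gaussian as the unknown θ, for which I(θ) = 1 and the single-measurement estimator θ̂(x) = x saturates the Cramér-Rao bound; the grid and parameter values are illustrative.

```python
# Sketch (assumed example): evaluate the Fisher information (Eq. 4-8) by numerical
# integration and compare the estimator variance (Eq. 4-6) with the bound (Eq. 4-7).
import numpy as np

theta_t = 0.7
x = np.linspace(theta_t - 10, theta_t + 10, 20_001)   # integration grid
dx = x[1] - x[0]

p = np.exp(-0.5 * (x - theta_t) ** 2) / np.sqrt(2 * np.pi)   # p(x|theta_t)
dlogp_dtheta = x - theta_t                                   # d log p / d theta

fisher = np.sum(dlogp_dtheta ** 2 * p) * dx          # I(theta_t), equals 1 here
variance = np.sum((x - theta_t) ** 2 * p) * dx       # variance of theta_hat(x) = x

# the bound sigma^2 >= 1/I is saturated for this estimator: all three values ~ 1
print(fisher, variance, 1.0 / fisher)
```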

  7. Additivity of Fisher information Performing N independent measurements of X yields results x_i with the global probability:

  \[ p(x_1 \ldots x_i \ldots x_N | \theta) = \prod_i p(x_i|\theta) \quad \Rightarrow \quad \log p(x_1 \ldots x_i \ldots x_N | \theta) = \sum_i \log p(x_i|\theta) \qquad (4\text{-}9) \]

  Hence the Fisher information generated by the N independent measurements:

  \[ I_N(\theta) = \int \left[ \frac{\partial \log p(x_1 \ldots x_N|\theta)}{\partial \theta} \right]^2 p(x_1 \ldots x_N|\theta)\, dx_1 \ldots dx_N = \sum_i \int \left[ \frac{\partial \log p(x_i|\theta)}{\partial \theta} \right]^2 p(x_i|\theta)\, dx_i + \sum_{i \ne j} \int \frac{\partial \log p(x_i|\theta)}{\partial \theta}\, p(x_i|\theta)\, dx_i \int \frac{\partial \log p(x_j|\theta)}{\partial \theta}\, p(x_j|\theta)\, dx_j \qquad (4\text{-}10) \]

  The cross terms vanish since

  \[ \int \frac{\partial \log p(x)}{\partial \theta}\, p(x)\, dx = \int \frac{\partial p}{\partial \theta}\, dx = 0, \]

  which establishes the additivity of the Fisher information:

  \[ I_N(\theta) = N I_1(\theta) \qquad (4\text{-}11) \]

  and the N-dependence of the optimal estimation standard deviation:

  \[ \sigma_N = \frac{1}{\sqrt{N I_1(\theta_t)}} = \frac{\sigma_1}{\sqrt{N}} \,; \qquad \sigma_1 = \frac{1}{\sqrt{I_1(\theta_t)}} \qquad (4\text{-}12) \]

  We retrieve the known result that the standard deviation of the estimation (the error) decreases as the inverse of the square root of the number of measurements.
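  A minimal Monte Carlo sketch of the 1/√N scaling (4-12), not part of the original slides: it assumes NumPy and the same Gaussian-mean example (I_1 = 1) with the sample-mean estimator; the values of N and the number of repetitions are illustrative.

```python
# Sketch (assumed example): the spread of the sample-mean estimator over many
# repetitions should follow sigma_N = sigma_1 / sqrt(N) (Eq. 4-12).
import numpy as np

rng = np.random.default_rng(1)
theta_t, n_repetitions = 0.0, 10_000

for N in (1, 10, 100, 1000):
    samples = rng.normal(loc=theta_t, scale=1.0, size=(n_repetitions, N))
    theta_hat = samples.mean(axis=1)              # estimator for each repetition
    print(N, theta_hat.std(), 1.0 / np.sqrt(N))   # measured spread vs 1/sqrt(N I_1)
```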

  8. Bayes law and Maximum Likelihood estimator A natural choice for the estimator θ̂(x) is justified by Bayes law, combined with an assumption of minimal knowledge about θ prior to the measurements. The joint probability p(x, θ) for finding values x and θ can be expressed in terms of the a priori probabilities p(x) and p(θ) and the conditional probabilities p(x|θ) and p(θ|x):

  \[ p(x,\theta) = p(x|\theta)\, p(\theta) = p(\theta|x)\, p(x) \quad \Rightarrow \quad p(\theta|x) = \frac{p(x|\theta)\, p(\theta)}{p(x)} = \frac{p(x|\theta)\, p(\theta)}{\int p(x|\theta)\, p(\theta)\, d\theta} \qquad (4\text{-}13) \]

  If nothing is a priori known about θ, we assume a flat p(θ) distribution, leading to:

  \[ p(\theta|x) = \frac{p(x|\theta)}{\int p(x|\theta)\, d\theta} \qquad (4\text{-}14) \]

  The probability distribution of θ after result x has been found is thus given by the likelihood function p(x|θ), normalised over θ. A natural estimator picks the value of θ which maximizes this probability distribution, and hence the likelihood function p(x|θ). We thus define the Maximum Likelihood estimator θ_ML(x) (abbreviated as "Max Like") by the implicit equation:

  \[ \theta_{ML}(x) \;\text{ solution of }\; \frac{\partial p(x|\theta)}{\partial \theta} = 0 \;\Leftrightarrow\; \left[ \frac{\partial \log p(x|\theta)}{\partial \theta} \right]_{\theta = \theta_{ML}} = 0 \qquad (4\text{-}15) \]

  It can be shown that the Max Like estimator is optimal in the limit N → ∞.
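  A minimal numerical sketch of the Max Like prescription (4-15), not part of the original slides: it assumes NumPy, a Bernoulli law p(1|θ) = θ (a deliberately simpler parametrization than the coin game of the next slide), and a brute-force search over a grid of candidate θ values; all parameters are illustrative.

```python
# Sketch (assumed example): locate the maximum of the log-likelihood of N observed bits
# over a grid of theta values and compare with the analytic ML result n1/N.
import numpy as np

rng = np.random.default_rng(2)
theta_t, N = 0.3, 500
data = rng.random(N) < theta_t                   # N draws of the bit X, p(1) = theta_t

thetas = np.linspace(1e-3, 1 - 1e-3, 9_999)      # grid of candidate parameters
n1 = data.sum()                                  # number of 1 outcomes
log_likelihood = n1 * np.log(thetas) + (N - n1) * np.log(1 - thetas)

theta_ml = thetas[np.argmax(log_likelihood)]     # value maximizing p(data|theta)
print(theta_ml, n1 / N)                          # both agree up to the grid spacing
```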

  9. Simple illustration: a coin game Consider a heads or tails draw, X taking the bit values x = 0/1 with probabilities p and q = 1-p, which we parametrize with an angle θ by defining p = cos²(θ/2), q = sin²(θ/2), with 0 ≤ θ < π. We get:

  \[ p(0|\theta) = \cos^2\frac{\theta}{2}, \quad p(1|\theta) = \sin^2\frac{\theta}{2}; \qquad \left[ \frac{\partial \log p(0|\theta)}{\partial \theta} \right]^2 = \tan^2\frac{\theta}{2}, \quad \left[ \frac{\partial \log p(1|\theta)}{\partial \theta} \right]^2 = \cot^2\frac{\theta}{2} \qquad (4\text{-}23) \]

  and the Fisher information generated by a single draw, found to be θ-independent:

  \[ I_1(\theta) = p(0|\theta) \left[ \frac{\partial \log p(0|\theta)}{\partial \theta} \right]^2 + p(1|\theta) \left[ \frac{\partial \log p(1|\theta)}{\partial \theta} \right]^2 = \cos^2\frac{\theta}{2} \tan^2\frac{\theta}{2} + \sin^2\frac{\theta}{2} \cot^2\frac{\theta}{2} = 1 \qquad (4\text{-}24) \]

  The precision of an optimal estimation of θ for N draws is thus:

  \[ I_N = N I_1 = N \quad \Rightarrow \quad \Delta_N(\theta) = \frac{1}{\sqrt{N}} \qquad (4\text{-}25) \]

  We then deduce the standard deviation of p and q, and the standard deviation of X (whose expectation value is ⟨X⟩ = q):

  \[ \Delta_N(p) = \Delta_N(q) = \Delta_N(X) = \cos\frac{\theta}{2}\, \sin\frac{\theta}{2}\; \Delta_N(\theta) = \sqrt{\frac{pq}{N}} \qquad (4\text{-}26) \]

  This is a well-known result. N draws yield ~Np times 0 and ~Nq times 1, with a fluctuation √(Npq). The measurement of X, whose average is q, is thus performed with a precision √(Npq)/N = √(pq/N). These results apply to the two-element POVMs realizing the QND measurement of the photon number (see lecture 1).
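  A minimal Monte Carlo sketch of the coin game, not part of the original slides: it assumes NumPy, uses the closed-form ML estimate θ̂ = 2 arccos(√(n₀/N)) implied by the parametrization p = cos²(θ/2), and checks that the spread of θ̂ approaches the Cramér-Rao limit 1/√N since I_1(θ) = 1; the parameter values are illustrative.

```python
# Sketch (assumed example): simulate N-draw coin games many times, estimate theta by
# ML from the count of 0 outcomes, and compare the spread with 1/sqrt(N) (Eq. 4-25).
import numpy as np

rng = np.random.default_rng(3)
theta_t, N, n_repetitions = 1.1, 400, 20_000

q = np.sin(theta_t / 2) ** 2                  # probability of drawing 1
draws = rng.random((n_repetitions, N)) < q    # one row of N bits per game
n0 = N - draws.sum(axis=1)                    # number of 0 outcomes per game

theta_hat = 2 * np.arccos(np.sqrt(n0 / N))    # ML estimate of theta for each game
print(theta_hat.std(), 1.0 / np.sqrt(N))      # close to the Cramer-Rao bound
```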

  10. III-B. Estimation of a quantum state by the Maximum Likelihood principle
