A combinatorial application of quantum information in percolation theory - PowerPoint PPT Presentation

  1. A combinatorial application of quantum information in percolation theory Nicolas Delfosse (Université de Sherbrooke), joint work with Gilles Zémor (Université de Bordeaux). http://arxiv.org/abs/1408.4031 QEC14 - December 19, 2014

  2. From percolation to topological codes ◮ Quantum erasure channel: each qubit is erased (lost) with probability p, independently. ◮ Relation with percolation: for Kitaev’s toric code, correction of erasures is related to a statistical mechanics model called percolation. ◮ Application: apply results from percolation theory to surface codes (Stace, Barrett, Doherty - 2009). Goal: derive results in percolation from quantum information.

  3. Overview Percolation theory. From percolation to quantum error correction. Three bounds on the threshold: ◮ the no-cloning bound, ◮ the LDPC bound, ◮ the homological bound.

  4. Why percolation? The melting of ice is a phase transition at the critical point T = 0 °C: there is a discontinuous evolution of the macroscopic properties of water. Question: how do local interactions between particles induce a global behaviour? Why percolation? It is perhaps the simplest model that exhibits a phase transition.

  5. Percolation in Z² Each edge is red, independently, with probability p. Question: is there an infinite red component?

  6. Percolation in Z² There is a phase transition at p_c: ◮ if p < p_c, there is an infinite red component with probability 0, ◮ if p > p_c, there is an infinite red component with probability 1. Goal: determine the value of p_c. Theorem (H. Kesten, 1980; conjectured 20 years earlier): in the square lattice, p_c = 1/2.
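
To make the phase transition concrete, here is a minimal Monte Carlo sketch (my illustration, not part of the talk): it opens each edge of an L × L patch of Z² with probability p and measures how often an open cluster crosses from the left side to the right side; the crossing frequency jumps near Kesten's value p_c = 1/2. The grid size, trial count, and union-find helpers are my own choices.

```python
# A minimal Monte Carlo illustration (not from the talk): bond percolation
# on an L x L piece of Z^2. Each edge is opened independently with
# probability p; we test for a left-to-right crossing cluster. The crossing
# frequency jumps near Kesten's critical value p_c = 1/2.
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def crosses(L, p, rng):
    parent = list(range(L * L))
    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb
    for i in range(L):
        for j in range(L):
            v = i * L + j
            if j + 1 < L and rng.random() < p:   # horizontal edge
                union(v, v + 1)
            if i + 1 < L and rng.random() < p:   # vertical edge
                union(v, v + L)
    left = {find(parent, i * L) for i in range(L)}
    right = {find(parent, i * L + L - 1) for i in range(L)}
    return bool(left & right)        # a cluster touches both sides

rng = random.Random(0)
L, trials = 64, 200
for p in [0.40, 0.45, 0.50, 0.55, 0.60]:
    freq = sum(crosses(L, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.2f}  crossing frequency = {freq:.2f}")
```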

  7. Percolation in hyperbolic lattices Let G(m) be the m-regular planar tiling. ◮ The exact value of p_c is unknown. ◮ Numerical estimation of p_c is difficult (Benjamini, Schramm; later Baek, Kim, Minnhagen and Gu, Ziff). We will use quantum information theory to bound p_c.

  8. From percolation to topological codes

  9. Kitaev’s toric codes (Kitaev - 1997) ◮ Place a qubit on each edge of a torus. ◮ This gives a global state |ψ⟩ ∈ (C²)^⊗n with n = |E|. ◮ The site operator X_v applies X to the edges incident to the vertex v; the face operator Z_f applies Z to the edges of the face f. The toric code is the ground space of H = − Σ_v X_v − Σ_f Z_f.
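
As an aside, the commutation structure behind this ground space is easy to check by computer. The sketch below is my illustration (the edge-indexing functions h, v and the grid size L are my own conventions, not from the talk): it builds the supports of the site and face operators on an L × L torus and verifies that every X_v overlaps every Z_f on an even number of edges, which is exactly the condition for the stabilizers to commute.

```python
# A minimal sketch (not from the talk) of the toric code's stabilizers on an
# L x L torus. In the binary picture, an X-type operator is a set of edges,
# likewise a Z-type operator, and the two commute iff their supports overlap
# on an even number of edges.
import itertools

L = 4
def h(i, j): return (i % L) * L + (j % L)          # horizontal edge at vertex (i, j)
def v(i, j): return L * L + (i % L) * L + (j % L)  # vertical edge at vertex (i, j)
n = 2 * L * L                                      # one qubit per edge

def star(i, j):       # support of the site operator X_v at vertex (i, j)
    return {h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)}

def plaquette(i, j):  # support of the face operator Z_f at face (i, j)
    return {h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)}

# Every X_v commutes with every Z_f: their supports share 0 or 2 edges.
assert all(len(star(a, b) & plaquette(c, d)) % 2 == 0
           for a, b, c, d in itertools.product(range(L), repeat=4))
print(f"{L}x{L} torus: n = {n} qubits, "
      f"{L*L} site + {L*L} face operators, all commuting")
```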

  10. A problematic erasure Each qubit is erased (lost), independently, with probability p. Correctable ⇔ the erased clusters are planar ⇔ the erased clusters do not cover a non-trivial homology class.

  11. From percolation to toric codes For large tilings, we have: uncorrectable erasures ≈ infinite clusters in percolation. The threshold for percolation in Z² therefore gives a threshold for toric codes: ◮ p < p_c ⇒ the toric code performs well (Stace, Barrett, Doherty - 2009).
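
This correspondence can be probed numerically. Below is a sketch (my illustration, with my own conventions) of the standard wrapping-cluster test: union-find augmented with each vertex's displacement from its root in the universal cover detects an erased cluster that winds around the torus, i.e. supports a non-contractible cycle. Note that it only examines the direct lattice; a complete correctability test for the toric code would also examine the dual lattice.

```python
# A minimal sketch (my illustration): detect whether the erased edges of an
# L x L torus contain a cluster that wraps around the torus. Each vertex
# stores its displacement from its union-find root in the universal cover;
# an erased edge closing a cycle with nonzero net displacement reveals a
# wrapping (non-contractible) cluster.
import random

def wraps(L, p, rng):
    parent = list(range(L * L))
    disp = [(0, 0)] * (L * L)       # pos(vertex) - pos(root), updated lazily

    def find(x):
        if parent[x] == x:
            return x
        r = find(parent[x])
        px, py = disp[parent[x]]    # parent's displacement is now to the root
        dx, dy = disp[x]
        disp[x] = (dx + px, dy + py)
        parent[x] = r
        return r

    wrapping = False
    for i in range(L):
        for j in range(L):
            u = i * L + j
            for (w, s) in [((i * L + (j + 1) % L), (0, 1)),      # right edge
                           ((((i + 1) % L) * L + j), (1, 0))]:   # down edge
                if rng.random() >= p:           # edge not erased
                    continue
                ru, rw = find(u), find(w)
                du, dw = disp[u], disp[w]
                if ru != rw:                    # merge, recording the offset
                    parent[rw] = ru
                    disp[rw] = (du[0] + s[0] - dw[0], du[1] + s[1] - dw[1])
                elif (du[0] + s[0], du[1] + s[1]) != dw:
                    wrapping = True             # cycle with nonzero winding
    return wrapping

rng = random.Random(1)
L, trials = 24, 200
for p in [0.3, 0.5, 0.7]:
    freq = sum(wraps(L, p, rng) for _ in range(trials)) / trials
    print(f"p = {p:.1f}  fraction with a wrapping erased cluster = {freq:.2f}")
```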

  12. Construction of hyperbolic codes First step: relate hyperbolic percolation to topological codes. Using finite versions of G(m), we can define hyperbolic codes (Freedman, Meyer, Luo - 2001; Zémor - 2009). Place a qubit on each edge; then ◮ site operators X_v act on the edges incident to a vertex, ◮ face operators Z_f act on the edges of a face. The hyperbolic code is the ground space of H = − Σ_v X_v − Σ_f Z_f.

  13. A finite hyperbolic tiling of genus 5 [Figure: a finite hyperbolic tiling of genus 5, with labelled boundary edges indicating the identifications.]

  14. From percolation to hyperbolic codes We use quotients of G(m) (proposed by Širáň - 2001) such that ◮ G_r(m) is a finite graph, ◮ G_r(m) locally looks like G(m) (balls of radius r are planar). Then, for large r: p < p_c(G(m)) ⇒ hyperbolic codes perform well.

  15. Application to percolation ◮ No-cloning bound

  16. Capacity of the quantum erasure channel [Figure: k qubits are encoded into n qubits, sent through the channel, and decoded back into k qubits.] What is the highest rate R = k/n with P_err → 0? It is the capacity of the channel. Theorem (Bennett, DiVincenzo, Smolin - 1997): the capacity of the quantum erasure channel is 1 − 2p. Derived from the no-cloning theorem.
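
The slide does not spell the converse out; one standard route (a sketch only, not the talk's argument; the rigorous proof is in the Bennett, DiVincenzo, Smolin paper) goes as follows.

```latex
% A sketch (not from the slides) of the converse bound C <= 1 - 2p.
At $p = \tfrac12$, the receiver and the environment see symmetric outputs,
so a code of positive rate would let both parties recover the input state,
contradicting the no-cloning theorem: $C(\mathcal{N}_{1/2}) = 0$.
For $p < \tfrac12$, note the flagged decomposition
\[
  \mathcal{N}_p = (1-2p)\,\mathrm{id} + 2p\,\mathcal{N}_{1/2},
  \qquad\text{since}\qquad
  (1-2p)\rho + 2p\Bigl(\tfrac{\rho}{2} + \tfrac{|e\rangle\langle e|}{2}\Bigr)
  = (1-p)\rho + p\,|e\rangle\langle e| .
\]
Erasure positions are known to the receiver, so $n$ uses of $\mathcal{N}_p$
behave (by concentration) like $(1-2p)n$ perfect uses plus $2pn$ uses of
$\mathcal{N}_{1/2}$, whence $C(\mathcal{N}_p) \le (1-2p)\cdot 1 + 2p\cdot 0$.
```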

  17. A no-cloning upper bound in percolation Main argument: if p < p_c, then R = 1 − 4/m ≤ 1 − 2p. Theorem (D., Zémor - ITW 2010): the critical probability of the graph G(m) satisfies p_c ≤ 2/m. For comparison, easy combinatorial bounds give 1/(m−1) ≤ p_c ≤ 1 − 1/(m−1).
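
Here is a worked version of the main argument, under the assumption (my reconstruction, not stated on the slide) that the finite quotients are self-dual tilings of a genus-g surface, with every vertex of degree m and every face with m sides.

```latex
% A worked version of the main argument (my reconstruction, assuming
% self-dual {m,m} quotients of a genus-g surface).
Counting edge endpoints gives $|V| = |F| = 2n/m$ with $n = |E|$ qubits, and
the code encodes $k = 2g$ logical qubits. Euler's formula fixes the rate:
\[
  2 - 2g = |V| - |E| + |F| = \frac{4n}{m} - n
  \qquad\Longrightarrow\qquad
  R = \frac{2g}{n} = 1 - \frac{4}{m} + \frac{2}{n} .
\]
If $p < p_c$, these codes correct random erasures with vanishing error
probability, so their rate cannot exceed the erasure-channel capacity:
\[
  1 - \frac{4}{m} \;\le\; 1 - 2p
  \qquad\Longrightarrow\qquad
  p \;\le\; \frac{2}{m},
\]
and letting $p \to p_c$ gives $p_c \le 2/m$.
```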

  18. Application to percolation ◮ No-cloning bound ◮ LDPC bound

  19. Improving the no-cloning bound ◮ The no-cloning bound is tight only if hyperbolic codes achieve capacity. ◮ Hyperbolic quantum codes are defined by bounded-weight generators (LDPC codes). ◮ Classical intuition: classical LDPC codes with bounded-weight checks cannot achieve the capacity of the erasure channel. Difficulty: the no-cloning bound does not take the structure of the codes into account.

  20. A combinatorial bound Example: a stabilizer matrix H with rows (I X Z Y Z), (Z Z X I Z), (I Y Y Y Z), and an erasure 𝓔 = (0 1 1 0 0); write H_𝓔 and H_𝓔̄ for the restrictions of H to the erased and to the non-erased columns. ◮ There are 4² errors E supported in 𝓔. ◮ These errors have 2² distinct syndromes. ◮ Each coset modulo the stabilizer group S contains 2 equivalent errors supported in 𝓔. ⟹ not every error E supported in 𝓔 can be corrected. Lemma: we can correct 2^(rank H − (rank H_𝓔̄ − rank H_𝓔)) errors E supported in 𝓔.
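
These counts can be verified by brute force. The sketch below (my illustration) enumerates the errors supported in the erasure using the binary symplectic representation, phases ignored, and reproduces the three numbers on the slide: 4² errors, 2² syndromes, and 2 stabilizer elements supported inside the erasure.

```python
# A brute-force check (my illustration) of the counts on this slide. A Pauli
# on 5 qubits, phases ignored, is a pair of bit-vectors (x, z); two Paulis
# commute iff the symplectic form x.z' + z.x' vanishes mod 2.
from itertools import product

n = 5
PAULI = {'I': (0, 0), 'X': (1, 0), 'Z': (0, 1), 'Y': (1, 1)}
def row(s):
    xz = [PAULI[c] for c in s]
    return tuple(x for x, _ in xz), tuple(z for _, z in xz)
H = [row("IXZYZ"), row("ZZXIZ"), row("IYYYZ")]   # the stabilizer matrix

erasure = (0, 1, 1, 0, 0)                        # the erasure pattern E

def syndrome(ex, ez):
    return tuple(sum(gx[i] * ez[i] + gz[i] * ex[i] for i in range(n)) % 2
                 for gx, gz in H)

# All 4^|E| = 16 Pauli errors supported inside the erasure.
qubits = [i for i in range(n) if erasure[i]]
errors = []
for choice in product('IXZY', repeat=len(qubits)):
    ex, ez = [0] * n, [0] * n
    for q, c in zip(qubits, choice):
        ex[q], ez[q] = PAULI[c]
    errors.append((tuple(ex), tuple(ez)))
print("errors supported in E:", len(errors))                          # 16
print("distinct syndromes:   ", len({syndrome(*e) for e in errors}))  # 4

# Stabilizer group elements (phases ignored) supported inside the erasure:
# these make errors in the same coset equivalent.
stab = set()
for bits in product((0, 1), repeat=len(H)):
    ex, ez = [0] * n, [0] * n
    for b, (gx, gz) in zip(bits, H):
        if b:
            ex = [a ^ c for a, c in zip(ex, gx)]
            ez = [a ^ c for a, c in zip(ez, gz)]
    stab.add((tuple(ex), tuple(ez)))
inside = [s for s in stab if all(erasure[i] or s[0][i] == s[1][i] == 0
                                 for i in range(n))]
print("stabilizers inside E: ", len(inside))    # 2 equivalent errors / coset
# 4 syndromes x one 2-element coset each = 8 = 2^(3 - (2 - 2)) correctable
# errors, matching the lemma.
```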

  21. Combinatorial version of the no-cloning bound Let (H_t) be a sequence of stabilizer matrices of codes of rate R. Theorem (D., Zémor - QIC 2013): if P_err → 0, then R ≤ 1 − 2p − D(p), where D(p) = lim sup_t E_p[rank H_{t,𝓔̄} − rank H_{t,𝓔}] / n_t. Corollary: when p ≤ 1/2, we have R ≤ 1 − 2p. Remark: with hyperbolic codes, the matrices H_t are sparse.

  22. Rank of a random sparse matrix [Figure: the matrix H with the ≈ pn columns selected by the erasure highlighted, forming the submatrix H_𝓔; sparse rows such as (Z X Z) and (Z Y X), whose support avoids the erased columns, become null rows of H_𝓔.] ◮ Typically, H_𝓔 is an r × np matrix. ◮ When np = r, the square matrix H_𝓔 has almost full rank ⟹ D(p) is close to 0. ◮ BUT for a sparse matrix H, there are αn null rows in H_𝓔 ⟹ bound on D(p). ◮ Similarly, there are βn identical rows of weight 1, ... ⟹ a more accurate bound.
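
The effect of the null rows is easy to see numerically. The sketch below is my illustration and a purely classical GF(2) analogue: it uses the ordinary rank of a random sparse binary matrix rather than the rank of a Pauli stabilizer matrix. It restricts H to a random set of erased columns, measures the rank deficit of H_𝓔, and compares the number of null rows with the prediction r(1−p)^w.

```python
# A Monte Carlo illustration (my sketch, a classical GF(2) analogue of the
# slide): restrict a sparse random binary r x n matrix H, with rows of
# weight w, to a random set of ~pn erased columns and compare rank(H_E)
# with its generic value min(r, |E|). A row of weight w avoids the erasure
# with probability (1-p)^w, so about r(1-p)^w null rows appear, forcing a
# rank deficit.
import random

def gf2_rank(rows):
    """Rank over GF(2); each row is an int bitmask, one bit per column."""
    basis = {}                        # leading-bit position -> basis row
    for row in rows:
        while row:
            lead = row.bit_length() - 1
            if lead in basis:
                row ^= basis[lead]    # cancel the leading bit
            else:
                basis[lead] = row
                break
    return len(basis)

def sparse_row(n, w, rng):
    row = 0
    for j in rng.sample(range(n), w):
        row |= 1 << j
    return row

rng = random.Random(2)
n, r, w, p, trials = 400, 200, 4, 0.5, 50     # row weight 4, np = r
H = [sparse_row(n, w, rng) for _ in range(r)]

deficit = null_rows = 0
for _ in range(trials):
    mask = sum(1 << j for j in range(n) if rng.random() < p)
    HE = [row & mask for row in H]            # restriction to erased columns
    cols = bin(mask).count("1")
    deficit += min(r, cols) - gf2_rank(HE)
    null_rows += sum(1 for row in HE if row == 0)
print(f"average rank deficit of H_E : {deficit / trials:.1f}")
print(f"average number of null rows : {null_rows / trials:.1f} "
      f"(prediction r(1-p)^w = {r * (1 - p) ** w:.1f})")
```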
