SLIDE 1

Bounds on achievable rates of sparse quantum codes over the quantum erasure channel

Nicolas Delfosse (with Gilles Z´ emor)

Institute of Mathematics - Univ. of Bordeaux - France

Second Int. Conf. on Quantum Error Correction - QEC 11 USC Los Angeles - December 5, 2011

SLIDES 2-6

Capacity of a classical channel

m (k bits) → Encoding → x (n bits) → Channel → x' (n bits) → Decoding → m' (k bits)

◮ The channel introduces errors → we add redundancy.

◮ What is the highest rate R = k/n with Perr → 0? → It is the capacity of the channel.¹

◮ We want fast encoding and decoding → sparse codes. In compensation: a bit below the capacity.

◮ With stabilizers of small weight, we can use degeneracy.

1. C. E. Shannon - A mathematical theory of communication. The Bell System Technical Journal, Vol. 27, pp. 379-423, 623-656, July, October, 1948.

SLIDE 7

Plan

1. Capacity of the quantum erasure channel
   1.1 Capacity of the quantum erasure channel
   1.2 Stabilizer codes
   1.3 A combinatorial proof
2. With sparse quantum codes
   2.1 Expected rank of a random sparse submatrix
   2.2 Achievable rates of sparse quantum codes over the QEC
3. An application to percolation theory
   3.1 Kitaev's toric code and percolation
   3.2 Hyperbolic quantum codes
   3.3 Bound on the critical probability

SLIDE 8

Capacity of the quantum erasure channel

What is the highest rate R = k/n of quantum codes with Perr → 0?

Theorem (Bennett, DiVincenzo, Smolin - 97)
The capacity of the quantum erasure channel is 1 − 2p.

◮ Proved with no-cloning² → independent of the properties of the quantum codes.

Goal: find a combinatorial proof and improve it for particular families of codes.

2. C. H. Bennett, D. P. DiVincenzo, and J. A. Smolin - Capacities of Quantum Erasure Channels. Phys. Rev. Lett. 78, 3217-3220 (1997).
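For contrast with the classical case, the two capacities can be tabulated side by side. This is a minimal sketch: the classical binary erasure channel capacity 1 − p is a standard fact brought in for comparison, not something stated on this slide.

```python
def classical_capacity(p):
    """Capacity of the classical binary erasure channel (standard fact, for contrast)."""
    return 1 - p

def quantum_capacity(p):
    """Capacity 1 - 2p of the quantum erasure channel (theorem above); zero for p >= 1/2."""
    return max(0.0, 1 - 2 * p)

for p in (0.1, 0.25, 0.5):
    print(p, classical_capacity(p), quantum_capacity(p))
```

Note that the quantum capacity already vanishes at p = 1/2, while the classical one survives up to p = 1.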

SLIDE 9

Stabilizer codes

◮ S = ⟨S1, . . . , Sr⟩ is a stabilizer group of rank r.
◮ C(S) = Fix(S) is the stabilizer code.
◮ R = (n − r)/n is the rate of the stabilizer code.
◮ The syndrome of E ∈ Pn is σ(E) ∈ F_2^r such that σi = 0 ⇔ E and Si commute.
→ If E′ ∈ S, then E and EE′ have the same effect.
→ We can measure the syndrome. Using the syndrome, we search for the most probable error.
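The commutation rule behind the syndrome (σi = 0 ⇔ E and Si commute) can be sketched with the binary symplectic representation of Pauli strings: two Paulis commute iff their symplectic inner product over F_2 vanishes. The 3-qubit stabilizers below are illustrative toys, not operators from the talk.

```python
def pauli_to_xz(pauli):
    """Encode a Pauli string (e.g. 'XZIY') as binary x- and z-vectors."""
    x = [1 if c in 'XY' else 0 for c in pauli]
    z = [1 if c in 'ZY' else 0 for c in pauli]
    return x, z

def commute(p1, p2):
    """True iff the two Pauli strings commute (symplectic inner product = 0 mod 2)."""
    x1, z1 = pauli_to_xz(p1)
    x2, z2 = pauli_to_xz(p2)
    return sum(x1[i] * z2[i] + z1[i] * x2[i] for i in range(len(p1))) % 2 == 0

def syndrome(error, stabilizers):
    """sigma_i = 0 <=> the error commutes with stabilizer S_i."""
    return [0 if commute(error, s) else 1 for s in stabilizers]

stabs = ['XXI', 'IZZ']          # two toy stabilizers on 3 qubits
print(syndrome('IIX', stabs))   # X on qubit 3 anticommutes with IZZ -> [0, 1]
```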

SLIDE 10

The quantum erasure channel

◮ Each qubit is erased independently with probability p.
◮ erased qubit ↔ random Pauli error I, X, Y, Z, with the erased position known.
◮ On n qubits, e ∈ F_2^n denotes the erased positions: |ψ⟩ → E|ψ⟩ with Supp(E) ⊂ e (we write E ⊂ e).

◮ the erased positions are known: e ∈ F_2^n,
◮ the syndrome is known: σ(E) ∈ F_2^r,
◮ the error E ⊂ e is unknown.

To correct the state, we search for an error E ⊂ e with syndrome σ.
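The channel model above can be sampled directly: draw the erasure pattern e, then a uniform Pauli on each erased position. A minimal sketch (function name and parameters are illustrative):

```python
import random

def quantum_erasure_channel(n, p, rng=random):
    """Sample an erasure pattern e and a uniform Pauli error supported on e.
    Returns (e, error): e marks erased positions; error is a Pauli string
    whose non-identity letters sit only where e is 1 (Supp(E) subset of e)."""
    e = [1 if rng.random() < p else 0 for _ in range(n)]
    error = ''.join(rng.choice('IXYZ') if erased else 'I' for erased in e)
    return e, error

e, E = quantum_erasure_channel(8, 0.25)
print(e, E)  # the decoder sees e and sigma(E), but never E itself
```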

SLIDES 11-15

A combinatorial bound

H =
  I X Z Y Z
  Z Z X I Z
  I Y Y Y Z

e = an erasure pattern of weight 2 (two erased qubits); He denotes the submatrix of the columns of H indexed by e.

◮ There are 4² errors E ⊂ e.
◮ There are 2² syndromes σ(E) with E ⊂ e.
◮ There are 2 errors in each degeneracy class.
→ e cannot be corrected.

◮ There are 2^rank(He) syndromes.
◮ There are 2^(rank(H) − rank(H̄e)) errors in each class.

Lemma
We can correct 2^(rank(H) − (rank(H̄e) − rank(He))) errors E ⊂ e.
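The counting in the lemma rests on ranks of binary matrices over F_2. A small Gaussian-elimination sketch, using a generic toy matrix rather than the talk's actual H:

```python
def rank_f2(rows, ncols):
    """Rank over F_2 of a matrix whose rows are given as Python ints (bitmasks)."""
    rank = 0
    rows = list(rows)
    for col in range(ncols):
        # find a pivot row with a 1 in this column
        pivot = next((i for i in range(rank, len(rows)) if rows[i] >> col & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i] >> col & 1:
                rows[i] ^= rows[rank]  # clear this column in every other row
        rank += 1
    return rank

# 2**rank(He) distinct syndromes are reachable by errors supported on e:
H_e = [0b11, 0b10, 0b01]     # toy 3x2 binary submatrix
print(rank_f2(H_e, 2))       # -> 2, so 2**2 = 4 syndromes
```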

SLIDES 16-17

A combinatorial bound

Let (Ht) be a sequence of stabilizer matrices of rate R.

Theorem (D., Zémor - 2011)
If Perr → 0 then R ≤ 1 − 2p − g(p) ≤ 1 − 2p, where
g(p) = lim sup E_p[ (rank H̄e − rank He) / n ]

◮ For general stabilizer codes, g(p) can be small (≈ 0).
◮ BUT for sparse matrices, this bound is below the capacity.

Goal: estimate g(p) for sparse matrices.

SLIDES 18-26

Rank of a random sparse matrix

Figure: the r × np submatrix He formed by the pn erased columns of H.

◮ Typically, He is an r × np matrix.
◮ When np = r, the square matrix He has almost full rank → g(p) is close to 0.
◮ BUT for a sparse matrix H, there are αn null rows in He → g(p) > λ → bound on achievable rates.
◮ Similarly, there are βn identical rows of weight 1, ... → more accurate bound.
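The "αn null rows" mechanism can be checked empirically: a row of H of weight m is null in He whenever none of its m support positions is erased, which happens with probability (1 − p)^m. A Monte Carlo sketch under that model; all sizes and parameters below are illustrative, not the talk's:

```python
import random

def null_row_fraction(n, r, m, p, trials=200, rng=random):
    """Average fraction of the r rows of H whose support misses the erasure e
    entirely (such rows are null in the column-submatrix H_e)."""
    total = 0.0
    for _ in range(trials):
        erased = {i for i in range(n) if rng.random() < p}
        null = 0
        for _ in range(r):
            support = rng.sample(range(n), m)        # a sparse row of weight m
            if not erased.intersection(support):
                null += 1
        total += null / r
    return total / trials

n, r, m, p = 1000, 500, 8, 0.2
print(null_row_fraction(n, r, m, p))   # close to (1-p)**m = 0.8**8 ≈ 0.168
```

Since this fraction stays bounded away from 0 for any fixed row weight m, the rank deficit of He (and hence g(p)) is bounded below, which is the source of the rate bound.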

SLIDE 27

Achievable rates of sparse CSS codes

Theorem (D., Zémor - 2011)
Achievable rates of CSS(2, m) codes with dX, dZ ≥ 2δ + 1, over the quantum erasure channel of probability p, satisfy:
R ≤ 1 − 2p − g(p) ≤ (1 − 2p) [ (4/(mp)) (1 − (1 − p)^m Sδ(p(1 − p)^(m−2))) − 1 ]

◮ Sδ depends on the generating function for rooted subtrees in the m-regular tree.

SLIDE 28

Achievable rates of sparse CSS codes

Figure: Bounds on achievable rates of CSS(2,8) codes with δ = 0 (blue) and δ = 30 (black)

SLIDES 29-32

Kitaev's toric code and percolation

Figure: The toric code

◮ It is a CSS(2, 4) code.
◮ An erasure is problematic iff it covers a homological cycle.

Figure: The stabilizers (rows of HX are weight-4 X-operators, rows of HZ are weight-4 Z-operators)

◮ What is the probability Pp of an infinite red cluster in Z²? There is a critical probability pc:
  • if p < pc, then Pp = 0,
  • if p > pc, then Pp = 1.
◮ For large graphs: problematic erasure ≈ infinite cluster.
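The infinite-cluster criterion can be explored on a finite L × L grid with a small union-find simulation that tests whether the open sites span from top to bottom (a standard finite-size stand-in for the infinite cluster). Grid size and trial counts are illustrative:

```python
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def spans(L, p, rng=random):
    """True if open sites percolate from the top row to the bottom row."""
    open_site = [rng.random() < p for _ in range(L * L)]
    parent = list(range(L * L + 2))
    top, bottom = L * L, L * L + 1      # two virtual nodes
    def union(a, b):
        parent[find(parent, a)] = find(parent, b)
    for i in range(L * L):
        if not open_site[i]:
            continue
        r, c = divmod(i, L)
        if r == 0: union(i, top)
        if r == L - 1: union(i, bottom)
        for j in (i - L, i - 1 if c else -1):  # up and left neighbours
            if j >= 0 and open_site[j]:
                union(i, j)
    return find(parent, top) == find(parent, bottom)

for p in (0.3, 0.59, 0.8):   # site pc on Z^2 is ~0.5927
    freq = sum(spans(40, p) for _ in range(100)) / 100
    print(p, freq)
```

The spanning frequency jumps from near 0 to near 1 across pc, mirroring the dichotomy Pp = 0 vs Pp = 1 on the slide.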

SLIDE 33

Hyperbolic percolation

Goal: connect percolation and quantum erasure for other graphs.

Figure: A few faces of the 5-regular graph G(5)

Definition
G(m) is the self-dual m-regular tiling.

The determination of pc(G(m)) is difficult (Benjamini, Schramm, and later Baek, Kim, Minnhagen).

SLIDE 34

Percolation and capacity

Using finite quotients of G(m) proposed by Siran in 2001, we constructed surface codes that:

◮ locally look like G(m),
◮ have constant rate R,
◮ are sparse of type (2, m).

Main argument: if p < pc, then R is below the capacity.

SLIDE 35

Percolation and capacity

The critical probability of G(m) satisfies:

Using the capacity R ≤ 1 − 2p:

Theorem (D., Zémor - 2010)
1/(m − 1) ≤ pc ≤ 2/m

Using our bound on quantum LDPC codes:

Theorem (D., Zémor - 2011)
pc ≤ p where p is the solution of:
1 − 4/m = (1 − 2p) [ (4/(mp)) (1 − (1 − p)^m S_{m/2}(p(1 − p)^(m−2))) − 1 ]
SLIDE 36

Numerical results

 m | 1/(m−1) ≤ pc | improved bound: pc ≤ | with the capacity: pc ≤ 2/m
 5 | 0.25         | 0.38                 | 0.40
10 | 0.11         | 0.17                 | 0.20
20 | 0.053        | 0.073                | 0.100
30 | 0.035        | 0.046                | 0.067
40 | 0.026        | 0.034                | 0.050
50 | 0.020        | 0.026                | 0.040
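The first and last columns of the table follow directly from the 2010 theorem (1/(m − 1) ≤ pc ≤ 2/m) and can be recomputed; the middle column needs the 2011 LDPC bound and the function S_{m/2}, which is not reproduced here.

```python
def pc_bounds(m):
    """Lower bound 1/(m-1) and capacity-based upper bound 2/m on pc(G(m))
    from the 2010 theorem. The finer 2011 bound is not recomputed here."""
    return 1 / (m - 1), 2 / m

for m in (5, 10, 20, 30, 40, 50):
    lower, upper = pc_bounds(m)
    print(f"m={m:2d}  {lower:.3f} <= pc <= {upper:.3f}")
```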

SLIDE 37

Conclusion

We obtained:

◮ Similar bounds for CSS(ℓ, m) codes, using hypergraphs.
◮ Similar bounds for stabilizer (ℓ, m) codes.

OPEN QUESTIONS:

◮ With the depolarizing channel?
◮ Do stabilizer codes surpass CSS codes?
◮ What is exactly pc(G(m))?

SLIDE 38

Questions? Thank you for your attention!