Bounds on achievable rates of sparse quantum codes over the quantum erasure channel
Nicolas Delfosse (with Gilles Zémor)
Institute of Mathematics - Univ. of Bordeaux - France
Second Int. Conf. on Quantum Error Correction (QEC 11), USC, Los Angeles
◮ The channel introduces errors
◮ What is the highest rate R = k/n with Perr → 0?
◮ We want fast encoding and decoding
◮ With stabilizers of small weight, we can use degeneracy.
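For reference, the capacity of the quantum erasure channel has the known closed form 1 − 2p (Bennett, DiVincenzo, Smolin), which caps any achievable rate R = k/n. A minimal sketch; the function name is ours, not from the talk:

```python
def quantum_erasure_capacity(p: float) -> float:
    """Capacity of the quantum erasure channel with erasure probability p.

    Known closed form (Bennett-DiVincenzo-Smolin): C = 1 - 2p for p <= 1/2,
    and 0 beyond (no quantum information survives when p >= 1/2).
    """
    if not 0.0 <= p <= 1.0:
        raise ValueError("p must be a probability")
    return max(0.0, 1.0 - 2.0 * p)

# Any rate R = k/n achievable with Perr -> 0 satisfies R <= 1 - 2p:
print(quantum_erasure_capacity(0.25))  # -> 0.5
```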
1 Capacity of the quantum erasure channel
2 With sparse quantum codes
3 An application to percolation theory
◮ S = ⟨S1, . . . , Sr⟩ a stabilizer group of rank r
◮ C(S) = Fix(S) is the stabilizer code
◮ R = (n − r)/n
◮ The syndrome of E ∈ Pn is σ(E) ∈ F_2^r such that σ(E)_i = 1 iff E anti-commutes with Si
◮ e ∈ F_2^n denotes the erased positions
◮ the erased positions are known: e ∈ F_2^n,
◮ the syndrome is known: σ(E) ∈ F_2^r,
◮ the error E ⊂ e is unknown.
e = [erasure pattern figure]
◮ There are 2^rank(He) syndromes σ(E)
◮ There are 2^(2|e| − rank(He)) errors E ⊂ e in each syndrome class
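The syndrome and error counts for an erasure e are driven by the rank of He over F2. A minimal sketch of computing that rank; the parity-check matrix and erasure below are illustrative assumptions, not taken from the talk:

```python
import numpy as np

def gf2_rank(M: np.ndarray) -> int:
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        # Find a pivot for column c at or below the current rank row.
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        # Clear column c in every other row (XOR = addition over F_2).
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

# Restrict a check matrix H to the erased columns e:
H = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=np.int64)
e = [0, 1, 2]            # erased positions (illustrative)
H_e = H[:, e]
# Number of distinct syndromes of errors supported on e: 2^rank(H_e)
print(2 ** gf2_rank(H_e))  # -> 8
```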
◮ BUT for sparse matrices, this bound is below the capacity
◮ Typically: He is an r × np matrix (the erasure keeps about pn of the n columns)
◮ When np = r, the square matrix He has almost full rank
◮ BUT for a sparse matrix H, there are αn null rows in He
◮ Similarly, there are βn identical rows of weight 1 ...
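Why null rows appear: a row of a sparse H with weight m survives in He only if at least one of its m positions is erased, so about r(1 − p)^m rows of He are all-zero on average. A quick Monte Carlo sketch; the parameters and function name are illustrative assumptions:

```python
import random

def null_row_fraction(n: int, m: int, p: float, trials: int = 2000) -> float:
    """Estimate the fraction of weight-m rows that become null rows of He.

    A row of H is a null row of He (the restriction of H to the erased
    columns) exactly when none of its m nonzero positions is erased,
    which happens with probability (1 - p)**m per row.
    """
    null = 0
    for _ in range(trials):
        support = random.sample(range(n), m)  # the m nonzero positions
        if not any(random.random() < p for _ in support):
            null += 1
    return null / trials

random.seed(0)
est = null_row_fraction(n=1000, m=4, p=0.3)
print(est)  # close to (1 - 0.3)**4 ≈ 0.24
```

So a sparse He is rank-deficient by roughly r(1 − p)^m even in the square regime np = r, which is what pushes the achievable rate of sparse codes below the capacity bound.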
◮ It is a CSS(2, 4) code
◮ An erasure is problematic iff it …
◮ What is the probability Pp of an …
◮ For large graphs:
◮ locally look like G(m)
◮ constant rate R
◮ sparse of type (2, m)
◮ 1/(m − 1) ≤ pc(G(m))
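The lower bound on the percolation threshold follows a standard path-counting (Peierls-type) argument, sketched here under the assumption that G(m) is m-regular:

```latex
% At most m(m-1)^{t-1} self-avoiding paths of length t leave a fixed vertex
% of an m-regular graph. With each edge open independently with probability p:
\[
  \mathbb{E}\bigl[\#\{\text{open self-avoiding paths of length } t\}\bigr]
    \;\le\; m(m-1)^{t-1} p^{t}
    \;=\; \tfrac{m}{m-1}\bigl((m-1)p\bigr)^{t}
    \;\xrightarrow[t\to\infty]{}\; 0
  \quad\text{when } p < \tfrac{1}{m-1},
\]
% so below 1/(m-1) no infinite open cluster occurs, giving
\[
  \tfrac{1}{m-1} \;\le\; p_c\bigl(G(m)\bigr).
\]
```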
◮ Similar for CSS(ℓ, m) codes using hypergraphs
◮ Similar for stabilizer (ℓ, m) codes
◮ With the depolarizing channel?
◮ Do stabilizer codes surpass CSS codes?
◮ What is exactly pc(G(m))?