

Communicating with Errors

Someone sends you a message: “As mmbrof teGreek commniand art of n oft oranzins thsis hihly offesive.” As you can see, parts of the message have been lost. How can we transmit messages so that the receiver can recover the original message if there are errors?

Today: Use polynomials to share secrets and correct errors.


Review of Polynomials

◮ “d+1 distinct points uniquely determine a degree ≤ d polynomial.”
◮ From the d+1 points we can find an interpolating polynomial via Lagrange interpolation (or linear algebra).
◮ The results about polynomials hold over fields.

Why do we use finite fields such as Z/pZ (p prime)?

◮ Computations are fast.
◮ Computations are precise; no need for floating-point arithmetic.
◮ As a result, finite fields are reliable.
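As a small sketch (not from the slides), here is Lagrange interpolation over Z/pZ: given d+1 points, it evaluates the unique degree ≤ d polynomial through them at any x, using modular inverses for the divisions.

```python
# Sketch: Lagrange interpolation over Z/pZ. d+1 points uniquely
# determine a degree <= d polynomial; we evaluate it at any x.

def interpolate(points, x, p):
    """Value at x of the unique degree <= len(points)-1 polynomial
    through `points`, working mod the prime p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        # Division mod p uses the multiplicative inverse (p is prime).
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

# Example: P(x) = 3x^2 + x + 4 over GF(7), sampled at 3 points.
p = 7
pts = [(x, (3 * x * x + x + 4) % p) for x in range(3)]
print(interpolate(pts, 5, p))  # P(5) = 84 mod 7 = 0
```

Note that all arithmetic stays exact: there is no floating point anywhere, which is exactly the "precise and reliable" point above.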


Nuclear Bombs

Think about the password for America’s nuclear bombs.

◮ “No one man should have all that power.” – Kanye West

For safety, we want to require k government officials to agree before the nuclear bomb password is revealed.

◮ That is, if k government officials come together, they can access the password.
◮ But if k−1 or fewer officials come together, they cannot access the password.

In fact, we will design something stronger.

◮ If k−1 officials come together, they know nothing about the password.


Shamir’s Secret Sharing Scheme

Work in GF(p).

1. Encode the secret s as a_0.
2. Pick a_1, ..., a_{k−1} uniformly at random in {0, 1, ..., p−1}. This defines a polynomial P(x) := a_{k−1}x^{k−1} + ··· + a_1x + a_0.
3. Give the ith government official the share (i, P(i)).

Correctness: If any k officials come together, they can interpolate to find the polynomial P, then evaluate P(0) = s.

◮ Any k officials together know the secret.

No information: If k−1 officials come together, there are p possible polynomials that go through their k−1 shares, one for each possible value of the secret.

◮ But this is the same as the number of possible secrets.
◮ The k−1 officials discover nothing new.
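The three steps above can be sketched in a few lines; this is a minimal illustration with a toy prime, not a hardened implementation.

```python
import random

# Sketch of the scheme: share a secret s so that any k shares recover
# it. p is a prime larger than the secret and the number of officials.

def make_shares(s, k, n, p):
    """Return n shares (i, P(i)) of a random degree <= k-1 polynomial
    with constant term s."""
    coeffs = [s] + [random.randrange(p) for _ in range(k - 1)]
    def P(x):
        return sum(c * pow(x, e, p) for e, c in enumerate(coeffs)) % p
    return [(i, P(i)) for i in range(1, n + 1)]

def recover(shares, p):
    """Interpolate P from k shares and return the secret P(0)."""
    s = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (0 - xj) % p
                den = den * (xi - xj) % p
        s = (s + yi * num * pow(den, -1, p)) % p
    return s

p, secret = 101, 42
shares = make_shares(secret, k=3, n=5, p=p)
print(recover(shares[:3], p))  # any 3 of the 5 shares give 42
```

Any subset of 3 shares works, and (by the counting argument above) any 2 shares are consistent with every possible secret.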


Implementation of Secret Sharing

How large must the prime p be?

◮ Larger than the number of people involved.
◮ Larger than the secret.

If the secret s has n bits, then the secret can be as large as 2^n − 1. So we need p > 2^n. The arithmetic is done on numbers with log p = O(n) bits. The runtime is polynomial in the number of bits of the secret and the number of people, i.e., the scheme is efficient.
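As a concrete illustration of the size requirement (my example, not from the slides): for secrets of up to 60 bits, one convenient known prime exceeding 2^60 is the Mersenne prime 2^61 − 1.

```python
# Sketch of the size requirement: for an n-bit secret shared among N
# officials we need a prime p with p > 2^n and p > N. The Mersenne
# prime 2^61 - 1 (a known prime) works for any secret of up to 60 bits.
n_bits, n_people = 60, 1000
p = 2**61 - 1                 # known Mersenne prime
assert p > 2**n_bits          # every n-bit secret is a valid field element
assert p > n_people           # every official gets a distinct point i
print(p.bit_length())         # 61: arithmetic is on log p = O(n)-bit numbers
```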


Sending Packets

You want to send a long message.

◮ In Internet communication, the message is divided up into smaller chunks called packets.
◮ So say you want to send n packets, m_0, m_1, ..., m_{n−1}.
◮ In information theory, we say that you send the packets across a channel.
◮ What happens if the channel is imperfect?
◮ First model: when you use the channel, it can drop any k of your packets.

Can we still communicate our message?


Reed-Solomon Codes

Encode the packets m_0, m_1, ..., m_{n−1} as values of a polynomial: P(0), P(1), ..., P(n−1). What is deg P? At most n−1. Remember: n points determine a degree ≤ n−1 polynomial. Then send (0, P(0)), (1, P(1)), ..., (n+k−1, P(n+k−1)) across the channel.

◮ Note: if the channel drops packets, the receiver knows which packets are dropped.

Property of polynomials: if we receive any n of the packets, we can interpolate to recover P, and hence the message. So if the channel drops at most k packets, we are safe.
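The erasure scheme above can be sketched end to end; this toy example uses GF(13), n = 3 packets, and k = 2 extra evaluations.

```python
# Sketch of the erasure scheme over GF(p): the packets are P(0..n-1),
# we transmit P(0..n+k-1), and any n surviving points recover P.

def lagrange_eval(points, x, p):
    """Value at x of the unique degree <= len(points)-1 polynomial
    through `points`, mod the prime p."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

p = 13
packets = [5, 0, 11]                   # n = 3 message packets
n, k = len(packets), 2
msg_pts = list(enumerate(packets))     # P interpolates (i, m_i)
sent = [(x, lagrange_eval(msg_pts, x, p)) for x in range(n + k)]

received = [sent[0], sent[2], sent[4]]  # channel dropped k = 2 packets
recovered = [lagrange_eval(received, i, p) for i in range(n)]
print(recovered)  # [5, 0, 11]
```

Any n of the n+k transmitted points would do: they all lie on the same degree ≤ n−1 polynomial, so interpolation from any n of them gives back P.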


Alternative Encoding

The message has packets m_0, m_1, ..., m_{n−1}. Instead of encoding the message as values of the polynomial, we can encode it as the coefficients of the polynomial: P(x) = m_{n−1}x^{n−1} + ··· + m_1x + m_0. Then send (0, P(0)), (1, P(1)), ..., (n+k−1, P(n+k−1)) as before.


Corruptions

Now you receive the following message: “As d memkIrOcf tee GVwek tommcnity and X pZrt cf lneTof KVesZ oAcwWizytzoOs this ir higLly offensOvz.” Instead of letters being erased, letters are now corrupted. These are called general errors. Can we still recover the original message? In fact, Reed-Solomon codes still do the job!


A Broader Look at Coding

Suppose we want to send a length-n message m_0, m_1, ..., m_{n−1}, where each packet is in Z/pZ, so the message (m_0, m_1, ..., m_{n−1}) is in (Z/pZ)^n. We want to encode the message into (Z/pZ)^{n+k}. The encoded message is longer, because redundancy is what lets us recover from errors.

Let Encode : (Z/pZ)^n → (Z/pZ)^{n+k} be the encoding function, and let C := range(Encode) be the set of codewords. A codeword is a possible encoded message. We want the codewords to be far apart: well-separated codewords mean we can tolerate errors.


Hamming Distance

Given two strings s_1 and s_2, the Hamming distance d(s_1, s_2) is the number of places where they differ.

Properties:

◮ d(s_1, s_2) ≥ 0, with equality if and only if s_1 = s_2.
◮ Symmetry: d(s_1, s_2) = d(s_2, s_1).
◮ Triangle inequality: d(s_1, s_3) ≤ d(s_1, s_2) + d(s_2, s_3).

Proof of the triangle inequality:

◮ Start with s_1.
◮ Change d(s_1, s_2) symbols to get s_2.
◮ Change d(s_2, s_3) symbols to get s_3.
◮ So s_1 and s_3 differ in at most d(s_1, s_2) + d(s_2, s_3) symbols.
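The definition and the three properties fit in a few lines; the example strings here are my own.

```python
# Sketch: Hamming distance between equal-length strings, with the
# three properties checked on an example.

def hamming(s1, s2):
    """Number of positions where s1 and s2 differ."""
    assert len(s1) == len(s2)
    return sum(a != b for a, b in zip(s1, s2))

a, b, c = "karolin", "kathrin", "kerstin"
print(hamming(a, b))                                   # 3
assert hamming(a, a) == 0                              # d = 0 iff equal
assert hamming(a, b) == hamming(b, a)                  # symmetry
assert hamming(a, c) <= hamming(a, b) + hamming(b, c)  # triangle inequality
```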


Hamming Distance & Error Correction

Theorem: A code can recover k general errors if the minimum Hamming distance between any two distinct codewords is at least 2k+1.

Proof:

◮ Suppose we send the codeword c_original.
◮ It gets corrupted to a string s with d(c_original, s) ≤ k.
◮ Consider a different codeword c_other.
◮ Then d(c_original, c_other) ≤ d(c_original, s) + d(s, c_other).
◮ So 2k+1 ≤ k + d(s, c_other).
◮ So d(s, c_other) ≥ k+1.
◮ So s is closer to c_original than to any other codeword.
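A toy illustration of the theorem (my example, not from the slides): the two codewords "00000" and "11111" are at distance 5 = 2k+1 with k = 2, so nearest-codeword decoding corrects any 2 corruptions.

```python
# Sketch: a minimum-distance-5 code corrects k = 2 errors by
# decoding to the nearest codeword.

def hamming(s1, s2):
    return sum(a != b for a, b in zip(s1, s2))

codewords = ["00000", "11111"]

def decode(received):
    """Return the codeword closest to the received string."""
    return min(codewords, key=lambda c: hamming(c, received))

print(decode("01010"))  # two bits of "00000" flipped -> "00000"
print(decode("10111"))  # one bit of "11111" flipped  -> "11111"
```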


Reed-Solomon Codes Revisited

Given a message m = (m_0, m_1, ..., m_{n−1})...

◮ Define P_m(x) = m_{n−1}x^{n−1} + ··· + m_1x + m_0.
◮ Send the codeword (0, P_m(0)), (1, P_m(1)), ..., (n+2k−1, P_m(n+2k−1)).

What are all the possible codewords? All possible sets of n+2k points that come from a polynomial of degree ≤ n−1.


Hamming Distance of Reed-Solomon Codes

Codewords: all possible sets of n+2k points that come from a polynomial of degree ≤ n−1. What is the minimum Hamming distance between distinct codewords? Consider two codewords:

c_1: (0, P_1(0)), (1, P_1(1)), ..., (n+2k−1, P_1(n+2k−1))
c_2: (0, P_2(0)), (1, P_2(1)), ..., (n+2k−1, P_2(n+2k−1))

If d(c_1, c_2) ≤ 2k, then P_1 and P_2 agree on at least n points. But n points uniquely determine a degree ≤ n−1 polynomial, so P_1 = P_2. Hence distinct codewords are at distance at least 2k+1: the minimum Hamming distance is 2k+1.

slide-102
SLIDE 102

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords.

slide-103
SLIDE 103

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors.

slide-104
SLIDE 104

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

slide-105
SLIDE 105

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1).

slide-106
SLIDE 106

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1). ◮ Define P(x) = mn−1xn−1 +···+m1x +m0.

slide-107
SLIDE 107

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1). ◮ Define P(x) = mn−1xn−1 +···+m1x +m0. ◮ Send codeword

(0,P(0)),(1,P(1)),...,(n +2k −1,P(n +2k −1)).

slide-108
SLIDE 108

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1). ◮ Define P(x) = mn−1xn−1 +···+m1x +m0. ◮ Send codeword

(0,P(0)),(1,P(1)),...,(n +2k −1,P(n +2k −1)).

◮ The codeword suffers at most k corruptions.

slide-109
SLIDE 109

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1). ◮ Define P(x) = mn−1xn−1 +···+m1x +m0. ◮ Send codeword

(0,P(0)),(1,P(1)),...,(n +2k −1,P(n +2k −1)).

◮ The codeword suffers at most k corruptions. ◮ Receiver decodes by searching for the closest codeword to the received message.

slide-110
SLIDE 110

General Errors with Reed-Solomon Codes

Reed-Solomon with n +2k packets gives a code with minimum Hamming distance ≥ 2k +1 between distinct codewords. By our theorem, this can correct k general errors. What is the decoding algorithm?

◮ Take your message m = (m0,m1,...,mn−1). ◮ Define P(x) = mn−1xn−1 +···+m1x +m0. ◮ Send codeword

(0,P(0)),(1,P(1)),...,(n +2k −1,P(n +2k −1)).

◮ The codeword suffers at most k corruptions. ◮ Receiver decodes by searching for the closest codeword to the received message. Can we avoid exhaustive search?

slide-111
SLIDE 111

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes.

slide-112
SLIDE 112

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets.

slide-113
SLIDE 113

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets. These packets are potentially corrupted!

slide-114
SLIDE 114

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets. These packets are potentially corrupted! Suppose there are errors at the locations e1,...,ek.

slide-115
SLIDE 115

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets. These packets are potentially corrupted! Suppose there are errors at the locations e1,...,ek. The error locator polynomial is: E(x) = (x −e1)···(x −ek).

slide-116
SLIDE 116

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets. These packets are potentially corrupted! Suppose there are errors at the locations e1,...,ek. The error locator polynomial is: E(x) = (x −e1)···(x −ek). The roots of E are the locations of the errors.

slide-117
SLIDE 117

Berlekamp-Welch Decoding Algorithm

Berlekamp and Welch patented an efficient decoding algorithm for Reed-Solomon codes. Let R0,R1,...,Rn+2k−1 be the received packets. These packets are potentially corrupted! Suppose there are errors at the locations e1,...,ek. The error locator polynomial is: E(x) = (x −e1)···(x −ek). The roots of E are the locations of the errors. Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i).

slide-118
SLIDE 118

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i).

slide-119
SLIDE 119

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

slide-120
SLIDE 120

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error.

slide-121
SLIDE 121

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0.

slide-122
SLIDE 122

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0. Both sides are zero.

slide-123
SLIDE 123

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0. Both sides are zero. ◮ Case 2: i is not an error.

slide-124
SLIDE 124

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0. Both sides are zero. ◮ Case 2: i is not an error. Then, P(i) = Ri.

slide-125
SLIDE 125

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0. Both sides are zero. ◮ Case 2: i is not an error. Then, P(i) = Ri.

Multiplying by the error locator polynomial “nullifies” the corruptions.

slide-126
SLIDE 126

Berlekamp-Welch Lemma

Key Lemma: For all i = 0,1,...,n +2k −1, we have: P(i)E(i) = RiE(i). Proof.

◮ Case 1: i is an error. Then, E(i) = 0. Both sides are zero. ◮ Case 2: i is not an error. Then, P(i) = Ri.

Multiplying by the error locator polynomial “nullifies” the corruptions. Problem: We do not know the locations of the errors.
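The "nullifying" effect can be seen concretely. A toy demonstration, assuming p = 101, n = 4, k = 2, and an arbitrary corruption pattern (all chosen for illustration):

```python
# Toy demonstration of the key lemma: multiplying by E nullifies corruptions.
# p = 101, n = 4, k = 2 and the error positions are illustrative assumptions.
p, n, k = 101, 4, 2

P = lambda x: (x ** 3 + 5) % p            # encoding polynomial, deg <= n-1
R = [P(i) for i in range(n + 2 * k)]      # transmitted values...
errs = [2, 6]
for e in errs:                            # ...with k = 2 corruptions
    R[e] = (R[e] + 17) % p

E = lambda x: (x - errs[0]) * (x - errs[1]) % p   # error locator polynomial

for i in range(n + 2 * k):
    # Case 1: i in errs  -> E(i) = 0, so both sides vanish.
    # Case 2: otherwise  -> P(i) = R[i], so both sides agree.
    assert P(i) * E(i) % p == R[i] * E(i) % p
print("P(i)E(i) = R_i E(i) holds at all", n + 2 * k, "points")
```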

slide-127
SLIDE 127

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1.

slide-128
SLIDE 128

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1.

slide-129
SLIDE 129

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one!

slide-130
SLIDE 130

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one! Define Q(x) := P(x)E(x).

slide-131
SLIDE 131

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one! Define Q(x) := P(x)E(x). Then, degQ = degE +degP = n +k −1.

slide-132
SLIDE 132

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one! Define Q(x) := P(x)E(x). Then, degQ = degE +degP = n +k −1. So Q(x) = bn+k−1xn+k−1 +···+b1x +b0 for n +k unknown coefficients b0,b1,...,bn+k−1.

slide-133
SLIDE 133

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one! Define Q(x) := P(x)E(x). Then, degQ = degE +degP = n +k −1. So Q(x) = bn+k−1xn+k−1 +···+b1x +b0 for n +k unknown coefficients b0,b1,...,bn+k−1. We have n +2k unknown coefficients.

slide-134
SLIDE 134

Berlekamp-Welch Decoding

P(i)E(i) = RiE(i) for i = 0,1,...,n +2k −1. Since degE = k, we can write E(x) = xk +ak−1xk−1 +···+a1x +a0 for k unknown coefficients a0,a1,...,ak−1. Note: Leading coefficient is one! Define Q(x) := P(x)E(x). Then, degQ = degE +degP = n +k −1. So Q(x) = bn+k−1xn+k−1 +···+b1x +b0 for n +k unknown coefficients b0,b1,...,bn+k−1. We have n +2k unknown coefficients. But we also have n +2k equations!

slide-135
SLIDE 135

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1.

slide-136
SLIDE 136

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1. Equations, again: bn+k−1in+k−1 +···+b1i +b0 = Ri(ik +ak−1ik−1 +···+a1i +a0).

slide-137
SLIDE 137

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1. Equations, again: bn+k−1in+k−1 +···+b1i +b0 = Ri(ik +ak−1ik−1 +···+a1i +a0). The equations are linear in the unknown variables.

slide-138
SLIDE 138

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1. Equations, again: bn+k−1in+k−1 +···+b1i +b0 = Ri(ik +ak−1ik−1 +···+a1i +a0). The equations are linear in the unknown variables. Solve the linear system using methods from linear algebra.

slide-139
SLIDE 139

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1. Equations, again: bn+k−1in+k−1 +···+b1i +b0 = Ri(ik +ak−1ik−1 +···+a1i +a0). The equations are linear in the unknown variables. Solve the linear system using methods from linear algebra. Gaussian elimination.

slide-140
SLIDE 140

The Equations Are Linear

Unknowns: a0,a1,...,ak−1,b0,b1,...,bn+k−1. Equations: Q(i) = RiE(i) for i = 0,1,...,n +2k −1. Equations, again: bn+k−1in+k−1 +···+b1i +b0 = Ri(ik +ak−1ik−1 +···+a1i +a0). The equations are linear in the unknown variables. Solve the linear system using methods from linear algebra. Gaussian elimination. Note: Linear algebra works over fields.

slide-141
SLIDE 141

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q.

slide-142
SLIDE 142

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover:

slide-143
SLIDE 143

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover: P(x) = Q(x)/E(x).

slide-144
SLIDE 144

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover: P(x) = Q(x)/E(x). We have recovered the polynomial P, and therefore the message.

slide-145
SLIDE 145

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover: P(x) = Q(x)/E(x). We have recovered the polynomial P, and therefore the message. The Berlekamp-Welch decoding algorithm is more efficient.

slide-146
SLIDE 146

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover: P(x) = Q(x)/E(x). We have recovered the polynomial P, and therefore the message. The Berlekamp-Welch decoding algorithm is more efficient.

◮ Solving a linear system is much faster than exhaustive search of codewords.

slide-147
SLIDE 147

Recovering the Encoding Polynomial

Solve a linear system, recover the coefficients of E and Q. Note that Q(x) = P(x)E(x), so we recover: P(x) = Q(x)/E(x). We have recovered the polynomial P, and therefore the message. The Berlekamp-Welch decoding algorithm is more efficient.

◮ Solving a linear system is much faster than exhaustive search of codewords.

◮ With more tricks, we can reduce the linear system (with n +2k equations) to a system with only k equations.
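The whole pipeline can be sketched end to end: build the n +2k linear equations, solve them by Gaussian elimination over the field, then divide Q by E. This is a minimal illustration; the parameters (p = 101, n = 4, k = 2), the corruption pattern, and every helper name here are assumptions, not taken from the slides:

```python
# Minimal end-to-end Berlekamp-Welch sketch over GF(p). p = 101, n = 4,
# k = 2, the corruptions, and the helper names are illustrative assumptions.
p, n, k = 101, 4, 2
N = n + 2 * k

def solve_mod_p(A, b):
    """Gaussian elimination mod p: return one solution x of A x = b."""
    m = [row[:] + [rhs] for row, rhs in zip(A, b)]
    cols = len(A[0])
    piv_cols, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue                      # free column: leave variable at 0
        m[r], m[piv] = m[piv], m[r]
        inv = pow(m[r][c], p - 2, p)      # inverse via Fermat's little theorem
        m[r] = [v * inv % p for v in m[r]]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c]
                m[i] = [(v - f * w) % p for v, w in zip(m[i], m[r])]
        piv_cols.append(c)
        r += 1
    x = [0] * cols
    for row, c in enumerate(piv_cols):
        x[c] = m[row][cols]
    return x

# Received packets: P(x) = x^3 + 5 with k = 2 corrupted positions.
P = lambda x: (x ** 3 + 5) % p
R = [P(i) for i in range(N)]
R[2] = (R[2] + 17) % p
R[6] = (R[6] + 40) % p

# Unknowns: a0,...,a_{k-1} (E is monic), then b0,...,b_{n+k-1} (Q).
# Equation at i:  sum_j b_j i^j  -  R_i sum_j a_j i^j  =  R_i i^k.
A, rhs = [], []
for i in range(N):
    row = [(-R[i]) * pow(i, j, p) % p for j in range(k)]   # a-columns
    row += [pow(i, j, p) for j in range(n + k)]            # b-columns
    A.append(row)
    rhs.append(R[i] * pow(i, k, p) % p)

sol = solve_mod_p(A, rhs)
E = sol[:k] + [1]          # coefficients of E, low to high, monic
Q = sol[k:]                # coefficients of Q, low to high

def poly_div(num, den):
    """Exact polynomial division mod p (coefficients low to high)."""
    num, q = num[:], [0] * (len(num) - len(den) + 1)
    inv = pow(den[-1], p - 2, p)
    for d in range(len(q) - 1, -1, -1):
        q[d] = num[d + len(den) - 1] * inv % p
        for j, c in enumerate(den):
            num[d + j] = (num[d + j] - q[d] * c) % p
    return q

message = poly_div(Q, E)[:n]
print("recovered message:", message)     # recovers [5, 0, 0, 1]
```

By the uniqueness theorem on the next slides, any solution of the system gives Q(x)/E(x) = P(x), so the division at the end is exact.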

slide-148
SLIDE 148

Unique Solution?

Is the solution to the linear system unique?

slide-149
SLIDE 149

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors.

slide-150
SLIDE 150

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q?

slide-151
SLIDE 151

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x).

slide-152
SLIDE 152

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

slide-153
SLIDE 153

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system.

slide-154
SLIDE 154

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system. So, Q(i) = RiE(i) for n +2k values of i.

slide-155
SLIDE 155

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system. So, Q(i) = RiE(i) for n +2k values of i.

◮ There are at most k errors, so Ri = P(i) for at least n +k values of i.

slide-156
SLIDE 156

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system. So, Q(i) = RiE(i) for n +2k values of i.

◮ There are at most k errors, so Ri = P(i) for at least n +k values of i.

◮ So Q(i) = P(i)E(i) for n +k values of i.

slide-157
SLIDE 157

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system. So, Q(i) = RiE(i) for n +2k values of i.

◮ There are at most k errors, so Ri = P(i) for at least n +k values of i.

◮ So Q(i) = P(i)E(i) for n +k values of i. But these are degree ≤ n +k −1 polynomials.

slide-158
SLIDE 158

Unique Solution?

Is the solution to the linear system unique? Not if there are fewer than k errors. Can we solve for the “wrong” E and Q? Theorem: Any solutions E and Q have Q(x)/E(x) = P(x). Proof.

◮ Let (E,Q) be any solution to the linear system. So, Q(i) = RiE(i) for n +2k values of i.

◮ There are at most k errors, so Ri = P(i) for at least n +k values of i.

◮ So Q(i) = P(i)E(i) for n +k values of i. But these are degree ≤ n +k −1 polynomials.

◮ So Q(x) = P(x)E(x) for all x.

slide-159
SLIDE 159

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1.

slide-160
SLIDE 160

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets?

slide-161
SLIDE 161

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

slide-162
SLIDE 162

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

◮ We will learn counting soon.

slide-163
SLIDE 163

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

◮ We will learn counting soon. ◮ There are (n+2k choose k) subsets of size k of R0,R1,...,Rn+2k−1.
slide-164
SLIDE 164

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

◮ We will learn counting soon. ◮ There are (n+2k choose k) subsets of size k of R0,R1,...,Rn+2k−1.

◮ For each such subset, try fitting a polynomial of degree ≤ n −1 to the remaining n +k points.

slide-165
SLIDE 165

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

◮ We will learn counting soon. ◮ There are (n+2k choose k) subsets of size k of R0,R1,...,Rn+2k−1.

◮ For each such subset, try fitting a polynomial of degree ≤ n −1 to the remaining n +k points.

◮ It is possible to bound: (n+2k choose k) ≥ ((n+2k)/k)^k.

slide-166
SLIDE 166

Comparison with Brute Force

Receive R0,R1,...,Rn+2k−1. Where are the corrupted packets? Brute force approach:

◮ We will learn counting soon. ◮ There are (n+2k choose k) subsets of size k of R0,R1,...,Rn+2k−1.

◮ For each such subset, try fitting a polynomial of degree ≤ n −1 to the remaining n +k points.

◮ It is possible to bound: (n+2k choose k) ≥ ((n+2k)/k)^k. The complexity grows exponentially with k.
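The bound is easy to sanity-check numerically with Python's math.comb (the sample values of n and k are arbitrary):

```python
# Check (n+2k choose k) >= ((n+2k)/k)^k for a few sample parameters,
# and watch the number of candidate error sets blow up as k grows.
from math import comb

for n, k in [(4, 2), (16, 4), (64, 8)]:
    subsets = comb(n + 2 * k, k)
    assert subsets >= ((n + 2 * k) / k) ** k
    print(f"n={n}, k={k}: {subsets} candidate error sets")
```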

slide-167
SLIDE 167

Summary

◮ Two ways to encode information in a polynomial: as values, or as coefficients.

◮ Secret sharing: Encode the secret in a polynomial and hand out “shares” of the polynomial to officials.

◮ If any k officials come together, they know the secret, but k −1 officials know nothing.

◮ If the minimum Hamming distance between distinct codewords is 2k +1, then we can correct k general errors.

◮ Reed-Solomon codes: Interpolate a polynomial through n packets and send values of the polynomial.

◮ To correct k erasure errors, send n +k packets. ◮ To correct k general errors, send n +2k packets.

◮ The error locator polynomial E has a root at every error. ◮ Berlekamp-Welch decoding: Q(x) = P(x)E(x); solve for the coefficients of E and Q using Q(i) = RiE(i).