SLIDE 1
3. Examples

Show Correctness, Recursion and Recurrences [References to literature at the examples]

3.1 Ancient Egyptian Multiplication

Ancient Egyptian Multiplication – an example of how to show the correctness of algorithms.

Ancient Egyptian Multiplication³

Compute 11 · 9:

    11   9        9  11
    22   4       18   5
    44   2       36   2
    88   1       72   1
    ------       ------
        99           99

1. Double on the left, integer division by 2 on the right.
2. Even number on the right ⇒ eliminate the row.
3. Add the remaining rows on the left.

³Also known as Russian multiplication.
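The three steps can be transcribed literally, building the table row by row (a sketch; `egyptian_table` is an illustrative name, not from the slides):

```cpp
#include <utility>
#include <vector>

// Build the two columns explicitly, as in the table above: double on
// the left, integer-divide by 2 on the right, then add the left
// entries of all rows whose right entry is odd.
int egyptian_table(int left, int right) {
    std::vector<std::pair<int, int>> rows;
    while (right >= 1) {
        rows.push_back({left, right});
        left *= 2;    // double on the left
        right /= 2;   // integer division by 2 on the right
    }
    int sum = 0;
    for (const auto& row : rows)
        if (row.second % 2 != 0)  // odd on the right: keep the row
            sum += row.first;
    return sum;
}
```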

Advantages

- Short description, easy to grasp.
- Efficient to implement on a computer: doubling = left shift, division by 2 = right shift.

Example:

    left shift:  9 = 01001₂ → 10010₂ = 18
    right shift: 9 = 01001₂ → 00100₂ = 4

SLIDE 2

Questions

- For which kinds of inputs does the algorithm deliver a correct result (in finite time)?
- How do you prove its correctness?
- What is a good measure of efficiency?

The Essentials

If b > 1, a ∈ ℤ, then:

a · b = 2a · b/2            if b is even,
        a + 2a · (b−1)/2    if b is odd.

Termination

a · b = a                   if b = 1,
        2a · b/2            if b is even,
        a + 2a · (b−1)/2    if b is odd.

Recursively, Functional

f(a, b) = a                    if b = 1,
          f(2a, b/2)           if b is even,
          a + f(2a, (b−1)/2)   if b is odd.

SLIDE 3

Implemented as a function

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        if (b == 1)
            return a;
        else if (b % 2 == 0)
            return f(2*a, b/2);
        else
            return a + f(2*a, (b-1)/2);
    }

Correctness: Mathematical Proof

f(a, b) = a                    if b = 1,
          f(2a, b/2)           if b is even,
          a + f(2a, (b−1)/2)   if b is odd.

Remaining to show: f(a, b) = a · b for a ∈ ℤ, b ∈ ℕ⁺.

Correctness: Mathematical Proof by Induction

Let a ∈ ℤ. To show: f(a, b) = a · b for all b ∈ ℕ⁺.

Base clause: f(a, 1) = a = a · 1.

Hypothesis: f(a, b′) = a · b′ for all 0 < b′ ≤ b.

Step: from f(a, b′) = a · b′ for all 0 < b′ ≤ b, show f(a, b + 1) = a · (b + 1):

f(a, b + 1) = f(2a, (b+1)/2)    if b odd; since 0 < (b+1)/2 ≤ b, by the hypothesis this is 2a · (b+1)/2 = a · (b + 1).
f(a, b + 1) = a + f(2a, b/2)    if b > 0 even; since 0 < b/2 < b, by the hypothesis this is a + 2a · b/2 = a + a · b = a · (b + 1).

[Code Transformations: End Recursion]

The recursion can be written as an end recursion:

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        if (b == 1) return a;
        else if (b % 2 == 0) return f(2*a, b/2);
        else return a + f(2*a, (b-1)/2);
    }

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        if (b == 1) return a;
        int z = 0;
        if (b % 2 != 0) { --b; z = a; }
        return z + f(2*a, b/2);
    }

SLIDE 4

[Code Transformation: End Recursion ⇒ Iteration]

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        if (b == 1) return a;
        int z = 0;
        if (b % 2 != 0) { --b; z = a; }
        return z + f(2*a, b/2);
    }

    int f(int a, int b) {
        int res = 0;
        while (b != 1) {
            int z = 0;
            if (b % 2 != 0) { --b; z = a; }
            res += z;
            a *= 2;   // new a
            b /= 2;   // new b
        }
        res += a;     // base case b = 1
        return res;
    }

[Code Transformation: Simplify]

    int f(int a, int b) {
        int res = 0;
        while (b != 1) {
            int z = 0;
            if (b % 2 != 0) { --b; z = a; }
            res += z;
            a *= 2;
            b /= 2;
        }
        res += a;
        return res;
    }

Accumulate directly into res; pull part of the division into the loop:

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        int res = 0;
        while (b > 0) {
            if (b % 2 != 0)
                res += a;
            a *= 2;
            b /= 2;
        }
        return res;
    }

Correctness: Reasoning using Invariants!

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        int res = 0;
        // here: x = a*b + res
        while (b > 0) {
            // if here: x = a*b + res ...
            if (b % 2 != 0) {
                res += a;
                --b;
            }
            // here: x = a*b + res, and b is even
            a *= 2;
            b /= 2;
            // ... then also here: x = a*b + res
        }
        // here: x = a*b + res and b = 0, hence res = x
        return res;
    }

Let x := a · b denote the value to be computed. The marked assertions show that x = a · b + res holds at each point; on exit b = 0, hence res = x.
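The invariant reasoning can be made executable by asserting x = a · b + res at the marked points (a hypothetical instrumented variant; `f_checked` is not from the slides):

```cpp
#include <cassert>

// Instrumented version: the loop invariant x == a*b + res is checked
// at the start and at the end of every iteration.
int f_checked(int a, int b) {
    const int x = a * b;   // the target value, fixed up front
    int res = 0;
    while (b > 0) {
        assert(x == a * b + res);   // invariant holds on entry
        if (b % 2 != 0) { res += a; --b; }
        a *= 2;
        b /= 2;
        assert(x == a * b + res);   // ... and is re-established
    }
    assert(b == 0 && res == x);     // on exit: res == a*b
    return res;
}
```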

Conclusion

- The expression a · b + res is an invariant.
- The values of a, b, res change, but the invariant remains essentially unchanged: it is only temporarily invalidated by some statement and then re-established.
- If such short statement sequences are considered atomic, the value indeed remains invariant.
- In particular, the loop contains an invariant, called a loop invariant, and it plays the role of the induction step in induction proofs.
- Invariants are obviously powerful tools for proofs!

SLIDE 5

[Further Simplification]

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        int res = 0;
        while (b > 0) {
            if (b % 2 != 0) { res += a; --b; }
            a *= 2;
            b /= 2;
        }
        return res;
    }

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        int res = 0;
        while (b > 0) {
            res += a * (b % 2);
            a *= 2;
            b /= 2;
        }
        return res;
    }

[Analysis]

    // pre: b > 0
    // post: return a*b
    int f(int a, int b) {
        int res = 0;
        while (b > 0) {
            res += a * (b % 2);
            a *= 2;
            b /= 2;
        }
        return res;
    }

Ancient Egyptian Multiplication corresponds to the school method with radix 2:

        1 0 0 1 × 1 0 1 1
    ---------------------
              1 0 0 1        (9)
            1 0 0 1 0        (18)
        1 0 0 1 0 0 0        (72)
    ---------------------
        1 1 0 0 0 1 1        (99)

Efficiency

Question: how long does a multiplication of a and b take?

Measure for efficiency:
- Total number of fundamental operations: doubling, division by 2, shift, test for "even", addition.
- In the recursive and iterative code: at most 6 operations per call or iteration, respectively.

Essential criterion:
- number of recursive calls, or
- number of iterations (in the iterative case).

b/2ⁿ ≤ 1 holds for n ≥ log₂ b. Consequently not more than 6⌈log₂ b⌉ fundamental operations.

3.2 Fast Integer Multiplication

[Ottman/Widmayer, Kap. 1.2.3]


SLIDE 6

Example 2: Multiplication of Large Numbers

Primary school:

      a b   c d
      6 2 · 3 7
    -----------
          1 4      (d · b)
        4 2        (d · a)
          6        (c · b)
      1 8          (c · a)
    -----------
    = 2 2 9 4

2 · 2 = 4 single-digit multiplications. ⇒ Multiplication of two n-digit numbers: n² single-digit multiplications.

Observation

ab · cd = (10 · a + b) · (10 · c + d)
        = 100 · a · c + 10 · a · c + 10 · b · d + b · d + 10 · (a − b) · (d − c)

Improvement?

      6 2 · 3 7
    -----------
        1 4      (d · b)
      1 4        (d · b)
      1 6        ((a − b) · (d − c))
      1 8        (c · a)
    1 8          (c · a)
    -----------
    = 2 2 9 4

→ 3 single-digit multiplications.

Large Numbers

6237 · 5898 = (62 · 100 + 37) · (58 · 100 + 98)   with a′ = 62, b′ = 37, c′ = 58, d′ = 98.

Recursive / inductive application: compute a′ · c′, a′ · d′, b′ · c′ and b′ · d′ as shown above.

→ 3 · 3 = 9 instead of 16 single-digit multiplications.

SLIDE 7

Generalization

Assumption: two numbers with n digits each, n = 2^k for some k.

(10^(n/2) · a + b) · (10^(n/2) · c + d)
  = 10^n · a · c + 10^(n/2) · a · c + 10^(n/2) · b · d + b · d + 10^(n/2) · (a − b) · (d − c)

Recursive application of this formula: algorithm by Karatsuba and Ofman (1962).
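Under the simplifying assumption that the factors fit into machine integers, the recursive scheme can be sketched as follows (`karatsuba` is an illustrative name; real implementations operate on digit arrays):

```cpp
// Karatsuba-style multiplication: split each factor at a power of 10
// and form three recursive products instead of four.
long long karatsuba(long long x, long long y) {
    if (x < 0) return -karatsuba(-x, y);
    if (y < 0) return -karatsuba(x, -y);
    if (x < 10 || y < 10)          // (near-)single digits: multiply directly
        return x * y;
    // split point: half = 10^(m), with m half the digit count of the larger factor
    long long larger = (x > y) ? x : y;
    int digits = 0;
    for (long long t = larger; t > 0; t /= 10) ++digits;
    long long half = 1;
    for (int i = 0; i < digits / 2; ++i) half *= 10;
    long long a = x / half, b = x % half;   // x = a*half + b
    long long c = y / half, d = y % half;   // y = c*half + d
    long long ac  = karatsuba(a, c);
    long long bd  = karatsuba(b, d);
    long long mid = karatsuba(a - b, d - c);   // may be negative
    // ac + bd + (a-b)(d-c) = ad + bc, so this equals x*y
    return half * half * ac + half * (ac + bd + mid) + bd;
}
```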

Analysis

M(n): number of single-digit multiplications.

Recursive application of the algorithm from above ⇒ recurrence:

M(2^k) = 1               if k = 0,
         3 · M(2^(k−1))  if k > 0.

Iterative Substitution

Iteratively substitute the recursion formula in order to guess a solution:

M(2^k) = 3 · M(2^(k−1)) = 3 · 3 · M(2^(k−2)) = 3² · M(2^(k−2)) = ...
       = 3^k · M(2^0) = 3^k.

Proof: Induction

Hypothesis H: M(2^k) = 3^k.

Base clause (k = 0): M(2^0) = 3^0 = 1.

Induction step (k → k + 1):

M(2^(k+1)) = 3 · M(2^k)         (by definition)
           = 3 · 3^k = 3^(k+1)  (by hypothesis H).
SLIDE 8

Comparison

Traditionally: n² single-digit multiplications.

Karatsuba/Ofman:

M(n) = 3^(log₂ n) = (2^(log₂ 3))^(log₂ n) = 2^(log₂ 3 · log₂ n) = n^(log₂ 3) ≈ n^1.58.

Example: numbers with 1000 digits: 1000² / 1000^1.58 ≈ 18.

Best possible algorithm?

- We only know the upper bound n^(log₂ 3).
- There are (for large n) practically relevant algorithms that are faster. Example: the Schönhage-Strassen algorithm (1971), based on the fast Fourier transformation, with running time O(n log n · log log n).
- The best upper bound is not known.
- Lower bound: n. Each digit has to be considered at least once.

Appendix: Asymptotics with Additions and Shifts

- For each multiplication of two n-digit numbers we should also take into account a constant number of additions, subtractions and shifts.
- Additions, subtractions and shifts of n-digit numbers cost O(n).
- Therefore the asymptotic running time is determined (with some c > 1) by the recurrence

T(n) = 3 · T(n/2) + c · n   if n > 1,
       1                    otherwise.

Appendix: Asymptotics with Additions and Shifts

Assumption: n = 2^k, k > 0.

T(2^k) = 3 · T(2^(k−1)) + c · 2^k
       = 3 · (3 · T(2^(k−2)) + c · 2^(k−1)) + c · 2^k
       = 3 · (3 · (3 · T(2^(k−3)) + c · 2^(k−2)) + c · 2^(k−1)) + c · 2^k
       = 3 · (3 · (... (3 · T(2^(k−k)) + c · 2^1) ...) + c · 2^(k−1)) + c · 2^k
       = 3^k · T(1) + c · 3^(k−1) · 2^1 + c · 3^(k−2) · 2^2 + ... + c · 3^0 · 2^k
       ≤ c · 3^k · (1 + 2/3 + (2/3)² + ... + (2/3)^k)

The geometric series Σ_{i=0}^{k} ρ^i with ρ = 2/3 converges for k → ∞ to 1/(1 − ρ) = 3.

Hence T(2^k) ≤ c · 3^k · 3 ∈ Θ(3^k) = Θ(3^(log₂ n)) = Θ(n^(log₂ 3)).

SLIDE 9

3.3 Maximum Subarray Problem

Algorithm Design – Maximum Subarray Problem [Ottman/Widmayer, Kap. 1.3]
Divide and Conquer [Ottman/Widmayer, Kap. 1.2.2, S. 9; Cormen et al., Kap. 4-4.1]

Algorithm Design

Inductive development of an algorithm: partition into subproblems, use the solutions of the subproblems to find the overall solution.

Goal: development of the asymptotically most efficient (correct) algorithm.

Efficiency in terms of running time costs (# fundamental operations) and/or memory consumption.

Maximum Subarray Problem

Given: an array of n real numbers (a₁, ..., aₙ).
Wanted: an interval [i, j], 1 ≤ i ≤ j ≤ n, with maximal positive sum Σ_{k=i}^{j} a_k.

Example: a = (7, −11, 15, 110, −23, −3, 127, −12, 1)

[Figure: bar chart of a₁, ..., a₉ with the interval maximizing Σ_k a_k highlighted.]

Naive Maximum Subarray Algorithm

Input: A sequence of n numbers (a₁, a₂, ..., aₙ)
Output: I, J such that Σ_{k=I}^{J} a_k is maximal.

    M ← 0; I ← 1; J ← 0
    for i ∈ {1, ..., n} do
        for j ∈ {i, ..., n} do
            m ← Σ_{k=i}^{j} a_k
            if m > M then
                M ← m; I ← i; J ← j
    return I, J

SLIDE 10

Analysis

Theorem: The naive algorithm for the Maximum Subarray problem executes Θ(n³) additions.

Proof:

Σ_{i=1}^{n} Σ_{j=i}^{n} (j − i + 1)
  = Σ_{i=1}^{n} Σ_{j=0}^{n−i} (j + 1)
  = Σ_{i=1}^{n} Σ_{j=1}^{n−i+1} j
  = Σ_{i=1}^{n} (n − i + 1)(n − i + 2) / 2
  = Σ_{i=1}^{n} i · (i + 1) / 2
  = 1/2 · ( Σ_{i=1}^{n} i² + Σ_{i=1}^{n} i )
  = 1/2 · ( n(2n + 1)(n + 1)/6 + n(n + 1)/2 )
  = (n³ + 3n² + 2n) / 6 = Θ(n³).

Observation

Σ_{k=i}^{j} a_k = Σ_{k=1}^{j} a_k − Σ_{k=1}^{i−1} a_k = S_j − S_{i−1}

Prefix sums:

S_i := Σ_{k=1}^{i} a_k.

Maximum Subarray Algorithm with Prefix Sums

Input: A sequence of n numbers (a₁, a₂, ..., aₙ)
Output: I, J such that Σ_{k=I}^{J} a_k is maximal.

    S₀ ← 0
    for i ∈ {1, ..., n} do   // prefix sum
        S_i ← S_{i−1} + a_i
    M ← 0; I ← 1; J ← 0
    for i ∈ {1, ..., n} do
        for j ∈ {i, ..., n} do
            m ← S_j − S_{i−1}
            if m > M then
                M ← m; I ← i; J ← j
    return I, J
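A direct transcription, returning only the maximal sum (a sketch; `max_subarray_prefix` is an illustrative name):

```cpp
#include <algorithm>
#include <vector>

// Prefix-sum variant: precompute S[i] = a_1 + ... + a_i, then every
// interval sum a_i + ... + a_j is S[j] - S[i-1] in O(1).
int max_subarray_prefix(const std::vector<int>& a) {
    int n = (int)a.size();
    std::vector<long long> S(n + 1, 0);
    for (int i = 1; i <= n; ++i)
        S[i] = S[i - 1] + a[i - 1];
    long long M = 0;   // empty interval has sum 0
    for (int i = 1; i <= n; ++i)
        for (int j = i; j <= n; ++j)
            M = std::max(M, S[j] - S[i - 1]);
    return (int)M;
}
```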

Analysis

Theorem: The prefix sum algorithm for the Maximum Subarray problem conducts Θ(n²) additions and subtractions.

Proof:

Σ_{i=1}^{n} 1 + Σ_{i=1}^{n} Σ_{j=i}^{n} 1 = n + Σ_{i=1}^{n} (n − i + 1) = n + Σ_{i=1}^{n} i = Θ(n²).
SLIDE 11

Divide et Impera

Divide and Conquer: divide the problem into subproblems that contribute to the simplified computation of the overall problem.

[Figure: problem P is split into subproblems P₁, P₂ and further into P₁₁, P₁₂, P₂₁, P₂₂; their solutions S₁₁, S₁₂, S₂₁, S₂₂ are combined into S₁, S₂ and finally into the overall solution S.]

Maximum Subarray – Divide

Divide: split the problem into two (roughly) equally sized halves:

(a₁, ..., aₙ) = (a₁, ..., a_⌊n/2⌋, a_⌊n/2⌋+1, ..., aₙ)

Simplifying assumption: n = 2^k for some k ∈ ℕ.

Maximum Subarray – Conquer

If i and j are the indices of a solution ⇒ case by case analysis:

1. Solution in the left half: 1 ≤ i ≤ j ≤ n/2 ⇒ recursion (left half)
2. Solution in the right half: n/2 < i ≤ j ≤ n ⇒ recursion (right half)
3. Solution in the middle: 1 ≤ i ≤ n/2 < j ≤ n ⇒ subsequent observation

[Figure: the three cases (1), (2), (3) over positions 1 ... n/2, n/2 + 1 ... n.]

Maximum Subarray – Observation

Assumption: solution in the middle, 1 ≤ i ≤ n/2 < j ≤ n.

S_max = max_{1≤i≤n/2, n/2<j≤n} Σ_{k=i}^{j} a_k
      = max_{1≤i≤n/2, n/2<j≤n} ( Σ_{k=i}^{n/2} a_k + Σ_{k=n/2+1}^{j} a_k )
      = max_{1≤i≤n/2} Σ_{k=i}^{n/2} a_k + max_{n/2<j≤n} Σ_{k=n/2+1}^{j} a_k
      = max_{1≤i≤n/2} (S_{n/2} − S_{i−1})   (suffix sum)
        + max_{n/2<j≤n} (S_j − S_{n/2})     (prefix sum)

SLIDE 12

Maximum Subarray Divide and Conquer Algorithm

Input: A sequence of n numbers (a₁, a₂, ..., aₙ)
Output: The maximal Σ_{k=i′}^{j′} a_k.

    if n = 1 then
        return max{a₁, 0}
    else
        Divide a = (a₁, ..., aₙ) into A₁ = (a₁, ..., a_{n/2}) and A₂ = (a_{n/2+1}, ..., aₙ)
        Recursively compute the best solution W₁ in A₁
        Recursively compute the best solution W₂ in A₂
        Compute the greatest suffix sum S in A₁
        Compute the greatest prefix sum P in A₂
        Let W₃ ← S + P
        return max{W₁, W₂, W₃}
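The pseudocode can be sketched in C++ as follows (`max_subarray_dc` and the half-open index convention are assumptions of this sketch):

```cpp
#include <algorithm>
#include <vector>

// Divide-and-conquer maximum subarray sum on a[lo..hi) (hi exclusive);
// returns the maximal interval sum, or 0 if all elements are negative.
long long max_subarray_dc(const std::vector<int>& a, int lo, int hi) {
    if (hi - lo == 1)
        return std::max<long long>(a[lo], 0);
    int mid = (lo + hi) / 2;
    long long w1 = max_subarray_dc(a, lo, mid);   // best in left half
    long long w2 = max_subarray_dc(a, mid, hi);   // best in right half
    // greatest suffix sum of the left half
    long long suffix = 0, sum = 0;
    for (int k = mid - 1; k >= lo; --k) {
        sum += a[k];
        suffix = std::max(suffix, sum);
    }
    // greatest prefix sum of the right half
    long long prefix = 0;
    sum = 0;
    for (int k = mid; k < hi; ++k) {
        sum += a[k];
        prefix = std::max(prefix, sum);
    }
    long long w3 = suffix + prefix;               // best across the middle
    return std::max({w1, w2, w3});
}
```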

Analysis

Theorem: The divide and conquer algorithm for the maximum subarray sum problem performs Θ(n log n) additions and comparisons.

Analysis

Input: A sequence of n numbers (a₁, a₂, ..., aₙ)
Output: The maximal Σ_{k=i′}^{j′} a_k.

    if n = 1 then
        return max{a₁, 0}                                            // Θ(1)
    else
        Divide a into A₁ = (a₁, ..., a_{n/2}) and A₂ = (a_{n/2+1}, ..., aₙ)   // Θ(1)
        Recursively compute the best solution W₁ in A₁               // T(n/2)
        Recursively compute the best solution W₂ in A₂               // T(n/2)
        Compute the greatest suffix sum S in A₁                      // Θ(n)
        Compute the greatest prefix sum P in A₂                      // Θ(n)
        Let W₃ ← S + P                                               // Θ(1)
        return max{W₁, W₂, W₃}                                       // Θ(1)

Analysis

Recursion equation:

T(n) = c                    if n = 1,
       2 · T(n/2) + a · n   if n > 1.

SLIDE 13

Analysis

With n = 2^k:

T(k) = c                        if k = 0,
       2 · T(k − 1) + a · 2^k   if k > 0.

Solution:

T(k) = 2^k · c + Σ_{i=0}^{k−1} 2^i · a · 2^(k−i) = c · 2^k + a · k · 2^k = Θ(k · 2^k),

thus

T(n) = Θ(n log n).

Maximum Subarray Sum Problem – Inductively

Assumption: the maximal value M_{i−1} of the subarray sum is known for (a₁, ..., a_{i−1}) (1 < i ≤ n).

[Figure: prefix a₁, ..., a_{i−1} with best sum M_{i−1} and best right-border sum R_{i−1}; a_i is the next element scanned.]

Scanning a_i generates at most a better interval at the right bound (prefix sum):

R_{i−1} ⇒ R_i = max{R_{i−1} + a_i, 0}

Inductive Maximum Subarray Algorithm

Input: A sequence of n numbers (a₁, a₂, ..., aₙ).
Output: max{0, max_{i,j} Σ_{k=i}^{j} a_k}.

    M ← 0
    R ← 0
    for i = 1 ... n do
        R ← R + a_i
        if R < 0 then R ← 0
        if R > M then M ← R
    return M

Analysis

Theorem: The inductive algorithm for the Maximum Subarray problem performs Θ(n) additions and comparisons.

SLIDE 14

Complexity of the problem?

Can we improve beyond Θ(n)? Every correct algorithm for the Maximum Subarray Sum problem must consider each element. Assumption: the algorithm does not consider a_i.

1. The algorithm provides a solution including a_i. Repeat the algorithm with a_i so small that the solution must not contain the point; since the algorithm never looks at a_i, it returns the same, now incorrect, answer.
2. The algorithm provides a solution not including a_i. Repeat the algorithm with a_i so large that the solution must contain the point; again the algorithm cannot notice the change.

Complexity of the Maximum Subarray Sum Problem

Theorem: The Maximum Subarray Sum problem has complexity Θ(n).

Proof: The inductive algorithm has asymptotic execution time O(n). Every algorithm has execution time Ω(n). Thus the complexity of the problem is Ω(n) ∩ O(n) = Θ(n).

3.4 Appendix

Derivation of some mathematical formulas

Sums

Σ_{i=0}^{n} i² = n · (n + 1) · (2n + 1) / 6

Trick (telescoping):

Σ_{i=1}^{n} (i³ − (i − 1)³) = Σ_{i=0}^{n} i³ − Σ_{i=0}^{n−1} i³ = n³

Σ_{i=1}^{n} (i³ − (i − 1)³) = Σ_{i=1}^{n} (i³ − i³ + 3i² − 3i + 1) = n − (3/2) · n · (n + 1) + 3 · Σ_{i=0}^{n} i²

⇒ Σ_{i=0}^{n} i² = (2n³ + 3n² + n) / 6 ∈ Θ(n³)

Can easily be generalized: Σ_{i=1}^{n} i^k ∈ Θ(n^(k+1)).

SLIDE 15

Geometric Series

Claim (for ρ ≠ 1):

Σ_{i=0}^{n} ρ^i = (1 − ρ^(n+1)) / (1 − ρ)

( Σ_{i=0}^{n} ρ^i ) · (1 − ρ) = Σ_{i=0}^{n} ρ^i − Σ_{i=0}^{n} ρ^(i+1) = Σ_{i=0}^{n} ρ^i − Σ_{i=1}^{n+1} ρ^i = ρ⁰ − ρ^(n+1) = 1 − ρ^(n+1).

For 0 ≤ ρ < 1:

Σ_{i=0}^{∞} ρ^i = 1 / (1 − ρ)