  1. 15-150 Fall 2020 Lecture 7 Stephen Brookes

  2. last time
  • Sorting a list of integers
  • insertion sort: O(n²) work for lists of length n
  • merge sort: O(n log n) work for lists of length n
  • Specifications and proofs
  • helper functions that really help

  3. principles
  • Every function needs a spec
  • Every spec needs a proof
  • Recursive functions need inductive proofs
  • Pick an appropriate method...
  • Choose helper functions wisely!
  The proof of msort was easy, because of split and merge.

  4. the joy of specs
  • The proof for msort relied only on the specification proven for split and the specification proven for merge
  • We can replace split and merge by any functions that satisfy the specifications, and the msort proof is still valid!

  5. even more joy
  • The work analysis for msort relied on the correctness of split and merge and their work
  • We can replace split and merge by any functions that satisfy the specifications and have the same work, and get a version of msort with the same work as before (asymptotically)!

  6. advantages
  • These joyful comments are intended to convince you of the advantages of compositional reasoning!
  • We can reason about correctness, and analyze efficiency, in a syntax-directed way.
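  To make the replaceability claim concrete, here is a sketch of an alternative split (hypothetical code, not taken from the lecture), assuming the spec is roughly "split L evaluates to a pair (A, B) of lists whose lengths differ by at most one and whose elements, taken together, are exactly those of L". Any such function with O(n) work could be dropped in without disturbing the msort proof or its work analysis.

      (* splitHalves: take the first half and the second half of the list,
         using the Basis Library functions List.take and List.drop.
         Each traversal is O(n) work; the two halves together contain
         exactly the elements of L, with lengths differing by at most one. *)
      fun splitHalves L =
          let
            val half = length L div 2
          in
            (List.take (L, half), List.drop (L, half))
          end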

  7. so far
  • We proved correctness of isort and showed that the work for isort L is O(n²)
  • We proved correctness of msort and showed that the work for msort L is O(n log n)

      fun msort [ ] = [ ]
        | msort [x] = [x]
        | msort L =
            let val (A, B) = split L
            in merge (msort A, msort B) end

  W_split(n) = O(n)
  W_merge(n) = O(n)
  W_msort(n) = O(n) + 2 W_msort(n div 2)
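  For reference, here is a hedged reconstruction of split and merge consistent with the cost bounds quoted above; the lecture's exact code may differ, but any versions meeting the same specs with O(n) work give the same recurrence for msort.

      (* split: deal the elements of L alternately into two lists whose
         lengths differ by at most one.  One pass over L, so O(n) work. *)
      fun split [ ] = ([ ], [ ])
        | split [x] = ([x], [ ])
        | split (x :: y :: L) =
            let val (A, B) = split L
            in (x :: A, y :: B) end

      (* merge: combine two sorted lists into one sorted list.
         Each call does constant work before recursing on a shorter
         input, so O(n) work for total input length n. *)
      fun merge ([ ], B) = B
        | merge (A, [ ]) = A
        | merge (x :: A, y :: B) =
            if x <= y then x :: merge (A, y :: B)
            else y :: merge (x :: A, B)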

  9. can we do better?
  Q: Would parallel processing be beneficial?
  A: Find the span for isort L and msort L
  • If the span is asymptotically better than the work, there’s a potential speed-up
  • add the work for sub-expressions
  • max the span for independent sub-expressions
  • add the span for dependent sub-expressions
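  As a small illustration of those rules (a sketch only: plain Standard ML evaluates sequentially, and par below is a hypothetical stand-in for a parallel pair construct), consider evaluating two independent sub-expressions as a pair.

      (* par: hypothetical parallel-pair combinator.  This sequential
         stand-in just runs f() and then g(), but for cost analysis we
         treat the two calls as independent:
           work of par (f, g) = work of f() + work of g() + O(1)   (add)
           span of par (f, g) = max of the two spans + O(1)        (max)
         A dependent composition such as g (f x) adds the spans instead. *)
      fun par (f, g) = (f (), g ())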

  18. isort
  • The list ops in ins are sequential

      fun ins (x, [ ]) = [x]
        | ins (x, y::L) = if y < x then y :: ins (x, L) else x :: y :: L

  W_ins(0) = 1
  W_ins(n) = W_ins(n-1) + 1          W_ins(n) is O(n)
  S_ins(0) = 1
  S_ins(n) = S_ins(n-1) + 1          S_ins(n) is O(n)

  • isort can’t be parallelized - code is dependent

      fun isort [ ] = [ ]
        | isort (x::L) = ins (x, isort L)

  W_isort(0) = 1
  W_isort(n) = W_isort(n-1) + W_ins(n-1) + 1 = O(n) + W_isort(n-1)          W_isort(n) is O(n²)
  S_isort(0) = 1
  S_isort(n) = S_isort(n-1) + S_ins(n-1) + 1 = O(n) + S_isort(n-1)          S_isort(n) is O(n²)
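  Unrolling the isort recurrence (in the same style as the span derivation for msort later in the deck) shows where the quadratic bound comes from. Up to constant factors, simplify the recurrence to W(n) = n + W(n-1):

      W(n) = n + W(n-1)
           = n + (n-1) + W(n-2)
           = n + (n-1) + … + 1 + W(0)
           = n(n+1)/2 + 1

  so W_isort(n) is O(n²). The span recurrence is identical, so S_isort(n) is also O(n²): with no independent sub-expressions to evaluate, the span is no better than the work, and parallelism cannot help.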

  19. msort

      fun msort [ ] = [ ]
        | msort [x] = [x]
        | msort L =
            let val (A, B) = split L
            in merge (msort A, msort B) end

  • The list ops in split, merge are sequential
  • But we could use parallel evaluation for the recursive calls msort A and msort B
  How would this affect runtime?
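  Here is a sketch of what parallel evaluation of the recursive calls might look like, reusing the hypothetical par combinator and the split and merge sketches from above (plain SML still runs this sequentially; only the cost accounting changes).

      (* msortPar: mergesort with the two recursive calls packaged as
         independent tasks.  Under the work/span rules, their spans are
         combined with max, while split and merge (sequential list code)
         still add their spans. *)
      fun msortPar [ ] = [ ]
        | msortPar [x] = [x]
        | msortPar L =
            let
              val (A, B) = split L
              val (AS, BS) = par (fn () => msortPar A, fn () => msortPar B)
            in
              merge (AS, BS)
            end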

  26. span of msort

      fun msort [ ] = [ ]
        | msort [x] = [x]
        | msort L =
            let val (A, B) = split L
            in merge (msort A, msort B) end

  First split, then the (parallel) recursive calls, then merge:

  S_msort(0) = 1
  S_msort(1) = 1
  S_msort(n) = S_split(n) + S_msort(n div 2) + S_merge(n) + 1   for n > 1
             = O(n) + S_msort(n div 2)

  S_msort(n) is O(n)

  27. Deriving the span for msort
  Simplify the recurrence to S(n) = n + S(n div 2):

      S(n) = n + S(n div 2)
           = n + n/2 + S(n div 4)
           = n + n/2 + n/4 + S(n div 8)
           = n + n/2 + n/4 + … + n/2^k        where k = log₂ n
           = n (1 + 1/2 + 1/4 + … + 1/2^k)
           ≤ 2n

  This S has the same asymptotic behavior as S_msort, so S_msort(n) is O(n).

  29. summary
  • msort(L) has O(n log n) work, O(n) span
  • So the potential speed-up factor from parallel evaluation is O(log n)
  … in principle, we can speed up mergesort on lists by a factor of log n
  but this would require O(n) parallel processors… expensive!
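  The O(log n) figure is just the ratio of work to span (a rough estimate, hiding constant factors): the span bounds the parallel running time from below no matter how many processors are used, so

      speed-up ≤ W_msort(n) / S_msort(n) = O(n log n) / O(n) = O(log n)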

  31. summary
  • msort(L) has O(n log n) work, O(n) span
  • So the potential speed-up factor from parallel evaluation is O(log n)
  … in principle, we can speed up mergesort on lists by a factor of log n
  To do any better, we need a different data structure…
