Big-Oh — PowerPoint PPT Presentation


slide-1
SLIDE 1

Big-Oh

1


slide-6
SLIDE 6

asymptotic growth rate or order

compare two functions, but… ignore constant factors and small inputs
example: f(n) = 1 000 000 · n²; g(n) = 2ⁿ

g grows faster — eventually much bigger than f

[graph: f(n) and g(n) plotted for n up to ~35, values up to 2·10⁹ — goal: predict behavior here (large n); ignore behavior here (small n)]

2

slide-7
SLIDE 7

preview: what functions?

example: comparing sorting algorithms
runtime = f(size of input)

e.g. seconds to sort = f(number of elements in list) e.g. # operations to sort = f(number of elements in list)

space = f(size of input)

e.g. number of bytes of memory = f(number of elements in list)

3


slide-9
SLIDE 9

theory, not empirical

yes, you can make guesses about big-oh behavior from measurements; but, no, graphs ≠ big-oh comparison

what happens further to the right? might not have tested big enough

want to write down formula example: summing a list of n items:

exactly n addition operations
assume each one takes k units of time
runtime = f(n) = kn

4
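The counting model above can be made concrete. This is a minimal sketch, not code from the slides: it instruments a sum over an array so the number of addition operations is visible, matching the slide's claim that the count is exactly n.

```java
// Sketch of the slide's model: summing a list of n items performs n
// addition operations, so runtime f(n) = k*n for some per-addition cost k.
public class SumCost {
    // returns the number of addition operations performed while summing
    static long additionsToSum(int[] items) {
        long sum = 0;
        long adds = 0;
        for (int x : items) {
            sum += x;   // one addition per element...
            adds++;     // ...counted here for illustration
        }
        return adds;
    }

    public static void main(String[] args) {
        System.out.println(additionsToSum(new int[1000]));  // prints 1000 = n
    }
}
```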

slide-16
SLIDE 16

recall: comparing list data structures

List benchmark (from intro slides) w/ 100000 elements

Data structure    Total      Insert   Search    Delete
Vector             87.818 s   0.004    63.202    24.612
ArrayList          87.192 s   0.010    62.470    24.712
LinkedList        263.776 s   0.006   196.550    67.439
HashSet             0.029 s   0.022     0.003     0.004
TreeSet             0.134 s   0.110     0.017     0.007
Vector, sorted      2.642 s   0.009     0.024     2.609

some runtimes get really big as size gets large…

  • others seem to remain manageable

problem: growth rate of runtimes with list size
for Vector (unsorted), ArrayList, LinkedList… # operations grows like n, where n is list size
for HashSet… # operations per search/remove is constant (sort of)
for TreeSet, sorted Vector… # operations per search grows like log(n), where n is list size

5
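The gap in the table can be reproduced in miniature. This is a hedged sketch of how such a search benchmark might be structured — it is not the original benchmark code, and the absolute numbers will differ by machine — but failed lookups make the linear-scan cost of a list visible next to a hash set:

```java
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of a search benchmark in the spirit of the slide's table:
// time `contains` for values NOT in the collection, so an ArrayList must
// scan all n elements per query while a HashSet does a constant-ish lookup.
public class SearchBench {
    static long timeMisses(Collection<Integer> c, int n, int queries) {
        long t0 = System.nanoTime();
        for (int q = 0; q < queries; q++)
            c.contains(n + q);  // guaranteed miss: full scan for a list
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        int n = 100_000;
        List<Integer> list = new ArrayList<>();
        Set<Integer> set = new HashSet<>();
        for (int i = 0; i < n; i++) { list.add(i); set.add(i); }
        System.out.println("ArrayList misses: " + timeMisses(list, n, 200) + " ns");
        System.out.println("HashSet misses:   " + timeMisses(set, n, 200) + " ns");
    }
}
```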


slide-18
SLIDE 18

why asymptotic analysis?

“can my program work when data gets big?”

website gets thousands of new users? text editor opening a 1 MB book? a 1 GB log file? music player sees a 1 000 song collection? 50 000? text search on a 100 petabyte copy of the text of the web?

if asymptotic analysis says “no”

can find out before implementing the algorithm
won’t be fixed by, e.g., buying a faster CPU

6


slide-20
SLIDE 20

sets of functions

define sets of functions based on an example f
Ω(f): grow no slower than f (“≥ f”)
O(f): grow no faster than f (“≤ f”)
Θ(f) = Ω(f) ∩ O(f): grow as fast as f (“= f”)

examples:
n³ ∈ Ω(n²)
100n ∈ O(n²)
10n² + n ∈ Θ(n²) — ignore constant factor, etc.
(and 10n² + n ∈ O(n²) and 10n² + n ∈ Ω(n²))

7


slide-23
SLIDE 23

what are we measuring

f(n) = worst case running time
n = input size — as a positive integer

will compare f to another function g(n)
example: f(n) ∈ O(g(n)) (or f ∈ O(g))
informally: “f is big-oh of g”

example: f(n) ∉ Ω(g(n)) (or f ∉ Ω(g))
informally: “f is not big-omega of g”

8


slide-26
SLIDE 26

worst case?

this class: almost always worst cases
intuition: detect if the program will ever take “forever”
example: iterating through an array until we find a value

best case: look at one value, it’s the one we want
worst case: look at every value, none of them are what we want

f(n) is run time of slowest input of size n

9
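The best/worst case split above can be sketched in code. An instrumented linear search (an illustration, not the slides' code) shows the best case touching one element and the worst case touching all n:

```java
// Sketch of the slide's example: linear search through an array, where
// f(n) is the worst-case number of elements examined on inputs of size n.
public class LinearSearch {
    static int comparisons;  // instrumentation for illustration only

    static boolean find(int[] a, int target) {
        comparisons = 0;
        for (int x : a) {
            comparisons++;
            if (x == target) return true;  // best case: found immediately
        }
        return false;  // worst case: examined every value, none matched
    }

    public static void main(String[] args) {
        int[] a = {4, 8, 15, 16, 23, 42};
        find(a, 4);   System.out.println(comparisons);  // best case: 1
        find(a, 99);  System.out.println(comparisons);  // worst case: 6 = n
    }
}
```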


slide-29
SLIDE 29

formal definitions

f(n) ∈ O(g(n)): there exists c > 0 and n₀ > 0 such that for all n > n₀, f(n) ≤ c · g(n)
f(n) ∈ Ω(g(n)): there exists c > 0 and n₀ > 0 such that for all n > n₀, f(n) ≥ c · g(n)
f(n) ∈ Θ(g(n)): f(n) ∈ O(g(n)) and f(n) ∈ Ω(g(n))

10


slide-31
SLIDE 31

formal defjnition example (1)

f(n) ∈ O(g(n)) if and only if there exists c > 0 and n₀ > 0 such that f(n) ≤ c · g(n) for all n > n₀

Is n ∈ O(n²)?
choose c = 1, n₀ = 2
for n > 2 = n₀: n ≤ c · n² = n²
Yes!

11


slide-34
SLIDE 34

formal definition example (2)

f(n) ∈ O(g(n)) if and only if there exists c > 0 and n₀ > 0 such that f(n) ≤ c · g(n) for all n > n₀

Is 10n ∈ O(n)?
choose c = 11, n₀ = 2
for n > 2 = n₀: f(n) = 10n ≤ c · g(n) = 11n
Yes!

don’t need to choose smallest possible c

12
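A chosen witness (c, n₀) can be spot-checked mechanically. This is an illustrative sketch, and note the caveat in the comments: passing a finite check is evidence, not a proof — only the algebra on the slides covers all n > n₀.

```java
import java.util.function.LongUnaryOperator;

// Hedged illustration: numerically spot-check a big-oh witness (c, n0)
// over a finite range of n. This does NOT prove f ∈ O(g); the slide's
// proof covers all n > n0, while this only samples n0 < n <= nMax.
public class BigOhCheck {
    static boolean holdsUpTo(LongUnaryOperator f, LongUnaryOperator g,
                             long c, long n0, long nMax) {
        for (long n = n0 + 1; n <= nMax; n++)
            if (f.applyAsLong(n) > c * g.applyAsLong(n))
                return false;  // inequality f(n) <= c*g(n) violated
        return true;
    }

    public static void main(String[] args) {
        // the slides' witnesses: n <= 1*n^2 and 10n <= 11n, both with n0 = 2
        System.out.println(holdsUpTo(n -> n, n -> n * n, 1, 2, 10_000));   // true
        System.out.println(holdsUpTo(n -> 10 * n, n -> n, 11, 2, 10_000)); // true
    }
}
```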

slide-35
SLIDE 35

negating formal definitions

f ∈ O(g): there exists c, n₀ > 0 so for all n > n₀: f(n) ≤ c·g(n)
f ∉ O(g):
there does not exist c, n₀ > 0 so for all n > n₀: f(n) ≤ c·g(n)
equivalently: for all c, n₀, there exists n > n₀: f(n) > c·g(n)

13


slide-38
SLIDE 38

formal definition example (3)

f(n) ∈ O(g(n)) if and only if there exists c > 0 and n₀ > 0 such that f(n) ≤ c · g(n) for all n > n₀

Is n² ∈ O(n)? no — consider any c, n₀ > 0
consider n_bad = (c + 100)(n₀ + 100) > n₀
n_bad² = (c + 100)²(n₀ + 100)² > c(c + 100)(n₀ + 100) = c·n_bad
so can’t find c, n₀ that satisfy the definition (i.e. f(n_bad) = n_bad² ≤ c · g(n_bad) = c·n_bad fails)

alternative: n_bad = max{c + 100, n₀ + 1} > n₀

14

slide-39
SLIDE 39

formal definition example (4)

f(n) ∈ O(g(n)) if and only if there exists c > 0 and n₀ > 0 such that f(n) ≤ c · g(n) for all n > n₀

consider: f(n) = 100 · n² + n, g(n) = n²:
choose c = 200, n₀ = 2
observe for n > 2: 100n² + n ≤ 101n²
for n > 2 = n₀: f(n) = 100n² + n ≤ 101n² ≤ c · g(n) = 200n²

15

slide-40
SLIDE 40

big-oh proofs generally

if proving the yes case:
look at the inequality
choose a large enough c and n₀ that it’s definitely true
don’t bother finding the smallest c, n₀ that work

if proving the no case:
game: given c, n₀, find a counterexample
general idea: choose n > n₀ using a formula based on c
show that this n never satisfies the inequality
don’t bother showing it’s true for all n′ > n
don’t bother finding the smallest n that works

16

slide-41
SLIDE 41

aside: forall/exists

∀n > 0: for all n > 0 ∃n < 0: there exists an n < 0

17

slide-42
SLIDE 42

definition consequences

If f ∈ O(h) and g ∉ O(h), which are true?

  • 1. ∀m > 0, f(m) < g(m)

for all m, f is less than g

  • 2. ∃m > 0, f(m) < g(m)

there exists an m, so f is less than g

  • 3. ∃m0 > 0, ∀m > m0, f(m) < g(m)

there exists an m0, so for all m larger, f is less than g

  • 4. 1 and 2
  • 5. 2 and 3
  • 6. 1 and 2 and 3

18



slide-45
SLIDE 45

f ∈ O(h), g ∉ O(h) ⇏ ∀m: f(m) < g(m)

counterexample — f(n) = 5n; g(n) = n³; h(n) = n²

f ∈ O(h): 5n ≤ cn² for all n > n₀ with c = 6, n₀ = 2
g ∉ O(h): n³ ≤ cn²? use n ≈ cn₀ as counterexample
m = 2: f(m) = 10 > g(m) = 8
intuition: big-oh ignores behavior for small n

20

slide-46
SLIDE 46

n³ ∉ O(n²)

big-Oh definition requires: n³ ≤ cn² for all n > n₀
choose any c > 1 and n₀ > 1, then n = cn₀ is a counterexample:
n³ = cn₀ · (cn₀)² > c · (cn₀)² = cn² (since cn₀ > c when n₀ > 1)
contradicting the definition

(and for c < 1, use n = n₀ + 1, etc.)

21

slide-47
SLIDE 47

f ∈ O(h), g ∉ O(h) ⇒ ∃m: f(m) < g(m)

intuition: should be true for ‘big enough’ m
assume the definitions:

f ∈ O(h): ∀n > n₀: f(n) ≤ c·h(n) (for some n₀, c > 0)
g ∉ O(h): ∀n₀, c > 0: ∃n > n₀: g(n) > c·h(n)

take f’s n₀, c; use the n that must exist for g (from the definition): f(n) ≤ c·h(n) < g(n)

22

slide-48
SLIDE 48

f ∈ O(h), g ∉ O(h) ⇒? ∃m₀ ∀m > m₀: f(m) < g(m)

intuitively, seems so — g must grow faster than f — for big m:

f(m) < c₁ · h(m)
g(m) > c · h(m) (for any c, infinitely often)

but some corner case counterexamples:

f(n) = n
g(n) = { 1 (n odd); n² (n even) }
h(n) = n

true with additional restriction:

f, g monotonic (g(n) ≤ g(n + 1), etc.)

23

slide-49
SLIDE 49

function hierarchy

1   4n   n + log(n)   3n² + n   100n² + n^1.9   n^2.5   5n³ + n²

O(n²) — upper bound (“≤”)
Ω(n²) — lower bound (“≥”)
Θ(n²) — O and Ω overlap — tight bound (“=”)
o(n²) (“little-oh”) — strict upper bound: f ∈ o(g) means f(n) < c · g(n) (all c); (versus O(g): f(n) ≤ c · g(n))
ω(n²) (“little-omega”) — strict lower bound: f ∈ ω(g) means f(n) > c · g(n) (all c); (versus Ω(g): f(n) ≥ c · g(n))

24


slide-59
SLIDE 59

big-Oh variants

O(f) — asymptotically less than or equal to f
o(f) — asymptotically less than f
Ω(f) — asymptotically greater than or equal to f
ω(f) — asymptotically greater than f
Θ(f) — asymptotically equal to f

25

slide-60
SLIDE 60

limit-based defjnition

lim sup_{n→∞} f(n)/g(n) = X if and only if…

X < ∞: f ∈ O(g)
X > 0: f ∈ Ω(g)
0 < X < ∞: f ∈ Θ(g)
X = 0: f ∈ o(g)
X = ∞ (and for lim inf): f ∈ ω(g)

26


slide-62
SLIDE 62

lim sup?

lim sup f(n)/g(n) — “limit superior”

equal to the normal lim if it is defined
only care about the upper bound
e.g. the n² in f(n) = { 1 (n odd); n² (n even) }

usually glossed over (including in Bloomfield’s/Floryan’s slides from prior semesters)

27

slide-63
SLIDE 63

some big-Oh properties (1)

for O and Ω and Θ:
O(f + g) = O(max(f, g))
f ∈ O(g) and g ∈ O(h) ⇒ f ∈ O(h) (also holds for o (little-oh), ω)
f ∈ O(f)

28

slide-64
SLIDE 64

some big-Oh properties (2)

f ∈ O(g) ↔ g ∈ Ω(f)
f ∈ Θ(g) ↔ g ∈ Θ(f)
(symmetry does not hold for O, Ω, etc.)

Θ is an equivalence relation
reflexive, transitive, etc.

29

slide-65
SLIDE 65

a note on =

informally, sometimes people write 5n2 = O(n2) not very precise — O is a set of functions

30

slide-66
SLIDE 66

selected asymptotic relationships

for k > 0, l > 0, c > 1, ε > 0:
n^k ∈ o(c^(n^l)) (a polynomial is always smaller than an exponential)
n^k ∈ o(n^k log n) (adding a log makes something bigger)
log_k(n) ∈ Θ(log_l(n)) (all log bases are the same)
n^k + c·n^(k−1) ∈ Θ(n^k) (only the polynomial’s degree matters)

31

slide-67
SLIDE 67

a note on logs

log_k(n) = log_l(n) / log_l(k) = c · log_l(n)
therefore Θ(log_k(n)) = Θ(log_l(n))
…so it doesn’t matter which base of log we mean

32
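The change-of-base identity is easy to confirm numerically. A small sketch (illustrative, not from the slides): the ratio log₂(n)/log₁₀(n) comes out the same for every n, namely the constant log₂(10) ≈ 3.32.

```java
// Numeric check of the slide's identity: log_k(n) = log_l(n) / log_l(k),
// so logs in any two bases differ only by a constant factor.
public class LogBases {
    // log base `base` of x, via natural logs (the change-of-base formula)
    static double logBase(double base, double x) {
        return Math.log(x) / Math.log(base);
    }

    public static void main(String[] args) {
        for (int n : new int[]{10, 1000, 1_000_000}) {
            double ratio = logBase(2, n) / logBase(10, n);
            // the ratio is the constant log_2(10) ≈ 3.3219 for every n
            System.out.printf("n = %d, ratio = %.6f%n", n, ratio);
        }
    }
}
```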

slide-68
SLIDE 68

some names

Θ(1) — constant (some fixed maximum)

read kth element of array

Θ(log n) — logarithmic

binary search a sorted array

Θ(n) — linear

searching an unsorted array

Θ(n log n) — log-linear

sorting an array by comparing elements

Θ(n²) — quadratic
Θ(n³) — cubic
Θ(2ⁿ), Θ(cⁿ) — exponential

33

slide-69
SLIDE 69–73

[image-only slides (deck pages 34–38); no text extracted]


slide-77
SLIDE 77

big-oh rules of thumb (1)

for (int i = 0; i < N; ++i) foo();

runtime ∈ Θ(N × (runtime of foo))
time to increment i? “constant factor” ignored by Θ

for (int i = 0; i < N; ++i) for (int j = 0; j < M; ++j) bar();

runtime ∈ Θ(N × (M × runtime of bar))
nested loops — work inside out: find the time of the inner loop (“foo”), multiply by the iterations of the outer loop

for (int i = 0; i < N; ++i) for (int j = 0; j < i; ++j) foo();

runtime ∈ Θ(∑_{i=0}^{N−1} i × runtime of foo) = Θ(N² · runtime of foo)
at least N/2 iterations with at least N/2 calls to foo ⇒ N/2 · N/2 = N²/4 calls; also ≤ N · N = N² calls ⇒ # calls to foo is Θ(N²)

39
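The triangle-loop count can be verified directly. A small sketch (illustrative; `foo()` is replaced by a counter): the inner body runs exactly N(N−1)/2 times, which sits between the N²/4 lower bound and N² upper bound argued above.

```java
// Sketch of the slides' triangle loop: count the calls that foo() would
// receive, and compare against the slide's N^2/4 and N^2 bounds.
public class TriangleLoop {
    static long callsToFoo(int n) {
        long calls = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; j < i; j++)
                calls++;          // stands in for foo()
        return calls;             // exactly n(n-1)/2 = sum of i for i < n
    }

    public static void main(String[] args) {
        int n = 1000;
        long calls = callsToFoo(n);
        System.out.println(calls);                          // 499500 = n(n-1)/2
        System.out.println(calls <= (long) n * n);          // upper bound: true
        System.out.println(4 * calls >= (long) n * n - n);  // ~N^2/4 lower bound: true
    }
}
```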

slide-78
SLIDE 78

big-oh rules of thumb (2)

foo(); bar();

runtime = runtime of foo + runtime of bar ∈ Θ(max{foo runtime, bar runtime})

if (quux()) { foo(); } else { bar(); }

runtime ≈ runtime of quux + max(runtime of foo, runtime of bar) (max because we measure the worst-case)

40

slide-79
SLIDE 79

Θ(1): constant time

constant time (Θ(1) time) — runtime does not depend on input:
accessing an array element
linked list insert/delete (at a known end)
getting a vector’s size
…

41

slide-80
SLIDE 80

is that really constant time?

is getting a vector’s size really constant time? the vector stores its size, but, for, e.g., N = 2^10000, the size itself is huge

our usual assumption:
treat “sensible” integer arithmetic as constant time (anything we’d keep in a long or smaller variable in practice)

can do other analyses, but they’re uncommon
e.g. “bit complexity” — number of single-bit operations

42

slide-81
SLIDE 81

Θ(log n): logarithmic time

binary search of sorted array

search space cut in half each iteration — ⌈log2 N⌉ iterations

balanced tree search/insert

height of tree (somehow) guaranteed to be Θ(log N)

43
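The halving argument can be observed directly. This is an instrumented binary search sketch (illustrative, not the slides' code): on a million-element sorted array, a worst-case miss takes about log₂(10⁶) ≈ 20 iterations, never a million.

```java
// Sketch: binary search over a sorted array, instrumented to count loop
// iterations. Each iteration halves the search space, so the count is
// about ceil(log2(N)) rather than N.
public class BinSearch {
    static int iterations;  // instrumentation for illustration only

    static int search(int[] sorted, int target) {
        iterations = 0;
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            iterations++;
            int mid = lo + (hi - lo) / 2;   // avoids int overflow of (lo+hi)/2
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) lo = mid + 1;
            else hi = mid - 1;
        }
        return -1;  // not found
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        search(a, -1);  // worst case: target absent
        System.out.println(iterations);  // about log2(1e6) ≈ 20, not 1e6
    }
}
```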

slide-82
SLIDE 82

Θ(n): linear

constant # operations/element:
printing a list
search in an unsorted array
search in a linked list
doubling the size of a vector
unbalanced binary search tree find/insert

44

slide-83
SLIDE 83

Θ(n log n): log-linear

fast comparison-based sorting
merge sort, heap sort, …
quicksort if pivot choices are good

inserting n elements into a balanced tree

45

slide-84
SLIDE 84

Θ(n2): quadratic

slow comparison-based sorting
insertion sort, bubble sort, selection sort, …
quicksort if pivot choices are bad

most doubly nested for loops that go up to n

46

slide-85
SLIDE 85

Θ(2^(n^c)), c ≥ 1: exponential

n-bit solution: try every one of the 2ⁿ possibilities
crack a combination lock by trying every possibility
finding the best move in an N × N Go game (with Japanese rules)
checking satisfiability of a Boolean expression*
the Traveling Salesman problem*

*for the known algorithms — maybe we can do better?

47

slide-86
SLIDE 86

more?

Θ(n³) — find shortest paths between all pairs of n nodes on a fully-connected graph

approx. order 2^(n^(1/3)) — best known integer factorization algorithm

48