CS 10: Problem solving via Object Oriented Programming, Lists Part 2 - PowerPoint PPT Presentation



SLIDE 1

CS 10: Problem solving via Object Oriented Programming

Lists Part 2 (Array's Revenge!)

SLIDE 2

Agenda

  1. Growing array List implementation
  2. Orders of growth
  3. Asymptotic notation
  4. List analysis
  5. Iteration
SLIDE 3

Linked lists are a logical choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list
  • Slow to find element, but fast once there

SLIDE 4

Linked lists are a logical choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there

SLIDE 5

Linked lists are a logical choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in

SLIDE 6

At first arrays seem to be a poor choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in

SLIDE 7

At first arrays seem to be a poor choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: fast to find element, but slow once there; have to make (or fill) a hole by copying over

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in

SLIDE 8

At first arrays seem to be a poor choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: fast to find element, but slow once there; have to make (or fill) a hole by copying over

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in
  • Array: arrays are declared of fixed size

SLIDE 9

At first arrays seem to be a poor choice to implement the List ADT

(Same comparison as the previous slide.)

Or is it?

SLIDE 10

At first arrays seem to be a poor choice to implement the List ADT

(Same comparison as the previous slides.)

SLIDE 11

Random access aspect of arrays makes it easy to get or set any element

  • Array reserves a contiguous block of memory
  • Big enough to hold the specified number of elements (10 here) times the size of each element (4 bytes for integers) = 40 bytes
  • Indices are 0…9

SLIDE 12

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: indices 0…9]

SLIDE 13

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: indices 0…9, index 2 highlighted]

No need to march down the list to get or set an element. To find an element:
  • Start at the base address of the array (this is where "numbers" points)
  • The element at index idx is at address: base addr + idx*size(element)

SLIDE 14

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: indices 0…9, index 2 highlighted]

No need to march down the list to get or set an element. To find an element:
  • Start at the base address of the array (this is where "numbers" points)
  • The element at index idx is at address: base addr + idx*size(element)
  • Index 2 is at base addr + 2*4 bytes
  • Time to access an element is constant anywhere in the array (just a simple math operation to calculate any index)
  • With a linked list we have to march down the list, so it takes longer to find elements near the end

SLIDE 15

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: 10 stored at index 2]

SLIDE 16

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: 10 stored at index 2] What values will a, b and c have?

SLIDE 17

Random access aspect of arrays makes it easy to get or set any element

[Array diagram: 10 stored at index 2] What values will a, b and c have?
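The base-address arithmetic on these slides can be sketched in Java. Java hides real pointers, so this is a simulation of the address calculation only, not actual memory access; the base address value is an assumption for illustration:

```java
public class AddressDemo {
    // Simulate: element address = base address + index * element size.
    // One multiply and one add, regardless of array length: constant time.
    static long elementAddress(long baseAddr, int idx, int elemSize) {
        return baseAddr + (long) idx * elemSize;
    }

    public static void main(String[] args) {
        long base = 1000;   // assumed base address of the "numbers" array
        int intSize = 4;    // 4 bytes per int, as on the slide
        // Index 2 lives at base + 2*4 = 1008
        System.out.println(elementAddress(base, 2, intSize)); // 1008
    }
}
```

Whether the array holds 10 elements or 10 million, finding index idx is the same single calculation, which is why get()/set() are constant time.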

SLIDE 18

At first arrays seem to be a poor choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: fast to find element, but slow once there; have to make (or fill) a hole by copying over

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in
  • Array: arrays are declared of fixed size

SLIDE 19

Because arrays are a contiguous block of memory, it is hard to insert (except at the end)

[Array diagram: 16, 7, 2, 25, 8, 10 at indices 0…5] Insert 14 at index 2

SLIDE 20

Because arrays are a contiguous block of memory, it is hard to insert (except at the end)

Insert 14 at index 2:
  • Slide indices ≥ idx to the right to make a hole
  • Copy each element to the next index

SLIDE 21

[Build frame: 10 copied from index 5 to index 6]

SLIDE 22

[Build frame: 8 copied from index 4 to index 5]

SLIDE 23

[Build frame: 25 copied from index 3 to index 4]

SLIDE 24

[Build frame: 2 copied from index 2 to index 3]

SLIDE 25

[Build frame: copy the new element (14) into index 2]

SLIDE 26

Because arrays are a contiguous block of memory, it is hard to insert (except at the end)

[Result: 16, 7, 14, 2, 25, 8, 10]

  • Works, but takes a lot of time (said to be "expensive")
  • Especially expensive with respect to time if the array is large and we insert at the front
  • A linked list is slow to find the right place (have to march down the list starting from the head), but fast to insert: just update two pointers and you're done
  • A linked list is fast, however, if only dealing with the head
  • With arrays it is easy to find the right place, but slow afterward due to the copying needed to make a hole

SLIDE 27

Because arrays are a contiguous block of memory, it is hard to insert (except at the end)

Deleting an element is the same, except elements are copied to the left to fill the hole left by the deleted element

1 2 3 4 5 6 7 8 9
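The shifting steps on slides 19 through 27 can be sketched directly; this is a minimal illustration of the copy loops, not the course's GrowingArray code:

```java
import java.util.Arrays;

public class ShiftDemo {
    // Insert item at idx by sliding elements >= idx one slot right.
    // Assumes spare capacity at the end of the array (size < a.length).
    static void insertAt(int[] a, int size, int idx, int item) {
        for (int i = size; i > idx; i--) {  // start from the last item
            a[i] = a[i - 1];                // copy to one index larger
        }
        a[idx] = item;                      // copy new element into the hole
    }

    // Remove item at idx by sliding elements > idx one slot left.
    static void removeAt(int[] a, int size, int idx) {
        for (int i = idx; i < size - 1; i++) {
            a[i] = a[i + 1];                // fill the hole by copying left
        }
    }

    public static void main(String[] args) {
        int[] a = {16, 7, 2, 25, 8, 10, 0, 0, 0, 0};
        insertAt(a, 6, 2, 14);              // the slide's example
        System.out.println(Arrays.toString(Arrays.copyOf(a, 7)));
        // [16, 7, 14, 2, 25, 8, 10]
    }
}
```

Inserting at index 0 of a large array runs the whole loop, which is the worst-case O(n) copying cost the slides describe.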

SLIDE 28

At first arrays seem to be a poor choice to implement the List ADT

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: fast to find element, but slow once there; have to make (or fill) a hole by copying over

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in
  • Array: arrays are declared of fixed size

SLIDE 29

Arrays are of fixed size, but the List ADT allows for growth

[Array diagram: 16, 7, 14, 2, 25, 8, 10, 52, 19, 6 at indices 0…9, all slots full]

What do we do when the array is full, but we want to add more elements? Answer: create another, larger array, and copy the elements from the old array into the new array

SLIDE 30

Arrays are of fixed size, but the List ADT allows for growth

Grow array:
  1. Make a new array, say 2 times larger than the old array

SLIDES 31-34

Arrays are of fixed size, but the List ADT allows for growth

[Build frames: copying elements one at a time from the old array to the new array]

Grow array:
  1. Make a new array, say 2 times larger than the old array
  2. Copy elements one at a time from the old array to the new array

SLIDE 35

Arrays are of fixed size, but the List ADT allows for growth

Grow array:
  1. Make a new array, say 2 times larger than the old array
  2. Copy elements one at a time from the old array to the new array
  3. Set the instance variable to point at the new array (the old array will be garbage collected)

Room for more elements

SLIDE 36

Arrays are of fixed size, but the List ADT allows for growth

Grow array:
  1. Make a new array, say 2 times larger than the old array
  2. Copy elements one at a time from the old array to the new array
  3. Set the instance variable to point at the new array (the old array will be garbage collected)

Room for more elements. Growing is an expensive operation, but we don't have to do it frequently if the new array size is a multiple of the old array size
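The three growth steps can be sketched as follows; a minimal sketch of the doubling trick, with the factor of 2 taken from the slides:

```java
import java.util.Arrays;

public class GrowDemo {
    // Step 1: make a new array twice as large.
    // Step 2: copy elements one at a time.
    // Step 3: the caller assigns the returned array to its instance
    //         variable; the old array is then garbage collected.
    static int[] grow(int[] old) {
        int[] bigger = new int[old.length * 2];
        for (int i = 0; i < old.length; i++) {
            bigger[i] = old[i];
        }
        return bigger;
    }

    public static void main(String[] args) {
        int[] array = {16, 7, 14, 2, 25, 8, 10, 52, 19, 6};  // full
        array = grow(array);            // step 3: repoint the variable
        System.out.println(array.length); // 20: room for more elements
    }
}
```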

SLIDE 37

GrowingArray.java: implements the List ADT using an array instead of a linked list

  • Implements SimpleList from last class
  • An array is now the data structure used to store the elements in the List
  • The array is initially sized to 10 Objects (note the funky Java allocation syntax on line 13: must cast to an array of the generic type)
  • Remember, arrays are of fixed size, but the List ADT does not specify a size
  • Track size in an instance variable

SLIDE 38

GrowingArray.java: get()/set() are easy and fast with an array implementation

Get and set are easy: just make sure the index is valid, then return or set the item

SLIDE 39

GrowingArray.java: with the growing trick, we can implement the List interface with an array

  • add() makes a new, larger array if needed
  • array.length is how many elements the array can hold; size is how many elements the array does hold

SLIDE 40

GrowingArray.java: with the growing trick, we can implement the List interface with an array

  • add() makes a new, larger array if needed
  • Copy elements one at a time into the new array
  • array.length is how many elements the array can hold; size is how many elements the array does hold

SLIDE 41

GrowingArray.java: with the growing trick, we can implement the List interface with an array

  • add() makes a new, larger array if needed
  • Copy elements one at a time into the new array
  • Update the instance variable to point at the new array
  • array.length is how many elements the array can hold; size is how many elements the array does hold

SLIDE 42

GrowingArray.java: with the growing trick, we can implement the List interface with an array

  • Here we know we have enough room to add a new element
  • Now do the insert
  • Start from the last item and copy each to one index larger
  • Stop at index idx
  • Set the item at idx to item

SLIDE 43

GrowingArray.java: remove() slides elements with index > idx to the left
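Putting slides 37 through 43 together: a minimal sketch of the class they describe. The names follow the slides, but the course's actual GrowingArray.java is not reproduced here, so details such as the bounds checks are assumptions:

```java
// Sketch of a growing-array list in the spirit of GrowingArray.java.
public class GrowingArray<T> {
    private T[] array;   // array.length = how many elements it CAN hold
    private int size;    // how many elements it DOES hold

    @SuppressWarnings("unchecked")
    public GrowingArray() {
        // The "funky" generic allocation: Java forbids new T[10],
        // so allocate Object[] and cast to the generic array type.
        array = (T[]) new Object[10];
        size = 0;
    }

    public int size() { return size; }

    public T get(int idx) {
        if (idx < 0 || idx >= size) throw new IndexOutOfBoundsException();
        return array[idx];            // constant time: random access
    }

    public void set(int idx, T item) {
        if (idx < 0 || idx >= size) throw new IndexOutOfBoundsException();
        array[idx] = item;
    }

    @SuppressWarnings("unchecked")
    public void add(int idx, T item) {
        if (idx < 0 || idx > size) throw new IndexOutOfBoundsException();
        if (size == array.length) {   // full: grow by doubling
            T[] bigger = (T[]) new Object[2 * array.length];
            for (int i = 0; i < size; i++) bigger[i] = array[i];
            array = bigger;           // old array gets garbage collected
        }
        // Make a hole: start from the last item, copy to one index larger
        for (int i = size; i > idx; i--) array[i] = array[i - 1];
        array[idx] = item;
        size++;
    }

    public T remove(int idx) {
        if (idx < 0 || idx >= size) throw new IndexOutOfBoundsException();
        T item = array[idx];
        // Fill the hole: slide elements with index > idx to the left
        for (int i = idx; i < size - 1; i++) array[i] = array[i + 1];
        size--;
        return item;
    }
}
```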

SLIDE 44

It turns out an array can be a good choice to implement the List ADT, if growing is fast

get()/set() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: contiguous block of memory; random access aspect of arrays makes get()/set() easy and fast

add()/remove() element anywhere in List
  • Linked List: start at head and march down to index in list; slow to find element, but fast once there
  • Array: fast to find element, but slow once there; have to make (or fill) a hole by copying over

No limit to number of elements in List
  • Linked List: built-in feature of how linked lists work; just create a new element and splice it in
  • Array: arrays are declared of fixed size

We can get around the fixed-size limit by growing the array; we just want to make sure growth is fast enough

SLIDE 45

Agenda

  1. Growing array List implementation
  2. Orders of growth
  3. Asymptotic notation
  4. List analysis
  5. Iteration

SLIDE 46

Often run time will depend on the number of elements an algorithm must process

Constant time: does not depend on the number of items
  • Returning the first element of a linked list takes a constant amount of time irrespective of the number of elements in the list
  • Just return the head pointer
  • No need to march down the list to find the first element (head)
  • The array get() implementation is also constant time (array get() is constant time everywhere; linked list get is only constant at the head)

Linear time: directly depends on the number of items
  • Example: searching for a particular value stored in a list
  • Start at the first item, compare its value with the value you are trying to find
  • Keep going until you find the item, or reach the end of the list
  • Could get lucky and find the item right away; might not find it at all
  • Worst case: we check all n items

SLIDE 47

Often run time will depend on the number of elements an algorithm must process

Polynomial time: depends on a power of the number of items
  • Example: nested loops in image and graphics methods
  • Changing all pixels in an n-by-n image takes a total of n² operations, because the inner and outer loops each run n times
  • Runs slower than a constant or linear time algorithm

Logarithmic time: avoids operations on some items
  • Next class we will look at binary search
  • Reduces the number of items the algorithm must process (doesn't process all n items)
  • Runs faster than linear or polynomial time (slower than constant)

Exponential time: a base raised to a power
  • Combination problems: all possible bit combinations in n bits = 2ⁿ
  • SLOW!
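The four orders of growth above can be compared concretely; a small sketch that evaluates each operation count at n = 60, foreshadowing the graphs on the next slides:

```java
public class GrowthDemo {
    // Operation counts for each order of growth at input size n.
    static long log2(long n)        { return 63 - Long.numberOfLeadingZeros(n); }
    static long linear(long n)      { return n; }
    static long polynomial(long n)  { return n * n; }
    static long exponential(long n) { return 1L << n; }   // 2^n, n < 63

    public static void main(String[] args) {
        long n = 60;
        // Even at only 60 items the counts diverge dramatically:
        System.out.println(log2(n));        // 5
        System.out.println(linear(n));      // 60
        System.out.println(polynomial(n));  // 3600
        System.out.println(exponential(n)); // about 1.15 * 10^18
    }
}
```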
SLIDE 48

For small numbers of items, run time does not differ by much

[Graph: number of operations vs n for log₂n, n, n², 2ⁿ. Notice Exponential and Polynomial cross each other a few times early on]

SLIDE 49

As n grows, the number of operations for different algorithms begins to diverge

[Graph: number of operations vs n for log₂n, n, n², 2ⁿ. After n = 4, Exponential is always greater than Polynomial; we will use that soon to define n₀ (stand by for more info)]

SLIDE 50

Even with only 60 items, there is a large difference in number of operations

[Graph: number of operations vs n for log₂n, n, n², 2ⁿ]

SLIDE 51

Eventually, even with speedy computers, some algorithms become impractical

[Graph: number of operations vs n for log₂n, n, n², 2ⁿ]

SLIDE 52

Sometimes complexity can hurt us, sometimes it can help us

  • Hurts us: can't brute-force a chess algorithm (2ⁿ)
  • Helps us: attackers can't crack a password (2ⁿ)

Images: thechessstore.com; studyoffice.org

SLIDE 53

Agenda

  1. Growing array List implementation
  2. Orders of growth
  3. Asymptotic notation
  4. List analysis
  5. Iteration
SLIDE 54

Computer scientists describe upper bounds on orders of growth with "Big Oh" notation

O gives an asymptotic upper bound. Run-time complexity is O(n) if there exist constants n₀ and c such that:
  • ∀n ≥ n₀, the run time on input of size n is at most cn (an upper bound)
  • O(n) is the worst-case performance for large n; actual performance could be better
  • O(n) is said to be "linear" time
  • O(1) means constant time

Example: find a specific item in a list
  • Might find the item on the first try
  • Might not find it at all (have to check all n items in the list)
  • Worst case (upper bound) is O(n)

"Big Oh of n", "Oh of n", and "order n" all mean the same thing!

SLIDE 55

We can extend Big Oh to any, not necessarily linear, function

O gives an asymptotic upper bound. Run-time complexity is O(f(n)) if there exist constants n₀ and c such that:
  • ∀n ≥ n₀, the run time on input of size n is at most c·f(n) (an upper bound)
  • O(f(n)) is the worst-case performance for large n; actual performance could be better
  • f(n) can be a non-linear function such as n² or log(n)
  • In that case we write O(n²) or O(log n)
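The definition on this slide can be stated compactly:

```latex
T(n) \in O(f(n)) \iff \exists\, c > 0,\ \exists\, n_0 \ge 0 \ \text{such that}\ \forall n \ge n_0:\ T(n) \le c \cdot f(n)
```

Here $T(n)$ is the algorithm's run time on an input of size $n$; the constants $c$ and $n_0$ are fixed once and must work for every $n \ge n_0$.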

SLIDE 56

We focus on upper bounds (worst case) for a number of reasons

Reasons to focus on worst case
  • Worst case gives an upper bound for any input
  • Gives a guarantee that the algorithm never takes any longer
  • We don't need to make an educated guess and hope that running time never gets much worse

Why not average case instead of worst case?
  • Seems reasonable (sometimes we do)
  • Need to define what the average case is: consider search
  • A database might cache popular items, so it might find popular items before obscure ones
  • In cases like linear search, might find the item halfway through (n/2)
  • Sometimes you never find what you are looking for (n)
  • Average case is often about the same as worst case
SLIDE 57

Run time can also be Ω (Big Omega), where run time grows at least as fast

Ω gives an asymptotic lower bound. Run-time complexity is Ω(f(n)) if there exist constants n₀ and c₁ such that:
  • ∀n ≥ n₀, the run time on input of size n is at least c₁·f(n) (a lower bound)
  • Ω(n) is the best-case performance for large n; actual performance can be worse

Example: find the largest item in a list
  • Have to check all n items
  • The largest item could be at the end of the list, so we can't stop early
  • Can't do better than Ω(n)
SLIDE 58

We use Θ (Big Theta) for tight bounds, when we can define both O and Ω

Θ gives an asymptotic tight bound. Run-time complexity is Θ(f(n)) if there exist constants n₀, c₁, and c₂ such that:
  • ∀n ≥ n₀, the run time on input of size n is at least c₁·f(n) and at most c₂·f(n)
  • Θ(n) gives a tight bound, which means run time will be within a constant factor
  • Generally we will use either O or Θ
  • O, Ω, Θ are called asymptotic notation

Example: find the largest item in a list
  • Best case: still must look at every item, so Ω(n)
  • Worst case: have to check each item, so O(n)
  • Because it is both Ω(n) and O(n), we say it is Θ(n)

We can also apply these concepts to how much memory an algorithm uses (not just run-time complexity)

SLIDE 59

We ignore constants and low-order terms in asymptotic notation

Constants don't matter: just adjust c₁ and c₂
  • Constant multiplicative factors are absorbed into c₁ (and c₂)
  • Example: 1000n² is O(n²) because we can choose c₁ to be 1000 (remember, bounded by c₁n²)
  • We do care in practice: if an operation takes constant time, O(1), but more than 24 hours to complete, we can't run it every day

Low-order terms don't matter either
  • If the run time is n² + 1000n, choose c₁ = 1; then n² + 1000n ≥ c₁n²
  • Now we must find c₂ such that n² + 1000n ≤ c₂n²
  • Subtract n² from both sides: 1000n ≤ c₂n² - n² = (c₂-1)n²
  • Divide both sides by (c₂-1)n: 1000/(c₂-1) ≤ n
  • Pick c₂ = 2 and n₀ = 1000; then ∀n ≥ n₀, 1000 ≤ n
  • So n² + 1000n ≤ c₂n²; trying n = 1000 gives n² + 1000·1000 = 2n²
  • In practice, we simply ignore constants and low-order terms
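The low-order-term argument above, written out as one chain of equivalent inequalities (each step rearranges the previous one):

```latex
n^2 + 1000n \le c_2 n^2
\;\Longleftrightarrow\; 1000n \le (c_2 - 1)\, n^2
\;\Longleftrightarrow\; \frac{1000}{c_2 - 1} \le n .
```

With $c_2 = 2$ and $n_0 = 1000$: for all $n \ge n_0$ we have $1000 \le n$, so $n^2 + 1000n \le 2n^2$, and therefore $n^2 + 1000n \in O(n^2)$.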
SLIDE 60

Agenda

  1. Growing array List implementation
  2. Orders of growth
  3. Asymptotic notation
  4. List analysis
  5. Iteration
SLIDE 61

A growing array is generally preferable to a linked list, except maybe for the growth operation

Worst-case run-time complexity:

  Operation  | Linked list | Growing array
  get(i)     | O(n)        | O(1)
  set(i,e)   | O(n)        | O(1)
  add(i,e)   | O(n)        | O(n) + growth
  remove(i)  | O(n)        | O(n)

Linked list:
  • Start at head and march down to find index i
  • Slow to get to the index, O(n); once there, operations are fast, O(1)
  • Best case: all operations on the head

Growing array:
  • Faster get()/set() than linked list
  • Tie with linked list on remove()
  • Best case: all operations at the tail
  • add() might cause an expensive growth operation
  • How should we think about that?

SLIDE 62

Amortized analysis shows the growing array's add() is actually only O(1)!

Each time we add an item to the array, conceptually charge 3 "tokens":
  • One token pays for the current add()
  • Two tokens go into the "bank"
  • We are spreading out (amortizing) the cost of the expensive but infrequent growth operation

[Diagram: array holding n items]

SLIDE 63

[Diagram: bank holds 2 tokens after the first add]

SLIDE 64

[Diagram: bank holds 4 tokens after the second add]

SLIDE 65

After n add() operations the array is full, but we have 2n tokens in the bank

[Diagram: bank holds 2n tokens]

SLIDE 66

Allocate a new 2x larger array

SLIDE 67

Copy the elements from the old array to the new array

SLIDE 68

[Build frame: copying continues]

SLIDE 69

We have to copy n items, so spend n pre-paid tokens from the bank

[Diagram: bank holds 2n - n = n tokens]

SLIDE 70

The remaining n tokens in the bank "pay for" the n empty spaces in the new array

SLIDE 71

Charging a little extra for each add() spreads out the cost of the infrequent growth operation

SLIDE 72

Charging a little extra for each add() spreads out the cost of the infrequent growth operation. The charge, however, is a constant, so O(3) = O(1)
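The token argument can be checked empirically. This sketch counts the actual element copies a doubling array performs over n appends (initial capacity 10, as in GrowingArray.java) and shows the total growth work stays under 2n, so each add() costs at most about 3 units amortized:

```java
public class AmortizedDemo {
    // Count element copies (growth work) over n appends to a doubling
    // array that starts with capacity 10.
    static long copiesForAppends(int n) {
        int capacity = 10, size = 0;
        long copies = 0;
        for (int i = 0; i < n; i++) {
            if (size == capacity) {   // full: grow by doubling
                copies += size;       // copy every element to the new array
                capacity *= 2;
            }
            size++;                   // the append itself
        }
        return copies;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long copies = copiesForAppends(n);
        // Growth work is bounded by 2n: O(1) amortized per add().
        System.out.println(copies <= 2L * n); // true
    }
}
```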

SLIDE 73

A growing array is generally preferable to a linked list

Worst-case run-time complexity:

  Operation  | Linked list | Growing array
  get(i)     | O(n)        | O(1)
  set(i,e)   | O(n)        | O(1)
  add(i,e)   | O(n)        | O(n) + O(1) = O(n)
  remove(i)  | O(n)        | O(n)

Linked list:
  • Start at head and march down to find index i
  • Slow to get to the index, O(n); once there, operations are fast, O(1)
  • Best case: all operations on the head

Growing array:
  • Faster get()/set() than linked list
  • Tie with linked list on remove()
  • Best case: all operations at the tail
  • add() might cause an expensive growth operation, but amortized analysis shows the infrequent growth operation is constant time
  • Pay a constant amount more on each add() to pay for the occasional expensive growth
SLIDE 74

Agenda

  1. Growing array List implementation
  2. Orders of growth
  3. Asymptotic notation
  4. List analysis
  5. Iteration
SLIDE 75

It's so common to march down a list of items that Java makes it easy with iterators

Traditional for loop:

  for (int i = 0; i < blobs.size(); i++) {
      blobs.get(i).step();
  }

Comments:
  • i serves no real purpose; we don't really care what its value is at any point
  • get() starts over every time; it doesn't keep track of where it was last
  • Could lead to O(n²) on linked lists

Iterator (for-each loop):

  for (Blob b : blobs) {
      b.step();
  }

Comments:
  • Easier to read?
  • Keeps track of where it left off
  • Implicitly uses an iterator
  • An iterator has two main methods:
    • hasNext(): can we advance?
    • next(): do advance
SLIDE 76

SimpleIterator.java: defines an iterator interface

Only two methods:
  • hasNext()
  • next()
SLIDE 77

ISinglyLinked.java: same linked list implementation, now with an iterator

  • A private class within ISinglyLinked implements SimpleIterator
  • Keeps a pointer to the current element, initially set to head
  • Implements the required interface methods:
    • hasNext(): true if there are more items
    • next(): return the current item and move to the next item
  • NOTE: there is also an iterator version for growing arrays
  • Here we use the linked list version, but both function identically (the array version just keeps an int curr)
  • newIterator() in ISinglyLinked creates a new iterator
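A minimal sketch of the pattern this slide describes. The interface and class names follow the slides, but the course's actual code is not shown here, so the node class and addFirst helper are assumptions for illustration:

```java
interface SimpleIterator<T> {
    boolean hasNext();  // can we advance?
    T next();           // return current item and advance
}

public class ISinglyLinked<T> {
    private class Element {              // assumed node class
        T data; Element next;
        Element(T data, Element next) { this.data = data; this.next = next; }
    }

    private Element head = null;

    public void addFirst(T item) { head = new Element(item, head); }

    // Private inner class: keeps a pointer to the current element,
    // so it never has to start over from the head.
    private class ListIterator implements SimpleIterator<T> {
        private Element curr = head;     // initially set to head

        public boolean hasNext() { return curr != null; }

        public T next() {
            T data = curr.data;
            curr = curr.next;            // advance one step
            return data;
        }
    }

    public SimpleIterator<T> newIterator() { return new ListIterator(); }

    public static void main(String[] args) {
        ISinglyLinked<String> list = new ISinglyLinked<>();
        list.addFirst("c"); list.addFirst("b"); list.addFirst("a");
        SimpleIterator<String> it = list.newIterator();
        while (it.hasNext()) System.out.println(it.next()); // a b c
    }
}
```

Because curr marches forward one node per next(), a full traversal touches each node exactly once: O(n), not O(n²).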

SLIDE 78

IterTest.java: use an iterator to manipulate the list

  • Creates two SimpleILists (remember, the "I" version has an iterator; otherwise it is the same as SinglyLinked from last class)

SLIDE 79

IterTest.java: use an iterator to manipulate the list

  • Creates two SimpleILists (remember, the "I" version has an iterator; otherwise it is the same as SinglyLinked from last class)
  • Add some elements

SLIDE 80

IterTest.java: use an iterator to manipulate the list

  • Print the elements in the list without using the iterator
  • This is O(n²). Why?
  • Because with the linked list, the get(i) operation has to start at the head and march down i items each time it is called!
  • Sneaky inefficiency!
  • Not a problem for the array implementation, however
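The sneaky O(n²) can be made concrete by counting node traversals instead of timing anything; a sketch of the arithmetic, not the course's IterTest code:

```java
public class TraversalCount {
    // Steps to print all n items of a linked list with get(i):
    // get(i) marches i steps from the head, so total = 0+1+...+(n-1).
    static long stepsWithGet(int n) {
        long steps = 0;
        for (int i = 0; i < n; i++) steps += i;
        return steps;                 // n(n-1)/2: quadratic
    }

    // With an iterator, each next() advances one node: n steps total.
    static long stepsWithIterator(int n) {
        return n;                     // linear
    }

    public static void main(String[] args) {
        int n = 1000;
        System.out.println(stepsWithGet(n));      // 499500
        System.out.println(stepsWithIterator(n)); // 1000
    }
}
```

At n = 1000 the get(i) loop does roughly 500 times the work; the gap grows with n.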

SLIDE 81

IterTest.java: use an iterator to manipulate the list

  • Printing the elements using the iterator is O(n). Why?
  • The iterator keeps track of the current position in the list using the curr pointer
  • It does not need to start at the head each time

SLIDE 82

IterTest.java: use an iterator to manipulate the list

I prefer this way:

  SimpleIterator<String> i = list1.newIterator();
  while (i.hasNext()) {
      System.out.println(i.next());
  }
SLIDE 83