
CSE 373: Minimum Spanning Trees: Prim and Kruskal

Michael Lee Monday, Feb 26, 2018


Minimum spanning trees

Punchline: an MST of a graph connects all the vertices together while minimizing the number of edges used (and their weights).

Given a connected, undirected graph G = (V, E), a minimum spanning tree is a subgraph G′ = (V′, E′) such that...

◮ V = V′ (G′ is spanning)
◮ There exists a path from any vertex to any other one
◮ The sum of the edge weights in E′ is minimized.

In order for a graph to have an MST, the graph must...

◮ ...be connected – there is a path from any vertex to any other vertex. (Note: this means |E| ≥ |V| − 1.)
◮ ...be undirected.


Minimum spanning trees: example

An example of a minimum spanning tree (MST) on vertices a–i, with edge weights between 1 and 14. (Figure omitted.)


Minimum spanning trees: Applications

Example questions:

◮ We want to connect phone lines to houses, but laying down cable is expensive. How can we minimize the amount of wire we must install?
◮ We have items on a circuit we want to be “electrically equivalent”. How can we connect them together using a minimum amount of wire?

Other applications:

◮ Implementing efficient multiple constant multiplication
◮ Minimizing the number of packets transmitted across a network
◮ Machine learning (e.g. real-time face verification)
◮ Graphics (e.g. image segmentation)


Minimum spanning trees: properties

Important properties:

◮ A valid MST cannot contain a cycle.
◮ If we add or remove an edge from an MST, it’s no longer a valid MST for that graph. Adding an edge introduces a cycle; removing an edge means vertices are no longer connected.
◮ If there are |V| vertices, the MST contains exactly |V| − 1 edges.
◮ An MST is always a tree.
◮ If every edge has a unique weight, there exists a unique MST.


Minimum spanning trees: algorithm

Design question: how would you implement an algorithm to find the MST of some graph, assuming the edges all have the same weight?

Hints:

◮ Try modifying DFS or BFS.
◮ Try using an incremental approach: start with an empty graph, and steadily add nodes and edges.


Minimum spanning trees: approach 1, adding nodes

Intuition: We start with an “empty” MST, and steadily grow it. Core algorithm:

1. Start with an arbitrary node.
2. Run either DFS or BFS, storing edges in our stack or queue.
3. As we visit nodes, add each edge we remove to our MST.
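For the equal-weight case, the three steps above can be sketched directly in Python. This is an illustrative sketch, not code from the course: the adjacency-list representation and function name are assumptions.

```python
from collections import deque

def bfs_spanning_tree(adjacency, start):
    """Return the spanning-tree edges of a connected, undirected graph.

    adjacency maps each vertex to an iterable of its neighbors.
    """
    visited = {start}               # step 1: start with an arbitrary node
    tree_edges = []
    queue = deque([start])          # step 2: run BFS (a stack would give DFS)
    while queue:
        current = queue.popleft()
        for neighbor in adjacency[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                tree_edges.append((current, neighbor))  # step 3: add to MST
                queue.append(neighbor)
    return tree_edges

graph = {"a": ["b", "d"], "b": ["a"], "d": ["a", "e"], "e": ["d"]}
edges = bfs_spanning_tree(graph, "a")
assert len(edges) == len(graph) - 1   # a spanning tree has |V| - 1 edges
```

Because every edge has the same weight, any spanning tree is minimal, so plain BFS or DFS suffices here.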



Minimum spanning trees: approach 1, adding nodes

An example using a modified version of DFS on vertices a–i (figure omitted). As we visit nodes, the stack of edges evolves as follows, and each edge we remove is added to our MST:

(a, b), (a, d)
(a, b), (d, e), (d, f), (d, g)
(a, b), (d, e), (d, f), (g, h), (g, i)
(a, b), (d, e), (d, f), (g, h)
(a, b), (d, e), (d, f)
(a, b), (d, e)
(a, b), (e, c)
(a, b)
(empty)


Minimum spanning trees: approach 1, adding nodes

What if the edges have different weights?

Observation: We solved a similar problem earlier this quarter, when studying shortest path algorithms!


slide-37
SLIDE 37

Interlude: finding the shortest path

Review: How do we find the shortest path between two vertices?

◮ If the graph is unweighted: run BFS
◮ If the graph is weighted: run Dijkstra’s

How does Dijkstra’s algorithm work?

1. Give each vertex v a “cost”: the cost of the shortest-known path so far between v and the start. (The cost of a path is the sum of the edge weights in that path.)
2. Pick the node with the smallest cost, update adjacent node costs, and repeat.



Minimum spanning trees: approach 1, adding nodes

Intuition: We can use the same idea to find an MST!

Core idea: use the exact same algorithm as Dijkstra’s algorithm, but redefine the cost:

◮ Previously, for Dijkstra’s: the cost of vertex v is the cost of the shortest-known path so far between v and the start.
◮ Now: the cost of vertex v is the cost of the shortest-known path so far between v and any node we’ve visited so far.

This algorithm is known as Prim’s algorithm.


Compare and contrast: Dijkstra vs Prim

Pseudocode for Dijkstra’s algorithm:

def dijkstra(start):
    backpointers = new SomeDictionary<Vertex, Vertex>()
    for (v : vertices):
        set cost(v) to infinity
    set cost(start) to 0
    while (we still have unvisited nodes):
        current = get next smallest node
        for (edge : current.getOutEdges()):
            newCost = cost(current) + edge.cost
            if (newCost < cost(edge.dst)):
                update cost(edge.dst) to newCost
                backpointers.put(edge.dst, edge.src)
    return backpointers
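As a concrete companion to the pseudocode, here is a heap-based sketch of Dijkstra’s algorithm in Python. The adjacency-list representation (a dict of `(neighbor, weight)` lists) is an assumption for illustration:

```python
import heapq

def dijkstra(graph, start):
    """graph maps each vertex to a list of (neighbor, weight) pairs.
    Returns (cost, backpointers) for shortest paths from start."""
    cost = {v: float("inf") for v in graph}
    cost[start] = 0
    backpointers = {}
    heap = [(0, start)]
    finished = set()
    while heap:
        current_cost, current = heapq.heappop(heap)   # next smallest node
        if current in finished:
            continue                                  # stale heap entry
        finished.add(current)
        for neighbor, weight in graph[current]:
            new_cost = current_cost + weight          # path cost via current
            if new_cost < cost[neighbor]:
                cost[neighbor] = new_cost
                backpointers[neighbor] = current
                heapq.heappush(heap, (new_cost, neighbor))
    return cost, backpointers
```

Following the backpointers from any vertex back to start recovers the shortest path itself.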


Pseudocode for Prim’s algorithm:

def prim(start):
    backpointers = new SomeDictionary<Vertex, Vertex>()
    for (v : vertices):
        set cost(v) to infinity
    set cost(start) to 0
    while (we still have unvisited nodes):
        current = get next smallest node
        for (edge : current.getOutEdges()):
            newCost = edge.cost
            if (newCost < cost(edge.dst)):
                update cost(edge.dst) to newCost
                backpointers.put(edge.dst, edge.src)
    return backpointers
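The same sketch adapts to Prim’s algorithm by changing a single line: a vertex’s cost is the weight of one edge to the visited set, not a cumulative path cost. As above, this is an illustrative sketch under an assumed adjacency-list representation (each undirected edge listed in both directions):

```python
import heapq

def prim(graph, start):
    """graph maps each vertex to a list of (neighbor, weight) pairs.
    Returns the MST edges as a set of (parent, child) pairs."""
    cost = {v: float("inf") for v in graph}
    cost[start] = 0
    backpointers = {}
    heap = [(0, start)]
    visited = set()
    while heap:
        _, current = heapq.heappop(heap)
        if current in visited:
            continue                       # stale heap entry
        visited.add(current)
        for neighbor, weight in graph[current]:
            # The only change from Dijkstra: the edge weight alone,
            # not the accumulated path cost.
            if neighbor not in visited and weight < cost[neighbor]:
                cost[neighbor] = weight
                backpointers[neighbor] = current
                heapq.heappush(heap, (weight, neighbor))
    return {(parent, child) for child, parent in backpointers.items()}
```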


Prim’s algorithm: an example

We run Prim’s algorithm on the example graph from before, with vertices a–i (figures omitted):

1. We initially set all costs to ∞, just like with Dijkstra.
2. We pick an arbitrary node (here, a) to start.
3. We update the adjacent nodes: b gets cost 4 and h gets cost 8.
4. We select the node with the smallest cost: b.
5. We potentially need to update h and c, but only c changes (cost 8).
6. We (arbitrarily) pick c and update the adjacent nodes: d = 7, f = 4, i = 2. Note that we don’t add the cumulative cost: the cost represents the shortest path to any visited node, not to the start.
7. i has the smallest cost. We update both unvisited neighbors (g = 6, h = 7), modifying the edge to h since we now have a better option.
8. f has the smallest cost. Again, we update the adjacent unvisited nodes: e = 10, g = 2.
9. g has the smallest cost. We update h again (h = 1).
10. h has the smallest cost. Note that there is nothing to update here.
11. d has the smallest cost. We can update e (e = 9).
12. e has the smallest cost. There are no more nodes left, so we’re done.


Prim’s algorithm: another example

Now you try. Start on node a. (Figure omitted: vertices a–e with edge weights 4, 1, 2, 2, 4, 3, 1, 5.)

Final costs: b = 2, c = 2, d = 1, e = 1.


Analyzing Prim’s algorithm

Question: What is the worst-case asymptotic runtime of Prim’s algorithm?

Answer: The same as Dijkstra’s: O(|V|·ts + |E|·tu), where...

◮ ts = time needed to get the next smallest node
◮ tu = time needed to update vertex costs

So, O(|V|·log(|V|) + |E|·log(|V|)) if we stick to data structures we know how to implement; O(|V|·log(|V|) + |E|) if we use Fibonacci heaps.


Minimum spanning trees, approach 2

Recap: Prim’s algorithm works similarly to Dijkstra’s – we start with a single node, and “grow” our MST. A second approach: instead of “growing” our MST, we... Initially place each node into its own MST of size 1 – so, we start with V MSTs in total. Steadily combine together difgerent MSTs until we have just

  • ne left

How? Loop through every single edge, see if we can use it to join two difgerent MSTs together. This algorithm is called Kruskal’s algorithm

17

slide-72
SLIDE 72

Minimum spanning trees, approach 2

Recap: Prim’s algorithm works similarly to Dijkstra’s – we start with a single node, and “grow” our MST. A second approach: instead of “growing” our MST, we... ◮ Initially place each node into its own MST of size 1 – so, we start with |V | MSTs in total. Steadily combine together difgerent MSTs until we have just

  • ne left

How? Loop through every single edge, see if we can use it to join two difgerent MSTs together. This algorithm is called Kruskal’s algorithm

17

slide-73
SLIDE 73

Minimum spanning trees, approach 2

Recap: Prim’s algorithm works similarly to Dijkstra’s – we start with a single node, and “grow” our MST. A second approach: instead of “growing” our MST, we... ◮ Initially place each node into its own MST of size 1 – so, we start with |V | MSTs in total. ◮ Steadily combine together difgerent MSTs until we have just

  • ne left

How? Loop through every single edge, see if we can use it to join two difgerent MSTs together. This algorithm is called Kruskal’s algorithm

17

slide-74
SLIDE 74

Minimum spanning trees, approach 2

Recap: Prim’s algorithm works similarly to Dijkstra’s – we start with a single node, and “grow” our MST. A second approach: instead of “growing” our MST, we... ◮ Initially place each node into its own MST of size 1 – so, we start with |V | MSTs in total. ◮ Steadily combine together difgerent MSTs until we have just

  • ne left

◮ How? Loop through every single edge, see if we can use it to join two difgerent MSTs together. This algorithm is called Kruskal’s algorithm

17


Kruskal’s algorithm

An example, for unweighted graphs, on vertices a–i. (Animation omitted: each intermediate MST has a different color, and edges are repeatedly used to join two different MSTs until only one remains.)



Kruskal’s algorithm: weighted graphs

Question: How do we handle edge weights?

Answer: Consider the edges sorted in ascending order by weight. So, we look at the edge with the smallest weight first, the edge with the second smallest weight next, etc.



Kruskal’s algorithm: pseudocode

Pseudocode for Kruskal’s algorithm:

def kruskal(): mst = new SomeSet<Edge>() for (v : vertices): makeMST(v) sort edges in ascending order by their weight for (edge : edges): if findMST(edge.src) != findMST(edge.dst): union(edge.src, edge.dst) mst.add(edge) return mst

◮ makeMST(v): stores v as a MST containing just one node ◮ findMST(v): fjnds the MST that vertex is a part of ◮ union(u, v): combines the two MSTs of the two given vertices, using the edge (u, v)
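A runnable Python sketch of this pseudocode. To stay self-contained, it tracks components with a plain dictionary and relabels one component on each union; this is slower than the DisjointSet ADT discussed later, and the representation is an assumption for illustration:

```python
def kruskal(vertices, edges):
    """vertices: iterable of vertex names.
    edges: list of (weight, u, v) tuples describing an undirected graph.
    Returns the MST edges in the order they were added."""
    component = {v: v for v in vertices}      # makeMST: each vertex alone
    mst = []
    for weight, u, v in sorted(edges):        # ascending order by weight
        if component[u] != component[v]:      # findMST on both endpoints
            mst.append((weight, u, v))
            old, new = component[u], component[v]
            for w in component:               # union: relabel one component
                if component[w] == old:
                    component[w] = new
    return mst
```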


Kruskal’s algorithm: example with a weighted graph

Now you try: run Kruskal’s algorithm on the weighted example graph from before (vertices a–i; figure omitted).


Kruskal’s algorithm: analysis

What is the worst-case runtime?

def kruskal():
    mst = new SomeSet<Edge>()
    for (v : vertices):
        makeMST(v)
    sort edges in ascending order by their weight
    for (edge : edges):
        if findMST(edge.src) != findMST(edge.dst):
            union(edge.src, edge.dst)
            mst.add(edge)
    return mst

Note: assume that...

◮ makeMST(v) takes O(tm) time
◮ findMST(v) takes O(tf) time
◮ union(u, v) takes O(tu) time

◮ Making the |V | MSTs takes O (|V |·tm) time ◮ Sorting the edges takes O (|E|·log(|E|)) time, assuming we use a general-purpose comparison sort ◮ The fjnal loop takes O (|E|·tf + |V |·tu) time Putting it all together: O (|V | · tm + |E|·log(|E|) + |E|·tf + |V |·tu)

23


The DisjointSet ADT

But wait: what exactly are tm, tf, and tu? How exactly do we implement makeMST(v), findMST(v), and union(u, v)? We can do so using a new ADT: the DisjointSet ADT!
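As a preview of where this is headed, one standard realization of these three operations is a disjoint-set forest with union by size and path compression. The sketch below is illustrative, not the course’s required implementation:

```python
class DisjointSet:
    def __init__(self, items):
        # makeMST: every item starts in its own set of size 1
        self.parent = {x: x for x in items}
        self.size = {x: 1 for x in items}

    def find(self, x):
        # findMST: walk to the set's representative, halving the
        # path as we go so later finds are faster
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, u, v):
        # union: hang the smaller tree under the larger one
        root_u, root_v = self.find(u), self.find(v)
        if root_u == root_v:
            return
        if self.size[root_u] < self.size[root_v]:
            root_u, root_v = root_v, root_u
        self.parent[root_v] = root_u
        self.size[root_u] += self.size[root_v]
```

With both optimizations, each operation runs in effectively constant amortized time, which makes the sort the dominant cost in Kruskal’s algorithm.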
