
CSE 373: Minimum Spanning Trees: Prim and Kruskal (Michael Lee)

Monday, Feb 26, 2018.

Minimum spanning trees: a spanning tree of an undirected graph G = (V, E) is a subgraph that includes every vertex of V (it is spanning) and is connected: there is a path from any vertex to any other vertex.


  1. Minimum spanning trees: approach 1, adding nodes. Intuition: we start with an "empty" MST and steadily grow it. Core algorithm: 1. Start with an arbitrary node. 2. Run either DFS or BFS, storing edges in our stack or queue. 3. As we visit nodes, add each edge we remove to our MST.

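The core algorithm above can be sketched in runnable form. This is a hedged Python rendering, not the course's code; the adjacency-list format and the example graph are assumptions:

```python
from collections import deque

def spanning_tree_bfs(graph, start):
    """Grow a spanning tree of a connected, unweighted graph using BFS.

    `graph` maps each vertex to a list of neighbors. We store *edges* in
    the queue; each edge we remove that reaches a new vertex goes into
    our MST, as the three steps on the slide describe.
    """
    visited = {start}
    tree = []  # edges of the spanning tree
    queue = deque((start, v) for v in graph[start])
    while queue:
        u, v = queue.popleft()   # step 3: remove an edge...
        if v not in visited:     # ...and keep it if it reaches a new vertex
            visited.add(v)
            tree.append((u, v))
            queue.extend((v, w) for w in graph[v] if w not in visited)
    return tree

# A small, assumed example graph (not the slides' figure):
g = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"], "d": ["c"]}
print(spanning_tree_bfs(g, "a"))   # a spanning tree has |V| - 1 = 3 edges
```

Swapping the deque for a stack (a list with append/pop) gives the DFS variant traced on the following slides.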

  4-13. Minimum spanning trees: approach 1, adding nodes. An example using a modified version of DFS on the graph with vertices a, b, c, d, e, f, g, h, i. The stack of edges evolves as we visit nodes:

Stack: (empty)
Stack: (a, b), (a, d)
Stack: (a, b), (d, e), (d, f), (d, g)
Stack: (a, b), (d, e), (d, f), (g, h), (g, i)
Stack: (a, b), (d, e), (d, f), (g, h)
Stack: (a, b), (d, e), (d, f)
Stack: (a, b), (d, e)
Stack: (a, b), (e, c)
Stack: (a, b)
Stack: (empty)

  14. Minimum spanning trees: approach 1, adding nodes. What if the edges have different weights? Observation: we solved a similar problem earlier this quarter, when studying shortest path algorithms!

  16. Interlude: finding the shortest path. Review: how do we find the shortest path between two vertices?
◮ If the graph is unweighted: run BFS.
◮ If the graph is weighted: run Dijkstra's.
How does Dijkstra's algorithm work? 1. Give each vertex v a "cost": the cost of the shortest-known path so far between v and the start. (The cost of a path is the sum of the edge weights in that path.) 2. Pick the node with the smallest cost, update adjacent node costs, and repeat.

  20. Minimum spanning trees: approach 1, adding nodes. Intuition: we can use the same idea to find an MST! Core idea: use the exact same algorithm as Dijkstra's, but redefine the cost:
◮ Previously, for Dijkstra's: the cost of vertex v is the cost of the shortest-known path so far between v and the start.
◮ Now: the cost of vertex v is the cost of the shortest-known path so far between v and any node we've visited so far.
This algorithm is known as Prim's algorithm.

  24. Compare and contrast: Dijkstra vs Prim. Pseudocode for Dijkstra's algorithm:

def dijkstra(start):
    backpointers = new SomeDictionary<Vertex, Vertex>()
    for (v : vertices):
        set cost(v) to infinity
    set cost(start) to 0
    while (we still have unvisited nodes):
        current = get next smallest node
        for (edge : current.getOutEdges()):
            newCost = min(cost(current) + edge.cost, cost(edge.dst))
            if (newCost < cost(edge.dst)):
                update cost(edge.dst) to newCost
                backpointers.put(edge.dst, edge.src)
    return backpointers
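One concrete way to realize this pseudocode is with a binary heap using lazy deletion (pushing duplicate entries instead of decreasing keys). This is an assumed Python sketch, not the course's reference implementation:

```python
import heapq

def dijkstra(graph, start):
    """graph: {u: [(v, weight), ...]}. Returns (costs, backpointers)."""
    cost = {start: 0}
    backpointers = {}
    heap = [(0, start)]        # (cost, vertex); duplicate entries allowed
    visited = set()
    while heap:
        c, u = heapq.heappop(heap)   # "get next smallest node"
        if u in visited:
            continue                 # stale entry: lazy deletion
        visited.add(u)
        for v, w in graph[u]:
            new_cost = c + w         # path cost accumulates from the start
            if new_cost < cost.get(v, float("inf")):
                cost[v] = new_cost
                backpointers[v] = u
                heapq.heappush(heap, (new_cost, v))
    return cost, backpointers

# An assumed example graph:
g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2), ("d", 6)], "c": [("d", 3)], "d": []}
costs, back = dijkstra(g, "a")
print(costs)   # shortest-path costs from "a"
```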

  25. Compare and contrast: Dijkstra vs Prim. Pseudocode for Prim's algorithm (note the only change: newCost no longer accumulates cost(current)):

def prim(start):
    backpointers = new SomeDictionary<Vertex, Vertex>()
    for (v : vertices):
        set cost(v) to infinity
    set cost(start) to 0
    while (we still have unvisited nodes):
        current = get next smallest node
        for (edge : current.getOutEdges()):
            newCost = min(edge.cost, cost(edge.dst))
            if (newCost < cost(edge.dst)):
                update cost(edge.dst) to newCost
                backpointers.put(edge.dst, edge.src)
    return backpointers
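The same heap-based sketch adapts to Prim with exactly the change the pseudocode highlights: the key for a vertex is a single edge weight rather than an accumulated path cost. Again an assumed Python rendering, using an undirected adjacency list:

```python
import heapq

def prim(graph, start):
    """graph: undirected, {u: [(v, weight), ...]}. Returns MST edges (u, v, w)."""
    visited = {start}
    mst = []
    heap = [(w, start, v) for v, w in graph[start]]
    heapq.heapify(heap)
    while heap and len(visited) < len(graph):
        w, u, v = heapq.heappop(heap)   # cheapest edge leaving the tree
        if v in visited:
            continue
        visited.add(v)
        mst.append((u, v, w))
        for x, wx in graph[v]:
            if x not in visited:
                heapq.heappush(heap, (wx, v, x))  # key is the edge weight alone
    return mst

# An assumed example graph:
g = {
    "a": [("b", 2), ("c", 3)],
    "b": [("a", 2), ("c", 1)],
    "c": [("a", 3), ("b", 1)],
}
print(sum(w for _, _, w in prim(g, "a")))   # total MST weight
```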

  26-44. Prim's algorithm: an example. [Figure: a weighted graph on vertices a through i; each slide shows the current vertex costs, with visited nodes highlighted.]
◮ We initially set all costs to ∞, just like with Dijkstra.
◮ We pick an arbitrary node to start; its cost becomes 0.
◮ We update the adjacent nodes.
◮ We select the one with the smallest cost.
◮ We potentially need to update h and c, but only c changes.
◮ We (arbitrarily) pick c...
◮ ...and update the adjacent nodes. Note that we don't add the cumulative cost: the cost represents the shortest edge to any visited node, not the distance to the start.
◮ i has the smallest cost.
◮ We update both unvisited nodes, and modify the edge to h since we now have a better option.
◮ f has the smallest cost.
◮ Again, we update the adjacent unvisited nodes.
◮ g has the smallest cost.
◮ We update h again.
◮ h has the smallest cost. Note that there is nothing to update here.
◮ d has the smallest cost.
◮ We can update e.
◮ e has the smallest cost.
◮ There are no more nodes left, so we're done. The final costs are a = 0, b = 4, c = 8, d = 7, e = 9, f = 4, g = 2, h = 1, i = 2.
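As a sanity check on the trace above, the example graph's edges can be reconstructed from the figure's weights and the step-by-step cost updates (treat this edge list as an assumption). The final costs sum to 4 + 8 + 7 + 9 + 4 + 2 + 1 + 2 = 37, and running Prim's algorithm over the reconstructed graph reproduces that total:

```python
import heapq

# Edge list reconstructed from the figure and the trace (an assumption):
edges = [("a", "b", 4), ("a", "h", 8), ("b", "h", 11), ("b", "c", 8),
         ("c", "i", 2), ("c", "f", 4), ("c", "d", 7), ("d", "f", 14),
         ("d", "e", 9), ("e", "f", 10), ("f", "g", 2), ("g", "h", 1),
         ("g", "i", 6), ("h", "i", 7)]

adj = {}
for u, v, w in edges:
    adj.setdefault(u, []).append((v, w))
    adj.setdefault(v, []).append((u, w))

visited, total = {"a"}, 0
heap = [(w, v) for v, w in adj["a"]]
heapq.heapify(heap)
while heap:
    w, v = heapq.heappop(heap)
    if v in visited:
        continue               # stale entry for an already-visited vertex
    visited.add(v)
    total += w                 # each vertex joins the tree at its final cost
    for x, wx in adj[v]:
        if x not in visited:
            heapq.heappush(heap, (wx, x))

print(total)   # 37, matching the sum of the final vertex costs above
```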

  45. Prim's algorithm: another example. Now you try. Start on node a. [Figure: a weighted graph on vertices a through e, with edge weights 1, 1, 2, 2, 3, 4, 4, 5.]

  46. Solution: the final costs are a = 0, b = 2, c = 2, d = 1, e = 1.

  47. Analyzing Prim's algorithm. Question: what is the worst-case asymptotic runtime of Prim's algorithm? Answer: the same as Dijkstra's: O(|V| · t_s + |E| · t_u), where...
◮ t_s = time needed to get the next smallest node
◮ t_u = time needed to update vertex costs
So: O(|V| log(|V|) + |E| log(|V|)) if we stick to data structures we know how to implement; O(|V| log(|V|) + |E|) if we use Fibonacci heaps.

  50. Minimum spanning trees, approach 2. Recap: Prim's algorithm works similarly to Dijkstra's: we start with a single node and "grow" our MST. A second approach: instead of "growing" our MST, we...
◮ Initially place each node into its own MST of size 1, so we start with |V| MSTs in total.
◮ Steadily combine different MSTs together until we have just one left.
◮ How? Loop through every single edge and see if we can use it to join two different MSTs together.
This algorithm is called Kruskal's algorithm.
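For weighted graphs, "see if we can use an edge to join two different MSTs" is usually implemented with a disjoint-set (union-find) structure over edges sorted by weight. The slides' example is unweighted, so this Python sketch of the standard weighted version is an assumption, not the deck's code:

```python
def kruskal(vertices, edges):
    """vertices: iterable of labels; edges: [(w, u, v), ...]. Returns MST edges."""
    parent = {v: v for v in vertices}   # each vertex starts in its own MST

    def find(v):                        # walk to the component's representative
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving keeps trees shallow
            v = parent[v]
        return v

    mst = []
    for w, u, v in sorted(edges):       # consider edges cheapest-first
        ru, rv = find(u), find(v)
        if ru != rv:                    # edge joins two different MSTs: keep it
            parent[ru] = rv
            mst.append((w, u, v))
    return mst

# An assumed example graph:
edges = [(4, "a", "b"), (1, "b", "c"), (3, "a", "c"), (2, "c", "d"), (5, "b", "d")]
mst = kruskal("abcd", edges)
print(sum(w for w, _, _ in mst))   # total weight of the spanning tree
```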

  54-79. Kruskal's algorithm. An example, for unweighted graphs, on vertices a through i. [Figure: an animation; each intermediate MST is shown in a different color, and edges are considered one at a time, joining different MSTs until a single spanning tree remains.]
