What makes a problem hard?
Matvey Soloviev (Cornell University), CS 4820, Summer 2020


  1. What makes a problem hard? Matvey Soloviev (Cornell University), CS 4820, Summer 2020

  2. How fast can we solve a problem? For any computational problem like sorting or maxflow, there are multiple algorithms. People keep coming up with new ones. (Simpler, more interesting, and in particular faster.)

  3. Examples of different algorithms
     Sorting: Bubblesort in Θ(n²), mergesort in Θ(n log n), and many others.
     Shortest paths: Dijkstra ('56) in Θ((n + m) log n), Fredman-Tarjan ('84) in Θ(m + n log n), ...
     Maxflow: Ford-Fulkerson in Θ(m·|f*|). Edmonds-Karp in Θ(nm²). Dinic's in Θ(n²m). (You'll see this in CS 6820.) State of the art is O(nm) (Orlin 2013(!) + King-Rao-Tarjan '94).
     (A small code sketch contrasting two of the sorting algorithms follows this slide.)
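
     As an illustration of "multiple algorithms for the same problem" (my sketch, not from the slides; the array contents and the use of the C library's qsort are made up for the example), here are two ways to sort the same data: a handwritten bubblesort doing O(n²) comparisons, and qsort, which typical implementations run in O(n log n).

        /* Sketch only: two sorting algorithms with different running times. */
        #include <stdio.h>
        #include <stdlib.h>

        /* O(n^2): repeatedly swap adjacent out-of-order elements. */
        static void bubblesort(int *a, int n) {
            for (int i = 0; i < n; ++i)
                for (int j = 0; j + 1 < n - i; ++j)
                    if (a[j] > a[j + 1]) {
                        int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                    }
        }

        /* Comparator for qsort: ascending order of ints. */
        static int cmp_int(const void *p, const void *q) {
            int x = *(const int *)p, y = *(const int *)q;
            return (x > y) - (x < y);
        }

        int main(void) {
            int a[] = {5, 2, 4, 6, 1, 3}, b[] = {5, 2, 4, 6, 1, 3};
            int n = (int)(sizeof a / sizeof a[0]);

            bubblesort(a, n);                   /* O(n^2) */
            qsort(b, n, sizeof b[0], cmp_int);  /* typically O(n log n) */

            for (int i = 0; i < n; ++i)
                printf("%d %d\n", a[i], b[i]);  /* both columns come out sorted */
            return 0;
        }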

  4. When to give up? Is there some point at which we can be confident that we have found the fastest algorithm there is for a problem, and there is no point in looking for a better one?

  5. When to give up? (2)
     If yes, it seems reasonable to say that we'd have bumped into something like the intrinsic hardness of the problem.
     "You can't solve maxflow faster than in Θ(nm) time because Θ(nm) is just a measure of how hard maxflow actually is."
     "No matter how you go about solving your maxflow instance, at some point you have to put in Θ(nm) work."
     An almost physical statement: compare "if you want to get a satellite into orbit..."

  6. Tight lower bounds are rare
     There are very few instances where we know that an algorithm we have for a problem is optimal. (Sorting is one of those, for a cool reason: Stirling's formula says log(n!) = Θ(n log n); the argument is spelled out in a note after this slide.)
     How would we go about learning anything about the hardness of a general problem?
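
     A worked note (the standard comparison-sort argument, spelled out here rather than on the slide): a comparison-based sort must distinguish all n! possible input orderings, so its decision tree has at least n! leaves and therefore depth at least log₂(n!). An elementary estimate (or Stirling's formula) then gives the bound quoted on the slide:

        \log_2(n!) = \sum_{i=1}^{n} \log_2 i \;\le\; n \log_2 n,
        \qquad
        \log_2(n!) \;\ge\; \sum_{i=\lceil n/2 \rceil}^{n} \log_2 i \;\ge\; \frac{n}{2} \log_2 \frac{n}{2},
        \qquad\text{hence}\qquad
        \log(n!) = \Theta(n \log n).

     Since mergesort already sorts with O(n log n) comparisons, it is optimal up to constant factors.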

  7. Simple fact about lower bounds
     Well, for starters: if we do have an algorithm solving a problem in O(f) time, we know that the problem can't be any harder than that.
     Edmonds-Karp solves maxflow in Θ(nm²). Therefore, a statement like "maxflow takes at least Θ(n²m²) time to solve" is plainly wrong. (A one-line check follows this slide.)
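
     A quick sanity check of that last claim (my arithmetic, not on the slide):

        \frac{n m^2}{n^2 m^2} = \frac{1}{n} \longrightarrow 0 \quad (n \to \infty),
        \qquad\text{so}\qquad n m^2 = o(n^2 m^2).

     In other words, Edmonds-Karp already beats the alleged n²m² bound on all large enough instances, so that lower bound cannot be true.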

  8. Algorithms calling algorithms
     Often, an algorithm we write to solve one problem will, in the course of its execution, create an instance of another problem, call out to an algorithm (a subroutine) to solve that subproblem, and do something with the result.
     Sort some array before doing something with it.
     For some scheduling problem, construct a flow graph and call a maxflow algorithm. Compute a schedule from the maximum flow and return it.

  9. Algorithms calling algorithms: Example
     Imagine we have to solve the problem of scheduling widgets, and the following algorithm is provably correct:

        // Input: n widgets to schedule

        sort(widgets);                            // O(n log n)

        for (int i = 0; i < n; ++i) {
            G = createFlowNetwork(widgets, i);    // O(n^2)
            // G has n vertices and n^2 edges
            int f = maxflow(G);                   // O(?)
            maxima[i] = f;                        // O(1)
        }

        schedule = scheduleFromMaxima(maxima);    // O(n)
        // Output: a schedule for the n widgets

  10. Analysing the example (1)
      What's the time complexity of this algorithm? And what does it say about the hardness of Widget Scheduling?
      Well, we do Θ(n log n + n) work outside the loop, Θ(n²) work to create a flow network in each of the n iterations of the loop, and we also call an algorithm for maxflow (on n vertices and n² edges) n times. (These totals are added up after this slide.)
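
      Adding those pieces up (a worked version of the slide's count; as on the next slide, f(n, m) denotes the running time of whichever maxflow algorithm we call):

        \underbrace{\Theta(n \log n + n)}_{\text{sort + build schedule}}
        \;+\; n \cdot \Bigl( \underbrace{\Theta(n^2)}_{\text{createFlowNetwork}}
        \;+\; \underbrace{f(n, n^2)}_{\text{maxflow}}
        \;+\; \underbrace{\Theta(1)}_{\text{store result}} \Bigr)
        \;=\; \Theta\bigl( n^3 + n \cdot f(n, n^2) \bigr),

      since the Θ(n³) spent building the n flow networks dominates every term except possibly the n maxflow calls.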

  11. Analysing the example (2)
      So the complexity depends on the complexity of our maxflow algorithm: in general, it's Θ(n³ + n·f(n, n²)), where f(n, m) is the complexity of the maxflow algorithm.
      Plug in Edmonds-Karp (where f(n, m) = Θ(nm²)), and we get Θ(n⁶).
      Plug in Dinic's (f(n, m) = Θ(n²m)), and we get Θ(n⁵).
      The state-of-the-art algorithm gives O(n⁴). (The arithmetic is written out after this slide.)
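
      The plug-in arithmetic, written out (using m = n², since each constructed flow network has n vertices and n² edges):

        \text{Edmonds-Karp: } f(n, n^2) = \Theta\bigl(n (n^2)^2\bigr) = \Theta(n^5)
        \;\Rightarrow\; \Theta(n^3 + n \cdot n^5) = \Theta(n^6);
        \quad
        \text{Dinic's: } f(n, n^2) = \Theta\bigl(n^2 \cdot n^2\bigr) = \Theta(n^4)
        \;\Rightarrow\; \Theta(n^5);
        \quad
        \text{Orlin + King-Rao-Tarjan: } f(n, n^2) = O(n \cdot n^2) = O(n^3)
        \;\Rightarrow\; O(n^4).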
