Heng Guo (University of Edinburgh)


1. Unreliability. Nonetheless, naive Monte Carlo (NMC) is the basic building block of the FPRAS by Karger (1999) for Unreliability (namely 1 − Z_rel). Karger's algorithm has subsequently been refined by Harris and Srinivasan (2014) and by Karger (2016, 2020). The most recent version is a recursive algorithm using NMC, running in O(n^{2.87}):
• Run NMC if p^c > 1/2, where c is the size of the min-cut.
• Otherwise, draw subgraphs H_1, H_2 ∼ G(q) where q = 2^{−1/c} > p; then (1/2)(Z_unrel(H_1, p/q) + Z_unrel(H_2, p/q)) is an unbiased estimator of Z_unrel(G, p).
• Recursively estimate Z_unrel(H_i, p/q) for i = 1, 2.
Similar ideas, once again, fail on graphs as simple as a path for Z_rel.
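To make the NMC building block concrete, here is a minimal Python sketch of the plain naive Monte Carlo estimator for Z_unrel (not Karger's recursive algorithm); it assumes the networkx library is available and that the graph is given as an edge list.

```python
import random
import networkx as nx

def nmc_unreliability(edges, p, samples=10_000, rng=random):
    """Naive Monte Carlo estimate of Z_unrel(G, p): the probability that G
    becomes disconnected when every edge fails independently with prob. p."""
    nodes = {u for e in edges for u in e}
    hits = 0
    for _ in range(samples):
        H = nx.Graph()
        H.add_nodes_from(nodes)
        # an edge survives with probability 1 - p
        H.add_edges_from(e for e in edges if rng.random() >= p)
        if not nx.is_connected(H):
            hits += 1
    return hits / samples

# Example: a 4-cycle with rather unreliable edges.
print(nmc_unreliability([(0, 1), (1, 2), (2, 3), (3, 0)], p=0.4))
```

As the slide notes, this plain estimator is only accurate in relative terms when Z_unrel is not too small, which is why the recursive algorithm switches on the condition p^c > 1/2.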

3. Reducing counting to sampling (Jerrum, Valiant, and Vazirani, 1986). Let π_G(·) be the product distribution over the edges, conditioned on the resulting graph being connected. We can approximate Z_rel using an oracle drawing from π_G, via a sequence of graphs G = G_0 ⇒ G_1 ⇒ G_2 ⇒ G_3.

8. Reducing counting to sampling, continued. With the sequence G = G_0 ⇒ G_1 ⇒ G_2 ⇒ G_3 above, rewrite Z_rel(G) = (Z_rel(G_0)/Z_rel(G_1)) · (Z_rel(G_1)/Z_rel(G_2)) · (Z_rel(G_2)/Z_rel(G_3)) · Z_rel(G_3).

9. Reducing counting to sampling, continued. To estimate the ratio Z_rel(G_i)/Z_rel(G_{i+1}), draw C ∼ π_{G_{i+1}}(·) and set C′ := C with prob. p, and C′ := C ∪ {e} otherwise; let X := 1[C′ is connected in G_i]. Then E X = Z_rel(G_i)/Z_rel(G_{i+1}), and its variance is bounded by a polynomial.
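A hedged sketch of one such ratio estimate, written against an assumed sampling oracle sample_pi(H) for π_H; the oracle, the edge e relating G_i to G_{i+1}, and the graphs are all supplied by the caller, and none of these names come from the slides.

```python
import random
import networkx as nx

def ratio_estimate(G_i, G_next, e, p, sample_pi, samples=10_000, rng=random):
    """Monte Carlo estimate of Z_rel(G_i) / Z_rel(G_{i+1}).

    sample_pi(H) is an assumed oracle returning an edge set drawn from pi_H,
    i.e. the product distribution (each edge present with prob. 1 - p)
    conditioned on the resulting graph being connected.
    """
    total = 0
    for _ in range(samples):
        C = set(sample_pi(G_next))
        # lift the sample back to G_i: leave e absent with prob. p, add it otherwise
        C_prime = C if rng.random() < p else C | {e}
        H = nx.Graph()
        H.add_nodes_from(G_i.nodes)
        H.add_edges_from(C_prime)
        total += int(nx.is_connected(H))
    return total / samples
```

Multiplying such estimates along the telescoping product above (together with the final, easy factor Z_rel(G_3)) yields the approximation of Z_rel.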

10. Markov chain Monte Carlo. There is a natural Markov chain converging to π_G(·):
1. Let C_0 = E.
2. Given C_t, pick an edge e ∈ E uniformly at random. If C_t \ {e} is disconnected, then C_{t+1} = C_t. Otherwise, C_{t+1} = C_t ∪ {e} with prob. 1 − p, and C_{t+1} = C_t \ {e} with prob. p.
Unfortunately, nothing is known about its mixing time (rate of convergence).
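The dynamics are easy to simulate even though the mixing time is open; a minimal sketch, assuming networkx is available:

```python
import random
import networkx as nx

def chain_step(G, C, p, rng=random):
    """One step of the single-edge Markov chain targeting pi_G."""
    e = tuple(sorted(rng.choice(list(G.edges))))
    H = nx.Graph()
    H.add_nodes_from(G.nodes)
    H.add_edges_from(C - {e})
    if not nx.is_connected(H):
        return C                              # dropping e would disconnect: stay
    return C | {e} if rng.random() < 1 - p else C - {e}

# Run the chain from C_0 = E on a 5-cycle.
G = nx.cycle_graph(5)
C = {tuple(sorted(e)) for e in G.edges}
for _ in range(1000):
    C = chain_step(G, C, p=0.3)
print(sorted(C))
```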

11. A surprising equivalence

12. Reachability. We say a directed graph D with root r is root-connected if every vertex can reach r. (Figure: three small digraphs; the first two are root-connected, the third is not.) Reachability: in a directed graph D = (V, A) with root r, what is the probability that D(p) is root-connected? Z_reach(D, p) := Σ_{R ⊆ A : (V, R) is root-connected} p^{|A \ R|} (1 − p)^{|R|}.

13. A surprising equivalence. Ball (1980) showed that for any undirected graph G = (V, E), Z_rel(G, p) = Z_reach(→G, p), where →G is the directed graph obtained by replacing every e ∈ E with a pair of anti-parallel arcs (such graphs are called bi-directed). Thus we just need to approximate Reachability in bi-directed graphs.
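On tiny instances the equivalence can be sanity-checked by brute force; the following self-contained sketch enumerates all subgraphs, so it is exponential-time and for illustration only.

```python
from itertools import combinations

def connected(nodes, edge_subset):
    adj = {v: set() for v in nodes}
    for u, v in edge_subset:
        adj[u].add(v)
        adj[v].add(u)
    seen, stack = set(), [next(iter(nodes))]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v])
    return seen == set(nodes)

def root_connected(nodes, arc_subset, r):
    # every vertex reaches r  <=>  every vertex is backward-reachable from r
    radj = {v: set() for v in nodes}
    for u, v in arc_subset:
        radj[v].add(u)
    seen, stack = set(), [r]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(radj[v])
    return seen == set(nodes)

def z_rel(nodes, edges, p):
    return sum((1 - p) ** k * p ** (len(edges) - k)
               for k in range(len(edges) + 1)
               for S in combinations(edges, k) if connected(nodes, S))

def z_reach(nodes, arcs, r, p):
    return sum((1 - p) ** k * p ** (len(arcs) - k)
               for k in range(len(arcs) + 1)
               for R in combinations(arcs, k) if root_connected(nodes, R, r))

# Ball's equivalence on a triangle with a pendant vertex.
nodes = {0, 1, 2, 3}
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
arcs = edges + [(v, u) for (u, v) in edges]        # the bi-directed version
print(z_rel(nodes, edges, 0.3), z_reach(nodes, arcs, 0, 0.3))  # should agree
```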

14. A coupling proof. We have an alternative coupling proof of Ball's equivalence: there is a coupling C under which G(p) is connected ⇔ →G(p) is root-connected. Explore G and →G like a BFS, starting from r. Reveal →G(p) and G(p) as the process proceeds, coupling the arc going towards the current vertex in →G(p) with the corresponding edge in G(p). (Figure: the two explorations side by side, both currently at a vertex u.) When both exploration processes end, the sets of vertices that can reach r are exactly the same.

17. Cluster-popping. Goal: sample uniform (or edge-weighted) root-connected subgraphs. Gorodezky and Pak (2014) proposed the "cluster-popping" algorithm. (A cluster is a subset of vertices not including r and with no arc going out.)
1. Let R be a subset of arcs obtained by choosing each arc e independently with probability 1 − p.
2. While there is at least one cluster in (V, R):
• Let C_1, …, C_k be all minimal clusters in (V, R), and C = ∪_{i=1}^{k} C_i.
• Re-randomize all arcs whose heads are in C to get a new R.
Gorodezky and Pak (2014) showed that this algorithm draws from the correct distribution, and they conjectured that cluster-popping runs in expected polynomial time on bi-directed graphs.
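A hedged Python sketch of cluster-popping as stated above, assuming networkx is available; minimal clusters are located as the sink components of the strongly-connected-component condensation that avoid r, which coincides with the definition of a minimal cluster.

```python
import random
import networkx as nx

def minimal_clusters(nodes, R, r):
    """Minimal clusters of (V, R): sink strongly connected components of the
    condensation that do not contain the root r."""
    D = nx.DiGraph()
    D.add_nodes_from(nodes)
    D.add_edges_from(R)
    cond = nx.condensation(D)          # DAG whose nodes are the SCCs
    return [cond.nodes[c]["members"]
            for c in cond.nodes
            if cond.out_degree(c) == 0 and r not in cond.nodes[c]["members"]]

def cluster_popping(nodes, arcs, r, p, rng=random):
    """Sample a root-connected subgraph, each arc present with weight 1 - p."""
    R = {a for a in arcs if rng.random() < 1 - p}          # step 1
    while True:
        clusters = minimal_clusters(nodes, R, r)
        if not clusters:
            return R                                       # root-connected: done
        C = set().union(*clusters)
        # re-randomize every arc whose head lies in a minimal cluster
        R = ({a for a in R if a[1] not in C}
             | {a for a in arcs if a[1] in C and rng.random() < 1 - p})

# Example: bi-directed triangle rooted at 0.
edges = [(0, 1), (1, 2), (0, 2)]
arcs = edges + [(v, u) for (u, v) in edges]
print(sorted(cluster_popping({0, 1, 2}, arcs, r=0, p=0.4)))
```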

22. An example run. Cluster-popping: repeatedly resample minimal clusters. (Animation: an example run on a small bi-directed graph rooted at r, exploring left to right, bottom to top; the final root-connected subgraph is then mapped back to a connected subgraph.)

59. Partial rejection sampling (a general method beyond cluster-popping)

60. Partial rejection sampling. Cluster-popping falls into the partial rejection sampling framework (G., Jerrum, and Liu, 2017). The goal is to sample from a product distribution, conditioned on a number of "bad" events not happening. Rejection sampling throws away all variables. Instead, we want to recycle some randomness while resampling the "bad" events (and hopefully not too much more).
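A minimal sketch of the generic resampling loop, with caller-supplied variable samplers and bad-event predicates; the toy events at the end are hypothetical and chosen to be extremal (they live on disjoint variable sets), which is the regime where this exact loop is known to be correct.

```python
import random

def partial_rejection_sampling(samplers, bad_events):
    """Generic partial rejection sampling loop.

    samplers:   dict mapping each variable to a zero-argument sampler.
    bad_events: list of (predicate, variables) pairs; predicate(x) returns
                True if the bad event occurs under the assignment x.
    """
    x = {v: draw() for v, draw in samplers.items()}        # full initial sample
    while True:
        occurring = [vs for pred, vs in bad_events if pred(x)]
        if not occurring:
            return x                                       # no bad event: output
        for v in set().union(*occurring):                  # resample only these
            x[v] = samplers[v]()

# Toy extremal instance: four fair bits, bad events "x1 = x2 = 0" and
# "x3 = x4 = 0" (defined on disjoint variables, hence independent).
samplers = {i: (lambda: random.randint(0, 1)) for i in range(1, 5)}
bad_events = [(lambda x: x[1] == 0 and x[2] == 0, {1, 2}),
              (lambda x: x[3] == 0 and x[4] == 0, {3, 4})]
print(partial_rejection_sampling(samplers, bad_events))
```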

61. Partial rejection sampling. Cluster-popping under partial rejection sampling: arcs are the variables; minimal clusters are the "bad" events. (Figure: a digraph rooted at r.) There can be exponentially many bad events.

62. Extremal instances. An instance is called extremal (in the sense of Shearer (1985), regarding the non-uniform Lovász Local Lemma) if any two "bad" events A_i and A_j are either independent or disjoint. If the instance is extremal, then eliminating precisely the occurring "bad" events in each iteration yields the correct distribution once the algorithm halts (GJL '17)!

67. Resampling table. Associate an infinite stack X_{i,0}, X_{i,1}, … with each random variable X_i. When we need to resample X_i, draw the next value in its stack. (Table: rows for X_1, …, X_4, each listing its pre-drawn values X_{i,0}, X_{i,1}, X_{i,2}, ….)
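The resampling table is primarily an analysis device, but it is easy to realize lazily; a small sketch, with per-variable samplers supplied by the caller:

```python
import random

class ResamplingTable:
    """Lazily materialized table X[i, j]: the j-th value ever drawn for
    variable i. Resampling a variable simply advances its pointer."""

    def __init__(self, samplers):
        self.samplers = samplers                 # dict: variable -> sampler()
        self.rows = {v: [] for v in samplers}    # values drawn so far
        self.ptr = {v: 0 for v in samplers}      # current position per variable

    def value(self, v):
        while len(self.rows[v]) <= self.ptr[v]:
            self.rows[v].append(self.samplers[v]())
        return self.rows[v][self.ptr[v]]

    def resample(self, v):
        self.ptr[v] += 1                         # move on to X_{v, j+1}
        return self.value(v)

table = ResamplingTable({i: random.random for i in range(1, 5)})
print([table.value(i) for i in range(1, 5)])
print(table.resample(2), table.value(2))
```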

73. Change the future, not the history. For extremal instances, replacing a perfect assignment with another one does not change the resampling history! For any output σ and τ, there is a bijection between trajectories leading to σ and τ. (Illustration: a resampling table in which the output entries are replaced by primed values X′_{i,j}, while the earlier entries, including those hit by bad events A_1 and A_2, stay the same.)

77. Partial rejection sampling vs. Markov chains. A Markov chain is a random walk in the solution space. (The solution space has to be connected, and the mixing time is not easy to analyze.)

78. Partial rejection sampling vs. Markov chains. PRS is a local search on the whole space, moving from an arbitrary assignment σ towards a perfect assignment τ. Ergodicity is not an issue; correctness is guaranteed by the bijection, and there is an exact formula for its running time on extremal instances.

81. Run-time analysis. Theorem (G., Jerrum, and Liu, 2017). Under Shearer's condition, for extremal instances, E T = (total weight of one-flaw assignments) / (total weight of perfect assignments). (Shearer (1985) gave a sufficient condition guaranteeing the existence of a perfect assignment, which is optimal for the Lovász Local Lemma.) The upper bound was also shown by Kolipaka and Szegedy (2011).

82. Back to cluster-popping. Cluster-popping: repeatedly resample minimal clusters. Let Ω_k be the set of subgraphs with exactly k minimal clusters, and Z_k := Σ_{S ∈ Ω_k} p^{|E \ S|} (1 − p)^{|S|}. Then E T = Z_1 / Z_0. Lemma (G. and Jerrum, 2018). For bi-directed graphs, Z_1 ⩽ (p/(1 − p)) · mn · Z_0. We show this by designing an injective mapping Ω_1 → Ω_0 × V × E.
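On a tiny bi-directed graph, both E T = Z_1/Z_0 and the lemma can be checked by exhaustive enumeration; a self-contained sketch (m is taken here to be the number of arcs):

```python
from itertools import combinations

def clusters(nodes, R, r):
    """All clusters of (V, R): nonempty vertex sets avoiding r with no out-arc."""
    others = [v for v in nodes if v != r]
    found = []
    for k in range(1, len(others) + 1):
        for C in combinations(others, k):
            C = set(C)
            if not any(u in C and v not in C for (u, v) in R):
                found.append(C)
    return found

def num_minimal_clusters(nodes, R, r):
    cs = clusters(nodes, R, r)
    return sum(1 for C in cs if not any(D < C for D in cs))   # no proper sub-cluster

def z_by_flaws(nodes, arcs, r, p):
    """Z_k for every k, by enumerating all arc subsets (tiny graphs only)."""
    zs = {}
    for k in range(len(arcs) + 1):
        for R in combinations(arcs, k):
            j = num_minimal_clusters(nodes, set(R), r)
            zs[j] = zs.get(j, 0.0) + (1 - p) ** k * p ** (len(arcs) - k)
    return zs

# Bi-directed triangle rooted at 0.
nodes = {0, 1, 2}
edges = [(0, 1), (1, 2), (0, 2)]
arcs = edges + [(v, u) for (u, v) in edges]
p, m, n = 0.4, len(arcs), len(nodes)
zs = z_by_flaws(nodes, arcs, 0, p)
print("E[T] = Z_1/Z_0 =", zs.get(1, 0.0) / zs[0])
print("lemma bound    =", p / (1 - p) * m * n)
```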
