Permanent estimators via random matrices


  1. Permanent estimators via random matrices. Mark Rudelson, joint work with Ofer Zeitouni. Department of Mathematics, University of Michigan.

  2–5. Permanent of a matrix

Let $A$ be an $n \times n$ matrix with $a_{i,j} \ge 0$.

Permanent of $A$:
$$\mathrm{perm}(A) = \sum_{\pi \in \Pi_n} \prod_{j=1}^{n} a_{j,\pi(j)}.$$

Determinant of $A$:
$$\det(A) = \sum_{\pi \in \Pi_n} \mathrm{sign}(\pi) \prod_{j=1}^{n} a_{j,\pi(j)}.$$

Evaluation of determinants is fast: use, e.g., triangularization by Gaussian elimination. Kaltofen–Villard algorithm: running time $O(n^{2.7})$.

Evaluation of permanents is #P-complete (Valiant 1979): if there exists a polynomial-time algorithm for permanent evaluation, then any #P problem can be solved in polynomial time. Fast computation ⇒ P = NP.
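The contrast between the two formulas shows up directly in code: the permanent, computed straight from the definition, sums over all $n!$ permutations, while the determinant is available through standard polynomial-time routines. Below is a minimal illustrative Python sketch (ours, not from the talk; the function name perm_bruteforce is made up for this example).

```python
# Brute-force permanent straight from the definition vs. a fast determinant.
import itertools
import numpy as np

def perm_bruteforce(A):
    """Permanent of a square matrix via the definition: n! terms, so exponential time."""
    n = A.shape[0]
    return sum(
        np.prod([A[j, pi[j]] for j in range(n)])
        for pi in itertools.permutations(range(n))
    )

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(perm_bruteforce(A))   # 1*4 + 2*3 = 10
print(np.linalg.det(A))     # 1*4 - 2*3 = -2, computed in polynomial time
```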

  6–9. Applications of permanents: perfect matchings

Let $\Gamma = (L, R, E)$ be an $n \times n$ bipartite graph. A perfect matching is a bijection $\tau : L \to R$ such that $(l, \tau(l)) \in E$ for all $l \in L$.

#(perfect matchings) $= \mathrm{perm}(A)$, where $A$ is the adjacency matrix of the graph: $a_{i,j} = 1$ if $i \to j$, and $a_{i,j} = 0$ otherwise.
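For concreteness, here is a small worked example that we add (not from the slides). Take the $3 \times 3$ bipartite graph in which left vertex $i$ is joined to right vertices $i$ and $i+1$ (indices mod 3), i.e. a 6-cycle. Its adjacency matrix and permanent are
$$A = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \end{pmatrix}, \qquad \mathrm{perm}(A) = a_{1,1}a_{2,2}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} = 2;$$
every other permutation picks up a zero entry. Indeed, the 6-cycle has exactly two perfect matchings: the identity matching and the shift by one.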

  10–11. Deterministic bounds

Linial–Samorodnitsky–Wigderson algorithm: if $\mathrm{perm}(A) > 0$, then one can find in polynomial time diagonal matrices $D$, $D'$ such that the renormalized matrix $A' = D'AD$ is almost doubly stochastic:
$$1 - \varepsilon < \sum_{i=1}^{n} a'_{i,j} < 1 + \varepsilon \quad \text{for all } j = 1, \dots, n,$$
$$1 - \varepsilon < \sum_{j=1}^{n} a'_{i,j} < 1 + \varepsilon \quad \text{for all } i = 1, \dots, n.$$

Moreover, $\mathrm{perm}(A) = \prod_{i=1}^{n} d_i \cdot \prod_{j=1}^{n} d'_j \cdot \mathrm{perm}(A')$.
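The scaling matrices can be found by alternately normalizing rows and columns (Sinkhorn-style iteration). The sketch below illustrates that idea in Python; it is our illustration of the rescaling step, not the LSW algorithm verbatim, and the function name sinkhorn_scale is made up.

```python
# Alternating row/column normalization that drives D'AD toward doubly stochastic form.
import numpy as np

def sinkhorn_scale(A, iters=1000):
    """Return (d, d_prime, A_scaled) with A_scaled = diag(d_prime) @ A @ diag(d)
    approximately doubly stochastic. Assumes A >= 0 entrywise and perm(A) > 0."""
    n = A.shape[0]
    d = np.ones(n)         # column scaling (diagonal of D)
    d_prime = np.ones(n)   # row scaling (diagonal of D')
    for _ in range(iters):
        B = d_prime[:, None] * A * d[None, :]
        d_prime /= B.sum(axis=1)           # make every row sum equal to 1
        B = d_prime[:, None] * A * d[None, :]
        d /= B.sum(axis=0)                 # make every column sum equal to 1
    return d, d_prime, d_prime[:, None] * A * d[None, :]

# perm(A) is then recovered from perm(A_scaled) and the products of the scaling
# factors, as in the identity on the slide above.
```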

  12–13. Deterministic bounds (continued)

The Linial–Samorodnitsky–Wigderson algorithm reduces permanent estimation to almost doubly stochastic matrices.

Van der Waerden conjecture, proved by Falikman and Egorychev: if $A$ is doubly stochastic, then
$$1 \ge \mathrm{perm}(A) \ge \frac{n!}{n^n} \approx e^{-n}.$$
Hence the Linial–Samorodnitsky–Wigderson algorithm estimates the permanent with multiplicative error at most $e^n$.

Bregman's theorem (1973) implies that if $A$ is doubly stochastic and $\max a_{i,j} \le t \cdot \min a_{i,j}$, then $\mathrm{perm}(A) \le e^{-n} \cdot n^{O(t^2)}$.

Conclusion: if $\max a_{i,j} \le t \cdot \min a_{i,j}$, then the Linial–Samorodnitsky–Wigderson algorithm yields a multiplicative error of $n^{O(t^2)}$. This does not cover matrices with zero entries.
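Where the $e^{-n}$ comes from (a one-line check we add): by Stirling's formula $n! \sim \sqrt{2\pi n}\,(n/e)^n$, so
$$\frac{n!}{n^n} \sim \sqrt{2\pi n}\; e^{-n},$$
i.e. $e^{-n}$ up to a polynomial factor. Combining the two-sided bound $1 \ge \mathrm{perm}(A') \ge n!/n^n$ for the rescaled matrix with the scaling identity above gives the stated $e^n$ multiplicative error.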

  14–18. Probabilistic estimates

Jerrum–Sinclair–Vigoda algorithm: estimates the permanent of any matrix with constant multiplicative error with high probability. Deficiency: the running time is $O(n^{10})$.

Godsil–Gutman estimator: let $A^{1/2}$ be the matrix with entries $a_{i,j}^{1/2}$, and let $R$ be an $n \times n$ random matrix with i.i.d. $\pm 1$ entries. Form the Hadamard product $R \odot A^{1/2}$, with entries $w_{i,j} = \sqrt{a_{i,j}} \cdot r_{i,j}$. Then
$$\mathrm{perm}(A) = \mathbb{E} \det^2(R \odot A^{1/2}).$$
Estimator: $\mathrm{perm}(A) \approx \det^2(R \odot A^{1/2})$.

Advantage: the Godsil–Gutman estimator is faster than any other algorithm.

Deficiency: it performs well for "generic" matrices, but fails for large classes of $\{0,1\}$ matrices because of arithmetic issues.
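A minimal Python sketch of one draw of the estimator, together with an averaged run as a sanity check (the code, the function name godsil_gutman, and the example matrix are ours, not from the talk):

```python
# One sample of the Godsil-Gutman estimator: det^2 of a random signing of A^{1/2}.
import numpy as np

def godsil_gutman(A, rng=None):
    """Return det^2(R ⊙ A^{1/2}) for a fresh matrix R of i.i.d. ±1 signs.
    Its expectation over R equals perm(A) for entrywise nonnegative A."""
    rng = np.random.default_rng() if rng is None else rng
    R = rng.choice([-1.0, 1.0], size=A.shape)   # i.i.d. Rademacher signs
    W = R * np.sqrt(A)                           # Hadamard product R ⊙ A^{1/2}
    return np.linalg.det(W) ** 2

# Averaging many samples for the 6-cycle adjacency matrix from the example above:
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
samples = [godsil_gutman(A, rng) for _ in range(2000)]
print(np.mean(samples))   # fluctuates around perm(A) = 2
```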

  19–21. Barvinok's estimator

Let $A^{1/2}$ be the matrix with entries $a_{i,j}^{1/2}$, and let $G$ be an $n \times n$ random matrix with i.i.d. $N(0,1)$ entries. Form the Hadamard product $G \odot A^{1/2}$. Then
$$\mathrm{perm}(A) = \mathbb{E} \det^2(G \odot A^{1/2}).$$
Estimator: $\mathrm{perm}(A) \approx \det^2(G \odot A^{1/2})$.

Barvinok's estimator has no arithmetic issues.

Theorem (Barvinok). Let $A$ be any $n \times n$ matrix. Then, with probability $1 - \delta$,
$$\big((1 - \varepsilon)\,\theta\big)^n \,\mathrm{perm}(A) \le \det^2(G \odot A^{1/2}) \le C\, \mathrm{perm}(A),$$
where $C$ is an absolute constant and $\theta = 0.28$ for real Gaussian matrices; $\theta = 0.56$ for complex Gaussian matrices; $\theta = 0.76$ for quaternionic Gaussian matrices.
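The Gaussian variant differs from the previous sketch only in the random matrix used; again an illustrative sketch in the same style (the function name barvinok is ours):

```python
# One sample of Barvinok's estimator: det^2 with i.i.d. N(0,1) entries instead of ±1 signs.
import numpy as np

def barvinok(A, rng=None):
    """Return det^2(G ⊙ A^{1/2}) for a fresh standard Gaussian matrix G.
    E[det^2] = perm(A); by the theorem above, a single sample already
    estimates perm(A) up to the factor ((1 - eps) * theta)^n."""
    rng = np.random.default_rng() if rng is None else rng
    G = rng.standard_normal(A.shape)
    return np.linalg.det(G * np.sqrt(A)) ** 2
```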
