On the use of polynomial matrix approximant in the block Wiedemann - PowerPoint PPT Presentation


  1. On the use of polynomial matrix approximants in the block Wiedemann algorithm
Pascal Giorgi, Symbolic Computation Group, School of Computer Science, University of Waterloo, Canada, and Laboratoire LIP, ENS Lyon (Ecole Normale Supérieure de Lyon), France.
In collaboration with C.-P. Jeannerod and G. Villard, ENS Lyon, France.
CMS/CSHPM Summer 2005 Meeting, Mathematics of Computer Algebra and Analysis, June 6.

  2. Motivations
Large sparse linear systems are involved in many mathematical applications.
Over a field:
◮ integer factorization [Odlyzko 1999],
◮ discrete logarithm [Odlyzko 1999; Thomé 2003].
Over the integers:
◮ number theory [Cohen 1993],
◮ group theory [Newman 1972],
◮ integer programming [Aardal, Hurkens, Lenstra 1999].

  5. Solving sparse systems over finite fields
Matrices arising in practice have dimensions of a few million, with a few million nonzero entries.
The use of classic Gaussian elimination is proscribed due to fill-in
=⇒ algorithms must preserve the sparsity of the matrix.
Iterative methods have proved successful over a finite field:
◮ Krylov/Wiedemann method [Wiedemann 1986],
◮ conjugate gradient [LaMacchia, Odlyzko 1990],
◮ Lanczos method [LaMacchia, Odlyzko 1990; Lambert 1996],
◮ block Lanczos method [Coppersmith 1993; Montgomery 1995],
◮ block Krylov/Wiedemann method [Coppersmith 1994; Thomé 2002].

  10. Scheme of the Krylov/Wiedemann method
Let A ∈ F^{N×N} be of full rank and b ∈ F^N. Then x = A^{-1} b can be expressed as a linear combination of the Krylov subspace {b, Ab, ..., A^N b}.
Let Π_{A,b}(λ) = c_0 + c_1 λ + ... + λ^d ∈ F[λ] be the minimal polynomial of the sequence {A^i b}_{i≥0}. Then
    A · ( −c_0^{-1} (c_1 b + c_2 Ab + ... + A^{d−1} b) ) = b,
and the vector in parentheses is x.
Choose u ∈ F^N uniformly at random and compute the minimal polynomial Π_{u,A,b} of the scalar sequence {u^T A^i b}_{i≥0}. With probability greater than 1 − deg(Π_{A,b}) / Card(F), we have Π_{A,b} = Π_{u,A,b}.
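The identity above can be checked numerically. In the sketch below, the prime p = 101, the matrix A = diag(2, 3) and its hard-coded minimal polynomial are illustrative choices, not from the slides; the code builds x from the Krylov vectors and verifies Ax = b over GF(p).

```python
# A small numeric check of the identity on this slide, over GF(p):
# if Pi_{A,b}(lambda) = c0 + c1*lambda + ... + lambda^d annihilates {A^i b},
# then x = -c0^{-1} (c1*b + c2*Ab + ... + A^{d-1} b) satisfies A x = b.
# The prime, the matrix and its minimal polynomial are illustrative choices.
p = 101
A = [[2, 0], [0, 3]]
b = [1, 1]
# Minimal polynomial of {A^i b}: (x-2)(x-3) = 6 - 5x + x^2, so c = [6, -5, 1].
c = [6, (-5) % p, 1]
d = len(c) - 1

def matvec(M, v):
    """Matrix-vector product mod p."""
    return [sum(M[r][j] * v[j] for j in range(len(v))) % p for r in range(len(M))]

# Krylov vectors b, Ab, ..., A^{d-1} b.
krylov = [b]
for _ in range(d - 1):
    krylov.append(matvec(A, krylov[-1]))

# x = -c0^{-1} * sum_{i=1}^{d} c_i * A^{i-1} b  (with the leading c_d = 1).
inv_c0 = pow(c[0], p - 2, p)            # Fermat inverse of c0 mod p
x = [(-inv_c0 * sum(c[i + 1] * krylov[i][r] for i in range(d))) % p
     for r in range(len(b))]

assert matvec(A, x) == b
print("x =", x)                         # x = [51, 34]
```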

  15. Wiedemann algorithm
Three steps:
1. compute 2N + ε elements of the sequence {u^T A^i b}_{i≥0},
2. compute the minimal polynomial of the scalar sequence,
3. compute the linear combination of {b, Ab, ..., A^{d−1} b}.
Let γ be the cost of applying A to a vector.
Step 1: 2N matrix-vector products + 2N dot products
=⇒ cost 2Nγ + O(N²) field operations.
Step 2: use the Berlekamp/Massey algorithm [Berlekamp 1968; Massey 1969]
=⇒ cost O(N²) field operations.
Step 3: d − 1 matrix-vector products + d − 1 vector operations
=⇒ cost at most Nγ + O(N²) field operations.
Total cost: O(Nγ + N²) field operations, with O(N) additional space.
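Step 2 can be sketched with a textbook Berlekamp/Massey routine over GF(p); the prime and the diagonal test matrix below are illustrative choices, not from the slides. The routine returns the connection polynomial; reversing it gives the minimal polynomial in the c_0 + c_1 λ + ... + λ^d form used above.

```python
# Step 2 as code: Berlekamp/Massey over GF(p), applied to a scalar
# sequence u^T A^i b. The prime p and the matrix A are illustrative.
p = 101

def berlekamp_massey(s):
    """Connection polynomial C with C[0] = 1 and
    s[n] + C[1]*s[n-1] + ... + C[L]*s[n-L] = 0 for all valid n."""
    C, B = [1], [1]          # current and previous connection polynomials
    L, m, bb = 0, 1, 1       # current length, gap since last update, last discrepancy
    for n in range(len(s)):
        delta = s[n]
        for i in range(1, L + 1):
            delta = (delta + C[i] * s[n - i]) % p
        if delta == 0:
            m += 1
        else:
            coef = delta * pow(bb, p - 2, p) % p
            T = C[:]
            while len(C) < len(B) + m:
                C.append(0)
            for i in range(len(B)):
                C[i + m] = (C[i + m] - coef * B[i]) % p
            if 2 * L <= n:
                L, B, bb, m = n + 1 - L, T, delta, 1
            else:
                m += 1
    return C, L

# Scalar sequence u^T A^i b for A = diag(2, 3), u = b = [1, 1]:
# s_i = 2^i + 3^i mod p.
s = [(pow(2, i, p) + pow(3, i, p)) % p for i in range(8)]
C, L = berlekamp_massey(s)
# Reverse C to get the minimal polynomial c_0 + c_1*l + ... + l^L.
minpoly = list(reversed(C[:L + 1]))
print(minpoly)   # (x-2)(x-3) = x^2 - 5x + 6  ->  [6, 96, 1]
```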

  16. Block Wiedemann method
Replace the projection vectors by blocks of vectors.
Let U ∈ F^{m×N} and V = [b | V̄] ∈ F^{N×n}.
We now consider the matrix sequence {U A^i V}_{i≥0}.
Main interest:
◮ parallel coarse- and fine-grain implementation (on columns of V),
◮ better probability of success [Villard 1997],
◮ (1 + ε)N matrix-vector products (sequential) [Kaltofen 1995].
Difficulty: computing a minimal generating matrix polynomial of a matrix sequence.
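The block projection can be sketched as follows; the sizes, prime and matrices are illustrative choices. Each of the n columns of V evolves independently under A, which is the source of the parallelism mentioned above.

```python
# The block projection in code: replace u^T A^i b by U A^i V, giving an
# m x n matrix sequence. Sizes, prime and matrices are illustrative.
p = 101
N, m, n = 4, 2, 2
A = [[(3*i + 7*j + 1) % p for j in range(N)] for i in range(N)]
U = [[1, 0, 2, 5], [0, 1, 3, 4]]                 # m x N projection block
V = [[1, 2], [1, 0], [0, 1], [1, 1]]             # N x n, first column plays b

def matmul(X, Y):
    """Matrix product mod p."""
    return [[sum(X[r][t] * Y[t][c] for t in range(len(Y))) % p
             for c in range(len(Y[0]))] for r in range(len(X))]

seq = []
W = V
for i in range(2 * (N // m) + 2):   # roughly 2N/m terms suffice, instead of 2N
    seq.append(matmul(U, W))        # S_i = U A^i V
    W = matmul(A, W)                # the n columns can be updated in parallel
```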

  18. Minimal generating matrix polynomial
Let {S_i}_{i≥0} be an m×m matrix sequence, and let P ∈ F^{m×m}[λ] be minimal of degree k such that
    ∀ j ≥ 0 :  Σ_{i=0}^{k} S_{i+j} P^{[i]} = 0_{m×m}.
The cost to compute P is:
◮ O(m³k²) field operations [Coppersmith 1994],
◮ O~(m³ k log k) field operations [Beckermann, Labahn 1994; Thomé 2002],
◮ O~(m^ω k log k) field operations [Giorgi, Jeannerod, Villard 2003],
where ω is the exponent in the complexity of matrix multiplication.
The latter complexity is based on σ-basis computation with:
• a divide-and-conquer approach (idea from [Beckermann, Labahn 1994]),
• matrix-product-based Gaussian elimination [Ibarra et al. 1982].
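To make the defining relation concrete, here is a small sketch (the prime, the matrix and its minimal polynomial are illustrative choices): for the sequence S_i = A^i, lifting the scalar minimal polynomial Π of A to P(λ) = Π(λ)·I gives a generating matrix polynomial, since Σ_i S_{i+j} P^{[i]} = Π(A) A^j = 0 for every j ≥ 0.

```python
# Checking the defining relation of slide 18 for the matrix sequence
# S_i = A^i, with P(lambda) = Pi(lambda) * I where Pi is the minimal
# polynomial of A. All concrete values are illustrative.
p = 101
A = [[2, 0], [0, 3]]        # minimal polynomial (x-2)(x-3) = x^2 - 5x + 6

def matmul(X, Y):
    """2x2 matrix product mod p."""
    return [[sum(X[r][t] * Y[t][c] for t in range(2)) % p for c in range(2)]
            for r in range(2)]

def matadd(X, Y):
    return [[(X[r][c] + Y[r][c]) % p for c in range(2)] for r in range(2)]

def scale(a, X):
    return [[a * X[r][c] % p for c in range(2)] for r in range(2)]

I2 = [[1, 0], [0, 1]]
# Matrix coefficients P[0], P[1], P[2] of P(lambda): 6*I, -5*I, 1*I.
P = [scale(6, I2), scale((-5) % p, I2), I2]
k = len(P) - 1

# Sequence S_0, S_1, ... = I, A, A^2, ...
S = [I2]
for _ in range(k + 4):
    S.append(matmul(S[-1], A))

# Verify sum_{i=0}^{k} S_{i+j} P[i] = 0_{2x2} for j = 0..3.
for j in range(4):
    acc = [[0, 0], [0, 0]]
    for i in range(k + 1):
        acc = matadd(acc, matmul(S[i + j], P[i]))
    assert acc == [[0, 0], [0, 0]]
print("P generates the sequence {A^i}")
```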

  20. Minimal approximant basis: σ-basis
Problem: given a matrix power series G ∈ F^{m×n}[[λ]] and an approximation order d, find a minimal nonsingular polynomial matrix M ∈ F^{m×m}[λ] such that
    M G = λ^d R ∈ F^{m×n}[[λ]].
M is called an order-d σ-basis of G.
Minimality: let f(λ) = G(λ^n) [1, λ, λ², ..., λ^{n−1}]^T ∈ F[[λ]]^m.
Every v ∈ F^{1×m}[λ] such that
    v(λ^n) f(λ) = λ^r w(λ) ∈ F[[λ]]  with r ≥ nd
has a unique decomposition
    v = Σ_{i=1}^{m} c^{(i)} M^{(i,∗)}  with deg c^{(i)} + deg M^{(i,∗)} ≤ deg v.
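The approximation condition MG = λ^d R can be checked directly for a tiny hand-built example (d = 1, m = 2, n = 1; the prime and series coefficients are illustrative choices): the first row of M absorbs g₁ with a factor λ, and the second eliminates the constant term of g₂ against g₁.

```python
# Checking the sigma-basis condition M*G = lambda^d * R for a hand-built
# order-1 approximant (d = 1, m = 2, n = 1) over GF(p).
# The prime and the power-series coefficients are illustrative choices.
p = 101

def polymul(a, b):
    """Product of two coefficient lists mod p (index = power of lambda)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polyadd(a, b):
    out = [0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] = ai
    for i, bi in enumerate(b):
        out[i] = (out[i] + bi) % p
    return out

# G = [g1; g2], truncated power series with g1(0) invertible.
g1 = [1, 4, 2, 7]          # 1 + 4*l + 2*l^2 + 7*l^3
g2 = [3, 5, 9, 6]          # 3 + 5*l + 9*l^2 + 6*l^3
c = (-g2[0] * pow(g1[0], p - 2, p)) % p   # -g2(0)/g1(0)

# M = [[lambda, 0], [c, 1]]; each entry is a coefficient list.
M = [[[0, 1], [0]],
     [[c],    [1]]]
G = [g1, g2]

# MG is a 2x1 polynomial matrix; order-1 condition: constant coefficient is 0.
for r in range(2):
    row = polyadd(polymul(M[r][0], G[0]), polymul(M[r][1], G[1]))
    assert row[0] == 0         # each entry of MG is divisible by lambda
print("M is an order-1 approximant of G")
```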

  22. Sketch of the reduction
Divide and conquer [Beckermann, Labahn 1994, Theorem 6.1]: given M′ and M′′, two order-d/2 σ-bases of G and λ^{−d/2} M′G respectively, the polynomial matrix M = M′′ M′ is an order-d σ-basis of G.
Base case (order-1 σ-basis):
◮ compute Δ = G mod λ,
◮ compute the LSP factorization of πΔ, with π a permutation,
◮ return M = D L^{−1} π, where D is a diagonal matrix (with 1's and λ's).
Cost:
• C(m, n, d) = 2 C(m, n, d/2) + 2 MM(m, d/2),
• C(m, n, 1) = O(MM(m))
=⇒ reduction to polynomial matrix multiplication.
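For contrast with the divide-and-conquer reduction above, here is a runnable sketch of the simple order-by-order variant (quadratic in d, in the spirit of the iterative Beckermann-Labahn algorithm). All concrete values are illustrative, and only the divisibility property MG ≡ 0 mod λ^d is asserted, not minimality; the invariant is that after step k, every entry of MG is divisible by λ^{k+1}.

```python
# Order-by-order approximant basis over GF(p): at each order k, compute the
# residue Delta = coeff_k(M*G), zero out non-pivot rows by constant row
# operations, and multiply pivot rows of M by lambda. This is the quadratic
# variant, not the divide-and-conquer reduction; values are illustrative.
p = 101

def sigma_basis(G, m, n, d):
    """M (m x m, entries = coefficient lists) with M*G divisible by lambda^d."""
    M = [[[1] if r == c else [0] for c in range(m)] for r in range(m)]
    rdeg = [0] * m
    for k in range(d):
        # Delta = coefficient of lambda^k in M*G (an m x n matrix).
        Delta = [[0] * n for _ in range(m)]
        for r in range(m):
            for c in range(n):
                s = 0
                for j in range(m):
                    for t, coef in enumerate(M[r][j]):
                        if t <= k:
                            s += coef * G[j][c][k - t]
                Delta[r][c] = s % p
        # Process rows by increasing row degree; reduce each row against the
        # existing pivot rows, then assign it a pivot column if it survives.
        pivots = {}  # column -> pivot row
        for r in sorted(range(m), key=lambda row: rdeg[row]):
            for col, pr in pivots.items():
                if Delta[r][col]:
                    f = Delta[r][col] * pow(Delta[pr][col], p - 2, p) % p
                    for c in range(n):
                        Delta[r][c] = (Delta[r][c] - f * Delta[pr][c]) % p
                    for c in range(m):
                        a, b = M[r][c], M[pr][c]
                        M[r][c] = [(x - f * y) % p
                                   for x, y in zip(a + [0] * len(b),
                                                   b + [0] * len(a))]
            for col in range(n):
                if Delta[r][col]:
                    pivots[col] = r
                    break
        for pr in set(pivots.values()):
            M[pr] = [[0] + e for e in M[pr]]   # multiply the row by lambda
            rdeg[pr] += 1
    return M

# Illustrative input: m = 2, n = 1, series truncated at degree < d.
d = 4
G = [[[1, 4, 2, 7]], [[3, 5, 9, 6]]]
M = sigma_basis(G, 2, 1, d)

# Check that every coefficient of M*G up to order d-1 vanishes.
for r in range(2):
    for k in range(d):
        s = sum(coef * G[j][0][k - t]
                for j in range(2)
                for t, coef in enumerate(M[r][j]) if t <= k)
        assert s % p == 0
print("M*G is divisible by lambda^%d" % d)
```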
