
  1. Fast computation of power series solutions of systems of differential equations — Alin Bostan, ALGO, INRIA Rocquencourt. Joint work with F. Chyzak, F. Ollivier, B. Salvy, É. Schost, A. Sedoglavic.

  2. Context: find fast algorithms for solving (in power series) ordinary differential equations. Applications: combinatorics, numerical analysis (control), algebraic complexity. ◮ fast means using few operations (±, ×, ÷) in the base field K.

  3. More precisely: given a linear differential equation with power series coefficients
a_r(t) y^(r)(t) + · · · + a_1(t) y′(t) + a_0(t) y(t) = 0,
compute the first N terms of a (basis of) power series solution(s) y(t).
◮ Naive algorithm (undetermined coefficients): O(r N²).
◮ Best that can be hoped for: complexity linear in N and polynomial in r.
◮ The same problem arises for linear systems Y′ = AY and for non-linear systems.
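For intuition, the naive method in its simplest case: a Python sketch (the function name and the first-order specialization y′ = c(t)·y are mine, not the talk's) of solving by undetermined coefficients.

```python
def naive_solve_first_order(c, N, y0=1.0):
    """Undetermined coefficients for y'(t) = c(t) * y(t), y(0) = y0.

    c holds the first N coefficients of c(t); returns the first N
    coefficients of y(t).  Each new coefficient costs O(N) operations,
    O(N^2) in total -- the r = 1 case of the naive O(r N^2) bound.
    """
    y = [y0] + [0.0] * (N - 1)
    for n in range(N - 1):
        # coefficient of t^n in c(t) * y(t) equals (n + 1) * y_{n+1}
        s = sum(c[k] * y[n - k] for k in range(n + 1))
        y[n + 1] = s / (n + 1)
    return y
```

With c(t) = 1 this reproduces exp(t): coefficients 1, 1, 1/2, 1/6, ...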

  4. Fast polynomial (matrix) multiplication.
M(N) = complexity of polynomial multiplication in degree < N:
• O(N²) by the naive algorithm;
• O(N^1.58) by Karatsuba's algorithm;
• O(N^(log_α(2α−1))) by the Toom–Cook algorithm;
• O(N log(N) loglog(N)) by Schönhage–Strassen FFT.
MM(r, N) = complexity of polynomial matrix multiplication, degree < N, size r:
• O(r^ω M(N)) by the Cantor–Kaltofen algorithm;
• O(r^ω N + r² M(N)) by the B.–Schost algorithm.
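As a reference point for the M(N) bounds above, Karatsuba's scheme fits in a few lines; a sketch (hypothetical helper, not from the talk) for coefficient lists whose length is a power of two.

```python
def karatsuba(f, g):
    """Product of two coefficient lists of equal power-of-two length n,
    using 3 half-size products instead of 4: O(n^log2(3)) = O(n^1.58)."""
    n = len(f)
    if n == 1:
        return [f[0] * g[0]]
    h = n // 2
    f0, f1, g0, g1 = f[:h], f[h:], g[:h], g[h:]
    low = karatsuba(f0, g0)                      # f0 * g0
    high = karatsuba(f1, g1)                     # f1 * g1
    mid = karatsuba([x + y for x, y in zip(f0, f1)],
                    [x + y for x, y in zip(g0, g1)])
    # (f0+f1)(g0+g1) - f0*g0 - f1*g1 = f0*g1 + f1*g0
    cross = [m - l - u for m, l, u in zip(mid, low, high)]
    out = [0] * (2 * n - 1)
    for i, v in enumerate(low):
        out[i] += v
    for i, v in enumerate(cross):
        out[i + h] += v
    for i, v in enumerate(high):
        out[i + 2 * h] += v
    return out
```

For example, karatsuba([1, 1], [1, 1]) gives [1, 2, 1], the coefficients of (1 + t)².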

  5. Previous results.
• r = 1: Brent & Kung (1978): reduction to exponentials of power series + Newton iteration, O(M(N)).
• r = 2: Brent & Kung (1978): reduction to Riccati non-linear equations + linearization, O(M(N)).
• r > 1: Brent & Kung (1978), van der Hoeven (2002): order reduction + linearization, O(r^r M(N)).
• r > 1: van der Hoeven (2002): 1 solution in O(r M(N) log N), in the model of relaxed multiplication.

  6. New results (coefficients of the input equation/system: constant, polynomial of degree d, or power series; costs to compute the output at precision N):

Problem       | constant coeffs | polynomial coeffs | power series coeffs | output size
(eqn, basis)  | O(M(r) N)       | O(d r² N)         | O(MM(r, N))         | O(r N)
(eqn, 1 sol)  | O(M(r) N/r)     | O(d r N)          | O(r M(N) log N)     | O(N)
(sys, basis)  | O(r M(r) N)     | O(d r^ω N)        | O(MM(r, N))         | O(r² N)
(sys, 1 sol)  | O(M(r) N)       | O(d r² N)         | O(r² M(N) log N)    | O(r N)

  7. Experimental results. [Two 3-D surface plots of running time (in seconds) against order (2–10) and precision (1000–5000): matrix multiplication ("MatMul.dat", times up to ~8 s) and Newton iteration ("Newton.dat", times up to ~30 s).]

  8. Experimental results. Basis, system with r equations, precision N: new vs. naive (times in seconds):

 N \ r |       2       |       4       |       8        |       16
  256  | 0.02 vs. 2.09 | 0.08 vs. 6    | 0.44 vs. 28    | 3 vs. 169
  512  | 0.04 vs. 8    | 0.17 vs. 25   | 1 vs. 113      | 6.41 vs. 688
 1024  | 0.08 vs. 32   | 0.39 vs. 104  | 2.30 vs. 484   | 15 vs. 2795
 2048  | 0.18 vs. 128  | 0.94 vs. 424  | 5.54 vs. 2025  | 36 vs. > 3 h ⋆
 4096  | 0.42 vs. 503  | 2.26 vs. 1686 | 13.69 vs. 8348 | 92 vs. > 1/2 day ⋆

  9. Experimental results. [Running time (in seconds) against precision. Left: DAC computation of one solution for LDEs of orders 2, 4, and 8 ("DAC.dat"). Right: DAC vs. naive, one solution of an LDE of order 2 ("dac.dat" vs. "naive.dat").]

  10. The divide-and-conquer algorithm.
Problem: solve L y = 0, where L = Σ_i a_i(t) δ^i, with δ = t d/dt.
Idea: the lowest terms of y(t) only depend on the lowest terms of the a_i.
Proof: if y = y_0 + t^m y_1, then L(δ) y = L(δ) y_0 + t^m L(δ + m) y_1.
DAC algorithm to solve L(δ) y = 0 mod t^(2m):
1. determine y_0, by recursively solving L(δ) y_0 = 0 mod t^m;
2. compute the "error" R, such that L(δ) y_0 = t^m R mod t^(2m);
3. compute y_1, by recursively solving L(δ + m) y_1 = −R mod t^m.
C(N) = 2 C(N/2) + O(r M(N)) ➥ C(N) = O(r M(N) log N).
◮ Newton method: computing a whole basis of solutions in step 1 will allow us to determine y_1 in step 3 from y_0 and R alone, without a second recursive call:
C̃(N) = C̃(N/2) + O(MM(r, N)) ➥ C̃(N) = O(MM(r, N)).

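The splitting step rests on the identity L(δ)(y_0 + t^m y_1) = L(δ) y_0 + t^m L(δ + m) y_1, which follows from δ(t^m f) = t^m (δ + m) f. A small Python check of this identity on truncated series (naive arithmetic, names hypothetical):

```python
def delta_shift_apply(coeffs_a, y, shift, N):
    """Apply sum_i a_i(t) * (delta + shift)^i to the series y, mod t^N.

    delta = t d/dt multiplies the n-th coefficient by n, so (delta+shift)^i
    multiplies it by (n+shift)^i.  coeffs_a[i] is a_i(t) as a list.
    """
    out = [0.0] * N
    for i, a in enumerate(coeffs_a):
        dy = [(n + shift) ** i * y[n] for n in range(N)]
        for j, aj in enumerate(a[:N]):       # multiply by a_i(t), mod t^N
            for n in range(N - j):
                out[j + n] += aj * dy[n]
    return out
```

Splitting any y as y_0 + t^m y_1 and comparing L(δ)y against L(δ)y_0 + t^m L(δ+m)y_1 gives matching coefficients.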

  13. Newton iteration: real case.
x_(κ+1) = x_κ − (x_κ² − 2)/(2 x_κ), x_0 = 1.5:
x_1 = 1.4166666666666666666666666666667
x_2 = 1.4142156862745098039215686274510
x_3 = 1.4142135623746899106262955788901
x_4 = 1.4142135623730950488016896235025
(The iterates converge to √2; the number of correct digits doubles at each step.)
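The iterates above come straight from the recurrence; a minimal Python sketch (function name is mine):

```python
def newton_sqrt2(iters):
    """Newton's method for x^2 - 2 = 0: x_{k+1} = x_k - (x_k^2 - 2)/(2 x_k),
    starting from x_0 = 1.5.  Quadratic convergence to sqrt(2)."""
    x = 1.5
    for _ in range(iters):
        x = x - (x * x - 2) / (2 * x)
    return x
```

Four iterations already exhaust double precision.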

  14. Newton iteration: power series case.
Let ϕ : K[[t]] → K[[t]]. To solve ϕ(g) = 0 in K[[t]], one can apply Newton's tangent method:
g_(κ+1) = g_κ − ϕ(g_κ)/ϕ′(g_κ) mod t^(2^(κ+1)).
◮ The number of correct coefficients doubles after each iteration.
◮ Total cost = 2 × (the cost of the last iteration).
Theorem [Cook (1966), Sieveking (1972) & Kung (1974), Brent (1975)]: Division, logarithm and exponential of a power series in K[[t]] can be computed at precision N using O(M(N)) operations in K.

  15. Division and logarithm of power series.
◮ To compute the reciprocal of f ∈ K[[t]], choose ϕ(g) = 1/g − f:
g_0 = 1/f_0 and g_(κ+1) = 2 g_κ − f g_κ² mod t^(2^(κ+1)) for κ ≥ 0,
whence division of power series at precision N in O(M(N)).
◮ Compute log(f) = −Σ_(i≥1) (1/i)(1 − f)^i of f ∈ 1 + t K[[t]] in O(M(N)) by:
1. computing the Taylor expansion of h = f′/f modulo t^(N−1);
2. taking the antiderivative of h.
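Both operations fit in a short Python sketch (names hypothetical; a naive quadratic product stands in for a fast M(N) multiplication, so only the structure, not the complexity, is illustrated):

```python
def mul_trunc(f, g, N):
    """Naive truncated product, a stand-in for a fast M(N) multiplication."""
    out = [0.0] * N
    for i, a in enumerate(f[:N]):
        for j, b in enumerate(g[:N - i]):
            out[i + j] += a * b
    return out

def inv_series(f, N):
    """Reciprocal of f (f[0] != 0) mod t^N by the Newton step g <- 2g - f*g^2;
    the working precision doubles at each pass."""
    g = [1.0 / f[0]]
    prec = 1
    while prec < N:
        prec = min(2 * prec, N)
        fg2 = mul_trunc(g, mul_trunc(f, g, prec), prec)   # f * g^2 mod t^prec
        g = [2 * a - b for a, b in zip(g + [0.0] * (prec - len(g)), fg2)]
    return g

def log_series(f, N):
    """log(f) for f with f[0] = 1: expand f'/f mod t^{N-1}, antidifferentiate."""
    df = [(i + 1) * c for i, c in enumerate(f[1:N])]      # f'
    h = mul_trunc(df, inv_series(f, N - 1), N - 1)        # f'/f
    return [0.0] + [h[i] / (i + 1) for i in range(N - 1)]
```

For instance, inv_series on 1 − t returns the geometric series, and log_series on 1 + t returns 0, 1, −1/2, 1/3, ...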

  16. Exponentials of power series.
◮ To compute the first N terms of the exponential exp(f) = Σ_(i≥0) f^i/i!, choose ϕ(g) = log(g) − f. Iteration:
g_0 = 1 and g_(κ+1) = g_κ − g_κ (log(g_κ) − f) mod t^(2^(κ+1)) for κ ≥ 0.
◮ First order differential equations: compute the first N terms of f ∈ K[[t]] such that a f′ + b f = c.
• if c = 0 then the solution is f_0 = exp(−∫ b/a);
• else, variation of constants: f = f_0 g, where g′ = c/(a f_0).
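The exponential step can be sketched in Python as well; this restates the series helpers from the previous sketch so it runs on its own (names hypothetical; the naive product stands in for a fast multiplication):

```python
def mul_trunc(f, g, N):
    """Naive truncated product, a stand-in for a fast M(N) multiplication."""
    out = [0.0] * N
    for i, a in enumerate(f[:N]):
        for j, b in enumerate(g[:N - i]):
            out[i + j] += a * b
    return out

def inv_series(f, N):
    """Reciprocal of f mod t^N by the Newton step g <- 2g - f*g^2."""
    g = [1.0 / f[0]]
    prec = 1
    while prec < N:
        prec = min(2 * prec, N)
        fg2 = mul_trunc(g, mul_trunc(f, g, prec), prec)
        g = [2 * a - b for a, b in zip(g + [0.0] * (prec - len(g)), fg2)]
    return g

def log_series(f, N):
    """log(f) for f[0] = 1: antiderivative of f'/f mod t^{N-1}."""
    df = [(i + 1) * c for i, c in enumerate(f[1:N])]
    h = mul_trunc(df, inv_series(f, N - 1), N - 1)
    return [0.0] + [h[i] / (i + 1) for i in range(N - 1)]

def exp_series(f, N):
    """exp(f) for f with f[0] = 0, by the slide's step
    g <- g - g*(log(g) - f); precision doubles at each pass."""
    g = [1.0]
    prec = 1
    while prec < N:
        prec = min(2 * prec, N)
        gp = g + [0.0] * (prec - len(g))
        fp = f[:prec] + [0.0] * (prec - min(len(f), prec))
        err = [x - y for x, y in zip(log_series(gp, prec), fp)]
        g = [x - y for x, y in zip(gp, mul_trunc(g, err, prec))]
    return g
```

On f = t this reproduces the coefficients 1/i! of exp(t).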

  17. Intermezzo: constant coefficients case.
Problem: solve y′(t) = A y(t), y(0) = v, where A is a scalar matrix.
Equivalent question: compute y(t) = exp(At) v.
Warning: this equivalence is no longer true in the non-scalar case!
Idea: the Laplace transform z_N = Σ_(i=0)^(N−1) A^i v t^i of y(t) is the truncation at order N of z = Σ_(i≥0) A^i v t^i = (I − tA)^(−1) v.
Algorithm (sketch):
1. compute z as a rational function of degree ≤ r (independent of N);
2. deduce its Taylor expansion modulo t^N: O(N/r · M(r)) to expand each coordinate of z.
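A small Python sketch (names hypothetical) of the underlying expansion: it materializes the vectors A^i v — the coefficients of z = (I − tA)^(−1) v — directly, rather than via the slide's rational-function step, and divides by i! to recover y.

```python
def ode_const_coeff(A, v, N):
    """First N series coefficients of y solving y' = A y, y(0) = v.

    Since y(t) = exp(At) v, the i-th coefficient is A^i v / i!.
    A is an r x r list of lists, v a length-r list."""
    r = len(v)
    out = [list(v)]
    w = list(v)
    fact = 1.0
    for i in range(1, N):
        w = [sum(A[j][k] * w[k] for k in range(r)) for j in range(r)]  # A^i v
        fact *= i
        out.append([x / fact for x in w])
    return out
```

With the rotation matrix A = [[0, 1], [−1, 0]] and v = (1, 0), the two coordinates expand as cos(t) and −sin(t).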

  18. Brent & Kung's algorithm for non-linear equations.
1. reduce y′′(t) + a(t) y′(t) + b(t) y(t) = 0 to a 1st-order equation:
◮ factor D² + a(t) D + b(t) as (D + S(t))(D + T(t)):
S + T = a, T′ + ST = b, thus T′ + aT − T² − b = 0 (a Riccati equation);
2. reduce the Riccati equation to a linear equation (by linearization);
3. find S, T and solve two linear 1st-order equations to get y(t).
◮ generalizes to arbitrary orders:
Lin(r, N) = NonLin(r − 1, N) + O(r M(N)) + Lin(r − 1, N),
NonLin(r − 1, N) = O(Lin(r − 1, N))
➥ Lin(r, N) = O(r^r M(N)).
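The factorization identity can be checked on truncated series: a Python sketch (naive series arithmetic, all names mine) builds b = T′ + ST from chosen T and a, then verifies (D + S)(D + T) y = y′′ + a y′ + b y coefficient by coefficient.

```python
def deriv(f):
    """Derivative of a truncated series given as a coefficient list."""
    return [(i + 1) * c for i, c in enumerate(f[1:])]

def mul(f, g, N):
    """Naive product truncated mod t^N."""
    out = [0.0] * N
    for i, a in enumerate(f[:N]):
        for j, b in enumerate(g[:N - i]):
            out[i + j] += a * b
    return out

def add(f, g):
    """Coefficient-wise sum, padding the shorter list with zeros."""
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0.0) + (g[i] if i < len(g) else 0.0)
            for i in range(n)]

N = 8
T = [0.5, 1.0, -0.5, 2.0, 0.25, -1.0, 0.5, 1.0]
a = [1.0, -2.0, 0.5, 1.0, -0.25, 2.0, -1.0, 0.5]
y = [1.0, 0.5, 2.0, -1.0, 0.5, 0.25, -2.0, 1.0]
S = [ai - ti for ai, ti in zip(a, T)]        # S + T = a
b = add(deriv(T), mul(S, T, N))              # b = T' + S*T

u = add(deriv(y), mul(T, y, N))              # (D + T) y
lhs = add(deriv(u), mul(S, u, N))            # (D + S)(D + T) y
rhs = add(add(deriv(deriv(y)), mul(a, deriv(y), N)), mul(b, y, N))
```

The two sides agree on the coefficients that survive the truncation (indices up to N − 3 here, since two derivatives are taken).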

  19. Newton iteration hits again.
Suppose we have to solve a "functional" equation φ(Y) = 0, where φ : M_(r×r)(K[[t]]) → M_(r×r)(K[[t]]) is differentiable. Define the sequence Y_(κ+1) = Y_κ − U_(κ+1), where
• U_(κ+1) is a solution of valuation ≥ 2^(κ+1) of the linearized equation Dφ|_(Y_κ) · U = φ(Y_κ);
• Dφ|_(Y_κ) is the differential of φ at Y_κ.
Then the sequence Y_κ converges quadratically to the solution Y.
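The simplest concrete instance of this abstract scheme is not the talk's ODE iteration but Newton–Schulz matrix inversion: take φ(Y) = Y^(−1) − M, whose linearization Dφ|_Y · U = −Y^(−1) U Y^(−1) turns the Newton step into Y ← Y(2I − MY). A numeric Python sketch (names hypothetical) showing the quadratic convergence:

```python
def matmul(X, Y):
    """Plain r x r matrix product."""
    r = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(r)) for j in range(r)]
            for i in range(r)]

def newton_inverse(M, Y0, iters):
    """Newton-Schulz iteration Y <- Y(2I - M Y) for M^{-1}: an instance of
    the abstract Newton scheme with phi(Y) = Y^{-1} - M.  Converges
    quadratically provided the residual I - M Y0 has norm < 1."""
    r = len(M)
    Y = [row[:] for row in Y0]
    for _ in range(iters):
        MY = matmul(M, Y)
        E = [[(2.0 if i == j else 0.0) - MY[i][j] for j in range(r)]
             for i in range(r)]
        Y = matmul(Y, E)
    return Y
```

Starting from Y0 = 0.3·I (small enough for the residual bound), ten iterations invert [[2, 1], [1, 2]] to machine precision.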
