
ANALYSIS OF EUCLIDEAN ALGORITHMS: An Arithmetical Instance of Dynamical Analysis. Dynamical Analysis := Analysis of Algorithms + Dynamical Systems. Brigitte Vallée (CNRS and Université de Caen, France). Results obtained with: Ali Akhavi, ...


  1. Three main outputs for any Euclidean Algorithm
– the gcd(u, v) itself: essential in exact rational computations, for keeping rational numbers in their irreducible form; 60% of the computation time in some symbolic computations.
– the Continued Fraction Expansion CFE(u/v): often used directly in computations over rationals.

  2. Three main outputs for any Euclidean Algorithm
– the gcd(u, v) itself: essential in exact rational computations, for keeping rational numbers in their irreducible form; 60% of the computation time in some symbolic computations.
– the Continued Fraction Expansion CFE(u/v): often used directly in computations over rationals.
– the modular inverse u^{-1} mod v, when gcd(u, v) = 1: extensively used in cryptography.
A basic algorithm ... perhaps the fifth main operation?

  3. Main algorithmic questions
– Analyse the behaviour of these various Euclidean algorithms.
– Compare them with respect to various costs, particularly the bit-complexity.
Experimental comparison of bit-complexities. A Gaussian law for the number of steps?

  4. Comparison for five algorithms on the input (2011176, 72001). Evolution of the remainders:
  Standard   Centered   By-Excess   Binary   LSB
     67149       4852        4852    44849   51637
      4852        779         779     1697   12485
      4073        178         601     1697    2447
       779         67         423      125    3733
       178         23         245      125    1545
        67          2          67        9     547
        44          1          23        9     523
        23          –           2        5       3
        19          –           1        1      65
         4          –           –        –      17
         3          –           –        –       3
         1          –           –        –       1


  6. Explain the behaviour of algorithms. For instance, an execution of the LSB Algorithm: the Tortoise and the Hare (steps 0-13, successive values written in binary)
 0  10001100101000001
 1  111101011000000101000
 2  11001001101101010000
 3  110000110001010000000
 4  10011000111100000000
 5  111010010101000000000
 6  110000010010000000000
 7  100010001100000000000
 8  1000001011000000000000
 9  1100000000000000
10  1000001000000000000000
11  100010000000000000000
12  110000000000000000000
13  10000000000000000000000

  7. Plan of the Talk
I– The Euclid Algorithm, and the underlying dynamical system
II– The other Euclidean Algorithms
III– Probabilistic –and dynamical– analysis of algorithms
IV– Euclidean algorithms: the underlying dynamical systems
V– Dynamical analysis of Euclidean algorithms

  8. Probabilistic Analysis of Algorithms An algorithm with a set of inputs Ω , and a parameter (or a cost) C defined on Ω which describes – the execution of the algorithm (number of iterations, bit–complexity) – or the geometry of the output (here: the continued fraction)

  9. Probabilistic Analysis of Algorithms. An algorithm with a set of inputs Ω, and a parameter (or a cost) C defined on Ω which describes
– the execution of the algorithm (number of iterations, bit-complexity),
– or the geometry of the output (here: the continued fraction).
– Gather the inputs with respect to their sizes (here, their number of bits): Ω_n := { (u, v) ∈ Ω, size(u, v) = n }.
– Consider a distribution on Ω_n (for instance the uniform distribution).
– Study the cost C on Ω_n in a probabilistic way: estimate the mean value of C_n := C | Ω_n, its variance, its distribution ... in an asymptotic way (for n → ∞).
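To make this protocol concrete, here is a minimal Python sketch (not part of the talk): it takes Ω_n to be the pairs (u, v) with v of binary size n and 1 ≤ u ≤ v under the uniform distribution (a modelling assumption made for illustration), and estimates the mean and variance of a cost C, instantiated as the number of iterations of the standard Euclid algorithm.

```python
import random

def euclid_iterations(u, v):
    """Number of division steps of the standard Euclid algorithm on (u, v)."""
    if u < v:
        u, v = v, u
    count = 0
    while v != 0:
        u, v = v, u % v
        count += 1
    return count

def sample_omega_n(n):
    """Draw (u, v) uniformly, with v of binary size exactly n and 1 <= u <= v."""
    v = random.randrange(2 ** (n - 1), 2 ** n)
    u = random.randrange(1, v + 1)
    return u, v

def estimate_cost(n, cost=euclid_iterations, trials=20000):
    """Monte Carlo estimate of the mean and variance of the cost on Omega_n."""
    values = [cost(*sample_omega_n(n)) for _ in range(trials)]
    mean = sum(values) / trials
    var = sum((x - mean) ** 2 for x in values) / trials
    return mean, var

if __name__ == "__main__":
    for n in (32, 64, 128):
        mean, var = estimate_cost(n)
        print(f"n = {n:4d}   mean ~ {mean:7.2f}   variance ~ {var:7.2f}")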

  10. The main costs of interest for Euclidean Algorithms
– The additive costs, which depend on the digits: C(u, v) := Σ_{i=1}^{p} c(m_i);
  if c = 1, then C := the number of iterations;
  if c = 1_{m_0} (the indicator of the digit m_0), then C := the number of digits equal to m_0;
  if c = ℓ (the binary length), then C := the length of the CFE.

  11. The main costs of interest for Euclidean Algorithms
– The additive costs, which depend on the digits: C(u, v) := Σ_{i=1}^{p} c(m_i);
  if c = 1, then C := the number of iterations;
  if c = 1_{m_0} (the indicator of the digit m_0), then C := the number of digits equal to m_0;
  if c = ℓ (the binary length), then C := the length of the CFE.
– The bit-complexity (not an additive cost): C(u, v) := Σ_{i=1}^{p} ℓ(u_i) · ℓ(m_i).
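As an illustration (a sketch, not from the talk), the following Python code runs the standard Euclid algorithm, records the digits m_i and the remainders u_i, and evaluates the costs just defined: the additive cost Σ c(m_i) for a chosen step-cost c, and the bit-complexity Σ ℓ(u_i) ℓ(m_i); the indexing convention for the u_i (the divisor used at step i) is an assumption of the sketch.

```python
def cfe_digits(u, v):
    """Digits m_i and successive remainders u_i of the standard Euclid algorithm.

    Convention used here (an assumption): u_0 = u >= v = u_1, and step i divides
    u_{i-1} by u_i, producing the digit m_i and the remainder u_{i+1}.
    """
    if u < v:
        u, v = v, u
    digits, remainders = [], [u, v]
    while v != 0:
        m, r = divmod(u, v)
        digits.append(m)
        remainders.append(r)
        u, v = v, r
    return digits, remainders

def additive_cost(u, v, c):
    """Additive cost C(u, v) = sum of c(m_i) over the digits."""
    digits, _ = cfe_digits(u, v)
    return sum(c(m) for m in digits)

def bit_complexity(u, v):
    """Naive bit-complexity: sum of l(u_i) * l(m_i) over the division steps."""
    digits, remainders = cfe_digits(u, v)
    # step i divides u_{i-1} by the divisor u_i; its cost is l(u_i) * l(m_i)
    return sum(remainders[i + 1].bit_length() * m.bit_length()
               for i, m in enumerate(digits))

if __name__ == "__main__":
    u, v = 2011176, 72001
    print("number of iterations:", additive_cost(u, v, lambda m: 1))
    print("length of the CFE   :", additive_cost(u, v, int.bit_length))
    print("bit-complexity      :", bit_complexity(u, v))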

  12. The results (I). Previous results:
– mostly in the average case,
– only for the number of iterations, and specific to particular algorithms,
– well described in Knuth's book (Volume II).

  13. The results (I). Previous results:
– mostly in the average case,
– only for the number of iterations, and specific to particular algorithms,
– well described in Knuth's book (Volume II).
Heilbronn, Dixon, Rieger (70): Standard and Centered Alg.
Yao and Knuth (75): Subtractive Alg.
Brent (78): Binary Alg (partly heuristic).
Hensley (94): a distributional study for the Standard Alg.
Stehlé and Zimmermann (05): LSB Alg (experiments).

  14. The new results. With the Dynamical Analysis method, our group [1995 → now] obtains

  15. The new results. With the Dynamical Analysis method, our group [1995 → now] obtains
– a complete classification into two classes:
  the Fast Class = { Standard, Centered, Binary, LSB },
  the Slow Class = { By-Excess, Subtractive }.

  16. The new results. With the Dynamical Analysis method, our group [1995 → now] obtains
– a complete classification into two classes:
  the Fast Class = { Standard, Centered, Binary, LSB },
  the Slow Class = { By-Excess, Subtractive }.
– an average-case analysis of a broad class of costs: all the additive costs, and also the bit-complexity.

  17. The new results. With the Dynamical Analysis method, our group [1995 → now] obtains
– a complete classification into two classes:
  the Fast Class = { Standard, Centered, Binary, LSB },
  the Slow Class = { By-Excess, Subtractive }.
– an average-case analysis of a broad class of costs: all the additive costs, and also the bit-complexity.
– a distributional analysis of a subclass of the Fast Class, the Good Class = { Standard, Centered }. Asymptotic Gaussian laws hold for:
  – the number of steps P, and additive costs of moderate growth,
  – the remainder size log u_i for i ∼ δP,
  – the bit-complexity of the extended algorithm.

  18. Here, focus on average-case results ( n := input size)

  19. Here, focus on average-case results (n := input size).
– For the Fast Class = { Standard, Centered, Binary, LSB },
  – the mean values of the costs P, C are linear with respect to n,
  – the mean bit-complexity is quadratic:
  E_n[P] ∼ (2 log 2 / h(S)) · n,   E_n[C] ∼ (2 log 2 / h(S)) · µ[c] · n,   E_n[B] ∼ (log 2 / h(S)) · µ[ℓ] · n²,
  where h(S) is the entropy of the system and µ[c] is the mean value of the step-cost c.
– Moreover, these costs are concentrated: E_n[C^k] ∼ E_n[C]^k.

  20. Here, focus on average-case results (n := input size).
– For the Fast Class = { Standard, Centered, Binary, LSB },
  – the mean values of the costs P, C are linear with respect to n,
  – the mean bit-complexity is quadratic:
  E_n[P] ∼ (2 log 2 / h(S)) · n,   E_n[C] ∼ (2 log 2 / h(S)) · µ[c] · n,   E_n[B] ∼ (log 2 / h(S)) · µ[ℓ] · n²,
  where h(S) is the entropy of the system and µ[c] is the mean value of the step-cost c.
  – Moreover, these costs are concentrated: E_n[C^k] ∼ E_n[C]^k.
– For the Slow Class = { By-Excess, Subtractive },
  – the mean values of the costs P, C are quadratic,
  – the mean bit-complexity B is cubic,
  – the moments of order k ≥ 2 are exponential: E_n[C^k] = Θ(2^{n(k−1)}).

  21. The main constant h ( S ) is the entropy of the Dynamical System. A well-defined mathematical object, computable.

  22. The main constant h(S) is the entropy of the Dynamical System. A well-defined mathematical object, computable.
– Related to classical constants for the first two algorithms:
  h(S) = π²/(6 log 2) ≈ 2.37 [Standard],   h(S) = π²/(6 log φ) ≈ 3.41 [Centered].

  23. The main constant h(S) is the entropy of the Dynamical System. A well-defined mathematical object, computable.
– Related to classical constants for the first two algorithms:
  h(S) = π²/(6 log 2) ≈ 2.37 [Standard],   h(S) = π²/(6 log φ) ≈ 3.41 [Centered].
– For the LSB algorithm, h(S) = 4 − 2γ ≈ 3.91 involves the Lyapunov exponent γ of the set of random matrices
  N_{a,k} = (1/2^k) [[0, 2^k], [2^k, a]],   with k ≥ 1, a odd, |a| < 2^k, taken with probability 2^{−2k}.

  24. The main constant h(S) is the entropy of the Dynamical System. A well-defined mathematical object, computable.
– Related to classical constants for the first two algorithms:
  h(S) = π²/(6 log 2) ≈ 2.37 [Standard],   h(S) = π²/(6 log φ) ≈ 3.41 [Centered].
– For the LSB algorithm, h(S) = 4 − 2γ ≈ 3.91 involves the Lyapunov exponent γ of the set of random matrices
  N_{a,k} = (1/2^k) [[0, 2^k], [2^k, a]],   with k ≥ 1, a odd, |a| < 2^k, taken with probability 2^{−2k}.
– For the Binary algorithm, h(S) = π² f(1) ≈ 3.6 involves the value f(1) of the unique density which satisfies the functional equation
  f(x) = Σ_{k ≥ 1} Σ_{a odd, 1 ≤ a < 2^k} (1/(2^k x + a))² · f(1/(2^k x + a)).

  25. Precise comparisons between the four Fast Algorithms
  Algs       Nb of iterations    Bit-complexity
  Standard   0.584 n             1.242 n²
  Centered   0.406 n             1.126 n²
  Binary     0.381 n             0.720 n² (Ind.)
  LSB        0.511 n             1.115 n²
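A quick numerical cross-check (mine, not part of the talk): the iteration constants of this table should agree with 2 log 2 / h(S) for the entropies of the previous slides, and can also be compared with a direct simulation. The simulation model (uniform v of binary size n, uniform u ≤ v) is an assumption; only the Standard and Centered divisions are implemented here, since their definitions are unambiguous.

```python
import math, random

phi = (1 + math.sqrt(5)) / 2

# Entropies from slides 22-23 and the predicted iteration constants 2*log(2)/h(S).
entropies = {"Standard": math.pi ** 2 / (6 * math.log(2)),
             "Centered": math.pi ** 2 / (6 * math.log(phi))}
for name, h in entropies.items():
    print(f"{name:9s} h(S) = {h:.3f}   predicted 2*log2/h(S) = {2 * math.log(2) / h:.3f} * n")

def standard_steps(u, v):
    k = 0
    while v:
        u, v = v, u % v
        k += 1
    return k

def centered_steps(u, v):
    # centered division: remainder of smallest absolute value, |u - round(u/v)*v|
    k = 0
    while v:
        r = u % v
        u, v = v, min(r, v - r)
        k += 1
    return k

def mean_steps(step_fn, n, trials=5000):
    total = 0
    for _ in range(trials):
        v = random.randrange(2 ** (n - 1), 2 ** n)
        u = random.randrange(1, v + 1)
        total += step_fn(v, u)      # larger operand first
    return total / trials

table = {"Standard": 0.584, "Centered": 0.406}
n = 200
for name, fn in (("Standard", standard_steps), ("Centered", centered_steps)):
    print(f"{name:9s} measured E_n[P]/n ~ {mean_steps(fn, n) / n:.3f}   (table: {table[name]})")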

  26. Main principles of Dynamical Analysis := Analysis of Algorithms + Dynamical Systems

  27. 1– Interaction between the discrete world and the continuous world. Three steps. (a) The discrete algorithm is extended into a continuous process ... (b) ... which is studied more easily, using all the analytic tools.

  28. 1– Interaction between the discrete world and the continuous world. Three steps. (a) The discrete algorithm is extended into a continuous process ... (b) ... which is studied more easily, using all the analytic tools. (c) Returning to the discrete algorithm, with various principles of transfer from continuous to discrete. The discrete data are of zero measure amongst the continuous data.

  29. Main tools for probabilistic analysis of algorithms. 2– Generating functions? A classical tool: generating functions of various types,
  A(z) := Σ_{n ≥ 0} a_n z^n,   Â(z) := Σ_{n ≥ 0} a_n z^n / n!,   A(s) := Σ_{n ≥ 1} a_n / n^s.
Directly used when the distribution of data does not change too much during the execution of the algorithm (for instance: the Euclid Algorithm on polynomials).

  30. Main tools for probabilistic analysis of algorithms. 2– Generating functions? A classical tool: generating functions of various types,
  A(z) := Σ_{n ≥ 0} a_n z^n,   Â(z) := Σ_{n ≥ 0} a_n z^n / n!,   A(s) := Σ_{n ≥ 1} a_n / n^s.
Directly used when the distribution of data does not change too much during the execution of the algorithm (for instance: the Euclid Algorithm on polynomials).
Here, this is not the case, due to the existence of carries. The study of the dynamical system underlying the algorithm explains how the distribution of data evolves during the execution of the algorithm. It also describes the behaviour of the generating functions of costs ...

  31. Main tools for probabilistic analysis of algorithms 3- Dynamical Analysis –main principles. Input.- A discrete algorithm. Step 1.- Extend the discrete algorithm into a continuous process, i.e. a dynamical system. ( X, V ) X compact, V : X → X , where the discrete alg. gives rise to particular trajectories.

  32. Main tools for probabilistic analysis of algorithms 3- Dynamical Analysis –main principles. Input.- A discrete algorithm. Step 1.- Extend the discrete algorithm into a continuous process, i.e. a dynamical system. ( X, V ) X compact, V : X → X , where the discrete alg. gives rise to particular trajectories. Step 2.- Study this dynamical system, via its generic trajectories. A main tool: the transfer operator.

  33. Main tools for probabilistic analysis of algorithms. 3– Dynamical Analysis – main principles.
Input.- A discrete algorithm.
Step 1.- Extend the discrete algorithm into a continuous process, i.e. a dynamical system (X, V), X compact, V : X → X, where the discrete algorithm gives rise to particular trajectories.
Step 2.- Study this dynamical system, via its generic trajectories. A main tool: the transfer operator.
Step 3.- Come back to the algorithm: we need to prove that "the discrete trajectories behave like the generic trajectories". Use the transfer operator as a generating operator, which itself generates ... the generating functions.
Output.- Probabilistic analysis of the Algorithm.

  34. Dynamical analysis of a Euclidean Algorithm.

  35. Dynamical analysis of a Euclidean Algorithm. A Euclidean Algorithm ⇓ Arithmetic properties of the division ⇓

  36. Dynamical analysis of a Euclidean Algorithm. A Euclidean Algorithm ⇓ Arithmetic properties of the division ⇓ Geometric properties of the branches ⇓ Spectral properties of the transfer operator ⇓ Analytical properties of the Quasi-Inverse of the transfer operator

  37. Dynamical analysis of a Euclidean Algorithm. A Euclidean Algorithm ⇓ Arithmetic properties of the division ⇓ Geometric properties of the branches ⇓ Spectral properties of the transfer operator ⇓ Analytical properties of the Quasi-Inverse of the transfer operator ⇓ Analytical properties of the generating function ⇓ Probabilistic analysis of the Euclidean Algorithm

  38. Plan of the Talk
I– The Euclid Algorithm, and the underlying dynamical system
II– The other Euclidean Algorithms
III– Probabilistic –and dynamical– analysis of algorithms
IV– Euclidean algorithms: the underlying dynamical systems
V– Dynamical analysis of Euclidean algorithms

  39. Four Euclidean dynamical systems (related to MSB divisions)

  40. Four Euclidean dynamical systems (related to MSB divisions)

  41. Four Euclidean dynamical systems (related to MSB divisions) Two different classes Fast Class

  42. Four Euclidean dynamical systems (related to MSB divisions) Two different classes Fast Class Slow Class

  43. Dynamical Systems relative to MSB Algorithms. Key Property: expansiveness of the branches of the shift T, i.e. |T′(x)| ≥ A > 1 for all x in I. When true, this implies a chaotic behaviour for trajectories; the associated algorithms are Fast and belong to the Good Class. When this condition is violated at only one indifferent point, this leads to intermittency phenomena; the associated algorithms are Slow. Chaotic Orbit [Fast Class], Intermittent Orbit [Slow Class].

  44. Induction Method. For a DS (I, T) with a "slow" branch relative to a slow interval J, contract each part of the trajectory which belongs to J into one step. This (often) transforms the slow DS (I, T) into a fast one (I, S):
  While x ∈ J do x := T(x);  S(x) := T(x).
The Induced DS of the Subtractive Alg = the DS of the Standard Alg.
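A concrete sketch of the induction method (my illustration; the slide only states the result), assuming the standard identification of the Subtractive algorithm's dynamical system with the Farey map on I = [0, 1], whose slow interval J = [0, 1/2] carries the indifferent branch x/(1 − x): the induced map produced by the while-loop above coincides with the Gauss map x ↦ {1/x} of the Standard algorithm.

```python
import math
import random

def farey(x):
    """Farey map, taken here (a standard identification, assumed) as the
    dynamical system of the Subtractive algorithm on I = [0, 1]."""
    return x / (1 - x) if x <= 0.5 else (1 - x) / x

def induced(x):
    """Induced map of the slide: iterate T while x stays in the slow interval
    J = [0, 1/2] (indifferent fixed point at 0), then apply T once more."""
    while x <= 0.5:
        x = farey(x)
    return farey(x)

def gauss(x):
    """Gauss map x -> {1/x}, the dynamical system of the Standard algorithm."""
    return 1.0 / x - math.floor(1.0 / x)

# The induced Farey map should agree with the Gauss map (up to rounding error).
random.seed(0)
for _ in range(5):
    x = random.uniform(0.05, 0.95)
    print(f"x = {x:.4f}   induced Farey map: {induced(x):.6f}   Gauss map: {gauss(x):.6f}")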

  45. Two other Euclidean dynamical systems, related to mixed or LSB divisions: the Binary Algorithm and the LSB Algorithm. These algorithms use the 2-adic valuation ν ... only defined on rationals. The 2-adic valuation ν is extended to a real random variable ν with Pr[ν = k] = 1/2^k for k ≥ 1. This gives rise to probabilistic dynamical systems. (I) The DS relative to the Binary Algorithm.

  46. Two other Euclidean dynamical systems, related to mixed or LSB divisions: the Binary Algorithm and the LSB Algorithm. These algorithms use the 2-adic valuation ν ... only defined on rationals. The 2-adic valuation ν is extended to a real random variable ν with Pr[ν = k] = 1/2^k for k ≥ 1. This gives rise to probabilistic dynamical systems. (I) The DS relative to the Binary Algorithm (figure panels: k = 1, k = 2, and k = 1 and k = 2).

  47. Two other Euclidean dynamical systems, related to mixed or LSB divisions: the Binary Algorithm and the LSB Algorithm. These algorithms use the 2-adic valuation ν ... only defined on rationals. The 2-adic valuation ν is extended to a real random variable ν with Pr[ν = k] = 1/2^k for k ≥ 1. This gives rise to probabilistic dynamical systems. (II) The DS relative to the LSB Algorithm.

  48. Two other Euclidean dynamical systems, related to mixed or LSB divisions: the Binary Algorithm and the LSB Algorithm. These algorithms use the 2-adic valuation ν ... only defined on rationals. The 2-adic valuation ν is extended to a real random variable ν with Pr[ν = k] = 1/2^k for k ≥ 1. This gives rise to probabilistic dynamical systems. (II) The DS relative to the LSB Algorithm.

  49. In all the cases (probabilistic or deterministic), the density transformer H expresses the new density f_1 as a function of the old density f_0, as f_1 = H[f_0]. It involves the set H of branches:
  H[f](x) := Σ_{h ∈ H} δ_h · |h′(x)| · f ∘ h(x)   (here, δ_h = Pr[h]).
With a cost c : H → R_+, and two parameters (s, w), it gives rise to the bivariate transfer operator
  H_{s,w}[f](x) := Σ_{h ∈ H} δ_h^s · exp[w c(h)] · |h′(x)|^s · f ∘ h(x)
and the weighted transfer operator
  H_s^{[c]}[f](x) := Σ_{h ∈ H} δ_h^s · c(h) · |h′(x)|^s · f ∘ h(x).
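To see the density transformer at work, here is a numerical sketch (my illustration, deterministic case δ_h = 1) for the system of the Standard algorithm, whose branches are h_m(x) = 1/(m + x), m ≥ 1, so that |h_m′(x)| = 1/(m + x)²: iterating H from the uniform density converges to the invariant (Gauss) density 1/((1 + x) log 2). The grid discretization and the truncation at m ≤ M are choices of the sketch, and they account for the small residual error.

```python
import math

def apply_density_transformer(f, xs, M=1500):
    """One application of H[f](x) = sum_m |h_m'(x)| f(h_m(x)) for h_m(x) = 1/(m+x).

    f is given by its values on the uniform grid xs; values at the points h_m(x)
    are obtained by linear interpolation (a discretization choice of this sketch)."""
    n = len(xs) - 1
    def interp(t):
        pos = t * n
        i = min(int(pos), n - 1)
        lam = pos - i
        return (1 - lam) * f[i] + lam * f[i + 1]
    return [sum(interp(1.0 / (m + x)) / (m + x) ** 2 for m in range(1, M + 1))
            for x in xs]

N = 200
xs = [i / N for i in range(N + 1)]
f = [1.0] * (N + 1)                       # start from the uniform density
for _ in range(10):                       # a few iterations of the transformer
    f = apply_density_transformer(f, xs)

gauss_density = [1.0 / ((1 + x) * math.log(2)) for x in xs]
err = max(abs(a - b) for a, b in zip(f, gauss_density))
print(f"max deviation from the Gauss density after 10 iterations: {err:.4f}")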

  50. Plan of the Talk
I– The Euclid Algorithm, and the underlying dynamical system
II– The other Euclidean Algorithms
III– Probabilistic –and dynamical– analysis of algorithms
IV– Euclidean algorithms: the underlying dynamical systems
V– Dynamical analysis of Euclidean algorithms

  51. The Dirichlet series of cost C. If Ω is the whole set of inputs, the Dirichlet generating function of C is
  S_C(s) = Σ_{(u,v) ∈ Ω} C(u, v) / |(u, v)|^{2s} = Σ_{m ≥ 1} c_m / m^{2s},   with c_m := Σ_{(u,v) ∈ Ω, |(u,v)| = m} C(u, v).

  52. The Dirichlet series of cost C. If Ω is the whole set of inputs, the Dirichlet generating function of C is
  S_C(s) = Σ_{(u,v) ∈ Ω} C(u, v) / |(u, v)|^{2s} = Σ_{m ≥ 1} c_m / m^{2s},   with c_m := Σ_{(u,v) ∈ Ω, |(u,v)| = m} C(u, v).
It is used for expressing the mean value E_n[C] of C on Ω_n, with Ω_n := { (u, v) ∈ Ω; ℓ(|(u, v)|) = n }.

  53. The Dirichlet series of cost C. If Ω is the whole set of inputs, the Dirichlet generating function of C is
  S_C(s) = Σ_{(u,v) ∈ Ω} C(u, v) / |(u, v)|^{2s} = Σ_{m ≥ 1} c_m / m^{2s},   with c_m := Σ_{(u,v) ∈ Ω, |(u,v)| = m} C(u, v).
It is used for expressing the mean value E_n[C] of C on Ω_n, with Ω_n := { (u, v) ∈ Ω; ℓ(|(u, v)|) = n }.
The mean value E_n[C] is expressed with coefficients of S_C(s) as
  E_n[C] = (1/|Ω_n|) Σ_{m : ℓ(m) = n} c_m.
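For small sizes, these definitions can be checked by brute force. The sketch below (mine, not from the talk) assumes Ω is the set of coprime pairs (u, v) with 1 ≤ u ≤ v and |(u, v)| = v (as suggested by the later slides, where 1/v = |h′(0)|^{1/2}), takes C = number of division steps, and computes the exact E_n[C] from the coefficients c_m; the linear growth ≈ 0.584 · n is only an asymptotic prediction.

```python
from math import gcd

def steps(u, v):
    """C(u, v) := number of division steps of the standard Euclid algorithm, u <= v."""
    k = 0
    while u:
        u, v = v % u, u
        k += 1
    return k

def c_and_count(m):
    """c_m = sum of C(u, v) over pairs of Omega with v = m, and the number of such pairs."""
    pairs = [(u, m) for u in range(1, m + 1) if gcd(u, m) == 1]
    return sum(steps(u, v) for u, v in pairs), len(pairs)

def mean_cost(n):
    """Exact E_n[C] over Omega_n = { (u, v) in Omega : l(v) = n } (uniform distribution)."""
    total_cost = total_pairs = 0
    for m in range(2 ** (n - 1), 2 ** n):      # integers m of binary length n
        c_m, count = c_and_count(m)
        total_cost += c_m
        total_pairs += count
    return total_cost / total_pairs

for n in range(4, 10):
    print(f"n = {n:2d}   E_n[C] = {mean_cost(n):.3f}   (asymptotic prediction 0.584*n = {0.584 * n:.3f})")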

  54. The mixed series of cost C Now, two parameters s and w : s marks the size, and w marks the cost,

  55. The mixed series of cost C. Now, two parameters s and w: s marks the size, and w marks the cost,
  S_C(s, w) := Σ_{(u,v) ∈ Ω} (1/|(u, v)|^s) exp[w C(u, v)] = Σ_{m ≥ 1} c_m(w) / m^s,   with c_m(w) := Σ_{(u,v) ∈ Ω, |(u,v)| = m} exp[w C(u, v)].

  56. The mixed series of cost C. Now, two parameters s and w: s marks the size, and w marks the cost,
  S_C(s, w) := Σ_{(u,v) ∈ Ω} (1/|(u, v)|^s) exp[w C(u, v)] = Σ_{m ≥ 1} c_m(w) / m^s,   with c_m(w) := Σ_{(u,v) ∈ Ω, |(u,v)| = m} exp[w C(u, v)].
The moment generating function E_n[exp(wC)] of C on Ω_n, with Ω_n := { (u, v) ∈ Ω; ℓ(|(u, v)|) = n }, is expressed with coefficients of S_C(s, w).

  57. The mixed series of cost C. Now, two parameters s and w: s marks the size, and w marks the cost,
  S_C(s, w) := Σ_{(u,v) ∈ Ω} (1/|(u, v)|^s) exp[w C(u, v)] = Σ_{m ≥ 1} c_m(w) / m^s,   with c_m(w) := Σ_{(u,v) ∈ Ω, |(u,v)| = m} exp[w C(u, v)].
The moment generating function E_n[exp(wC)] of C on Ω_n, with Ω_n := { (u, v) ∈ Ω; ℓ(|(u, v)|) = n }, is expressed with coefficients of S_C(s, w):
  E_n[exp(wC)] = (1/|Ω_n|) Σ_{m : ℓ(m) = n} c_m(w),   with |Ω_n| = Σ_{m : ℓ(m) = n} c_m(0).

  58. For the asymptotics of E n [ C ] or E n [exp( wC )] ,

  59. For the asymptotics of E_n[C] or E_n[exp(wC)], we need a precise knowledge about the position and the nature of the singularities of S_C(s) or S_C(s, w).

  60. For the asymptotics of E_n[C] or E_n[exp(wC)], we need a precise knowledge about the position and the nature of the singularities of S_C(s) or S_C(s, w). There exist alternative expressions for S_C(s), or S_C(s, w), from which the position and the nature of the singularities become apparent.

  61. For the asymptotics of E_n[C] or E_n[exp(wC)], we need a precise knowledge about the position and the nature of the singularities of S_C(s) or S_C(s, w). There exist alternative expressions for S_C(s), or S_C(s, w), from which the position and the nature of the singularities become apparent. These alternative expressions will involve the (various) transfer operators.

  62. Relations between the generating functions and the transfer operators (I).

  63. Relations between the generating functions and the transfer operators (I). A Euclid Algorithm builds a bijection between Ω and H⋆:
  (u, v) ↦ h   with   u/v = h(0).
Then, due to the fact that the branches are LFTs of determinant 1,
  1/v = |h′(0)|^{1/2},   C(u, v) = c(h).
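A small sketch (my illustration) of this correspondence for the Standard algorithm, whose branches are the LFTs h_m(x) = 1/(m + x): composing the branches given by the continued-fraction digits of u/v yields an LFT h with h(0) = u/v and |h′(0)| = 1/v², i.e. 1/v = |h′(0)|^{1/2}. The 2 × 2 matrix encoding of the LFTs is a standard device, not spelled out on the slide.

```python
from fractions import Fraction
from math import gcd

def cf_digits(u, v):
    """Continued fraction digits of u/v (0 < u < v) via the standard Euclid algorithm."""
    digits = []
    while u:
        m, r = divmod(v, u)
        digits.append(m)
        u, v = r, u
    return digits

def branch_matrix(m):
    """Matrix (a, b, c, d) of the LFT h_m(x) = 1/(m + x), acting as (a*x + b)/(c*x + d)."""
    return (0, 1, 1, m)

def compose(M1, M2):
    """Matrix product, i.e. composition of the corresponding LFTs."""
    a1, b1, c1, d1 = M1
    a2, b2, c2, d2 = M2
    return (a1 * a2 + b1 * c2, a1 * b2 + b1 * d2,
            c1 * a2 + d1 * c2, c1 * b2 + d1 * d2)

def lft_of_pair(u, v):
    """The LFT h = h_{m1} o ... o h_{mp} associated with the pair (u, v)."""
    M = (1, 0, 0, 1)
    for m in cf_digits(u, v):
        M = compose(M, branch_matrix(m))
    return M

u, v = 72001, 2011176              # the talk's example input; any coprime pair with u < v would do
assert gcd(u, v) == 1
a, b, c, d = lft_of_pair(u, v)
h0 = Fraction(b, d)                                  # h(0) = b/d
h_prime_0 = Fraction(abs(a * d - b * c), d * d)      # |h'(0)| = |det| / d^2
print("h(0)    =", h0, "   u/v   =", Fraction(u, v))
print("|h'(0)| =", h_prime_0, "   1/v^2 =", Fraction(1, v * v))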

  64. Relations between the generating functions and the transfer operators (I). A Euclid Algorithm builds a bijection between Ω and H⋆:
  (u, v) ↦ h   with   u/v = h(0).
Then, due to the fact that the branches are LFTs of determinant 1,
  1/v = |h′(0)|^{1/2},   C(u, v) = c(h).
Then
  S_C(2s, w) := Σ_{(u,v) ∈ Ω} (1/v^{2s}) exp[w C(u, v)] = Σ_{h ∈ H⋆} |h′(0)|^s exp[w c(h)]
admits an alternative expression with the quasi-inverse (I − H_{s,w})^{−1} of the transfer operator H_{s,w}:
  S_C(2s, w) = (I − H_{s,w})^{−1}[1](0).

  65. Relations between the generating functions and the transfer operators (I). A Euclid Algorithm builds a bijection between Ω and H⋆:
  (u, v) ↦ h   with   u/v = h(0).
Then, due to the fact that the branches are LFTs of determinant 1,
  1/v = |h′(0)|^{1/2},   C(u, v) = c(h).
Then
  S_C(2s, w) := Σ_{(u,v) ∈ Ω} (1/v^{2s}) exp[w C(u, v)] = Σ_{h ∈ H⋆} |h′(0)|^s exp[w c(h)]
admits an alternative expression with the quasi-inverse (I − H_{s,w})^{−1} of the transfer operator H_{s,w}:
  S_C(2s, w) = (I − H_{s,w})^{−1}[1](0).
Recall: H_{s,w}[f](x) := Σ_{h ∈ H} |h′(x)|^s · exp[w c(h)] · f ∘ h(x).

  66. Relation between the transfer operator and the Dirichlet series. Since
  S_C(2s) := Σ_{(u,v) ∈ Ω} C(u, v) / |(u, v)|^{2s} = (∂/∂w) S_C(2s, w) |_{w=0},
there is a relation
  S_C(s) = (I − H_s)^{−1} ∘ H_s^{[c]} ∘ (I − H_s)^{−1} [1](η)
between S_C(s) and two transfer operators:

  67. Relation between the transfer operator and the Dirichlet series. Since
  S_C(2s) := Σ_{(u,v) ∈ Ω} C(u, v) / |(u, v)|^{2s} = (∂/∂w) S_C(2s, w) |_{w=0},
there is a relation
  S_C(s) = (I − H_s)^{−1} ∘ H_s^{[c]} ∘ (I − H_s)^{−1} [1](η)
between S_C(s) and two transfer operators: the weighted one
  H_s^{[c]}[f](x) = Σ_{h ∈ H} |h′(x)|^s · c(h) · f ∘ h(x),
and the quasi-inverse (I − H_s)^{−1} of the plain transfer operator H_s,
  H_s[f](x) := Σ_{h ∈ H} |h′(x)|^s · f ∘ h(x).

  68. In both cases,

  69. In both cases, singularities of s �→ ( I − H s ) − 1 or s �→ ( I − H s,w ) − 1

  70. In both cases, singularities of s �→ ( I − H s ) − 1 or s �→ ( I − H s,w ) − 1 are related to spectral properties of H s or H s,w

  71. In both cases, singularities of s �→ ( I − H s ) − 1 or s �→ ( I − H s,w ) − 1 are related to spectral properties of H s or H s,w ..... on a convenient functional space ..

  72. In both cases, singularities of s �→ ( I − H s ) − 1 or s �→ ( I − H s,w ) − 1 are related to spectral properties of H s or H s,w ..... on a convenient functional space .. .... which depends on the dynamical system (and thus the algorithm )...

  73. Average-case analysis: expected spectral properties of H_s.
(i) UDE and SG for s near 1:
  UDE – Unique Dominant Eigenvalue λ(s, w) with λ(1, 0) = 1;
  SG – existence of a Spectral Gap.
(ii) Aperiodicity: on the line ℜs = 1, s ≠ 1, the spectral radius of H_s is < 1.
On which functional space? The answer depends on the Dynamical System, and thus on the algorithm ...
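As a numerical glimpse of these spectral quantities (my illustration for the Standard algorithm, on a discretization grid rather than on the space C^1(I)): approximate H_s[f](x) = Σ_{m ≥ 1} (m + x)^{−2s} f(1/(m + x)), extract the dominant eigenvalue λ(s) by power iteration, and check that λ(1) ≈ 1 and that −λ′(1), estimated by a finite difference, is close to the entropy π²/(6 log 2) ≈ 2.37. The identity h(S) = −λ′(1) is a standard fact of this framework, though it is not stated on the slide; truncating the sum at m ≤ M makes the numerical λ(1) slightly smaller than 1.

```python
import math

def apply_Hs(f, xs, s, M=800):
    """H_s[f](x) = sum_{m>=1} (m + x)^(-2s) * f(1/(m + x)), with f interpolated on the grid xs."""
    n = len(xs) - 1
    def interp(t):
        pos = t * n
        i = min(int(pos), n - 1)
        lam = pos - i
        return (1 - lam) * f[i] + lam * f[i + 1]
    return [sum(interp(1.0 / (m + x)) / (m + x) ** (2 * s) for m in range(1, M + 1))
            for x in xs]

def dominant_eigenvalue(s, N=80, iterations=25):
    """Power iteration for the dominant eigenvalue lambda(s) of (a truncation of) H_s."""
    xs = [i / N for i in range(N + 1)]
    f = [1.0] * (N + 1)
    lam = 1.0
    for _ in range(iterations):
        f = apply_Hs(f, xs, s)
        lam = max(f)                 # sup norm as the scaling factor
        f = [y / lam for y in f]
    return lam

lam1 = dominant_eigenvalue(1.0)
eps = 0.01
slope = (dominant_eigenvalue(1.0 + eps) - dominant_eigenvalue(1.0 - eps)) / (2 * eps)
print(f"lambda(1)        ~ {lam1:.4f}   (exact value: 1)")
print(f"-lambda'(1)      ~ {-slope:.3f}")
print(f"pi^2 / (6 log 2) = {math.pi ** 2 / (6 * math.log(2)):.3f}")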

  74. The functional spaces where the triple UDE + SG + Aperiodicity holds.
  Algs                                   Geometry of branches     Convenient functional space
  Good Class (Standard, Centered)        Contracting              C^1(I)
  Binary                                 Not contracting          The Hardy space H(D)
  LSB                                    Contracting on average   Various spaces: C^0(J), C^1(J), Hölder H_α(J)
  Slow Class (Subtractive, By-Excess)    An indifferent point     Induction + C^1(I)
In each case, aperiodicity holds since the branches do not all have "the same form".

  75. The triple UDE + SG + Aperiodicity entails good properties for (I − H_s)^{−1}, sufficient for applying Tauberian Theorems to S_C(s):
– s = 1 is the only pole on the line ℜs = 1;
– expansion near the pole s = 1:  (I − H_s)^{−1} ∼ a/(s − 1);
– half-plane of convergence ℜs > 1;
– no hypothesis needed on the half-plane ℜs < 1.

  76. Second direction: a distributional study. Uniform extraction of coefficients via the Perron Formula: for F(s, w) := Σ_{m ≥ 1} a_m(w)/m^s,
  Σ_{m ≤ N} Σ_{q ≤ m} a_q = (1/2iπ) ∫_{D−i∞}^{D+i∞} F(s, w) N^{s+1} / (s(s + 1)) ds.
... A first step for estimating E_N[exp(wC)] ... uniformly in w. Perron's formula relates the MGF E_N[exp(wC)] to
  (1/2iπ) ∫_{D−i∞}^{D+i∞} S(2s, w) N^{2s+1} / (s(2s + 1)) ds = (1/2iπ) ∫_{D−i∞}^{D+i∞} (I − H_{s,w})^{−1}[1](0) N^{2s+1} / (s(2s + 1)) ds.
What can be expected of s ↦ (I − H_{s,w})^{−1} for dealing "uniformly" with the Perron Formula?

  77. Dynamical analysis of a Euclidean Algorithm.
