
SLIDE 1

Solving fixed-point equations on ω-continuous semirings

Javier Esparza

Technische Universit¨ at M¨ unchen

Joint work with Stefan Kiefer and Michael Luttenberger

SLIDE 2

From programs to flowgraphs

[Flowgraph figure: three procedures X1, X2, X3, with transitions labelled a, b, c, d, e, f, g]

SLIDE 3

From flowgraphs to equations

A syntactic transformation:

X1 = a · X1 · X2 + b
X2 = c · X2 · X3 + d · X2 · X1 + e
X3 = f · X1 · X3 + g

But how should the equations be interpreted mathematically?

− What kind of objects are a, . . . , g?
− What kind of operations are sum and product?

It depends: different interpretations lead to different semantics.

SLIDE 5

Input/output relational semantics

Interpret a, . . . , g as assignments or guards over a set of program variables V with set of valuations Val.

R(Xi) = set of pairs (v, v′) ∈ Val × Val such that Xi, started at v, may terminate at v′.

SLIDE 6

Language semantics

Interpret the atomic actions as letters of an alphabet A. L(Xi) = words w ∈ A∗ such that Xi can execute w and terminate.

SLIDE 7

Language semantics

Interpret the atomic actions as letters of an alphabet A. L(Xi) = words w ∈ A∗ such that Xi can execute w and terminate. ( L(X1), L(X2), L(X3) ) is the least solution of the equations under the following interpretation:

  • Universe: 2A∗ (languages over A).
  • a, . . . , g are the singleton languages {a}, . . . , {g}.
  • sum is union of languages, product is concatenation:

L1 · L2 = {w1w2 | w1 ∈ L1 ∧ w2 ∈ L2}
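These operations are easy to experiment with using Python sets. The sketch below (illustrative names) iterates the example equations starting from the empty language, computing finite under-approximations of L(X1), L(X2), L(X3):

```python
# Iterating the language interpretation of the example system:
#   X1 = a·X1·X2 + b,  X2 = c·X2·X3 + d·X2·X1 + e,  X3 = f·X1·X3 + g
# Sum is union of languages, product is elementwise concatenation.
def cat(A, B):
    return {u + v for u in A for v in B}

def step(L1, L2, L3):
    return (cat({'a'}, cat(L1, L2)) | {'b'},
            cat({'c'}, cat(L2, L3)) | cat({'d'}, cat(L2, L1)) | {'e'},
            cat({'f'}, cat(L1, L3)) | {'g'})

L1, L2, L3 = set(), set(), set()   # start from the empty language
for _ in range(3):
    L1, L2, L3 = step(L1, L2, L3)
print(sorted(L1))                  # a finite under-approximation of L(X1)
```

The approximants grow with each step but stay finite, which is exactly why this style of iteration can never reach an infinite least solution.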

SLIDE 8

Probabilistic termination semantics

Interpret a, . . . , g as probabilities. T(Xi) = probability that Xi terminates.

SLIDE 9

Probabilistic termination semantics

Interpret a, . . . , g as probabilities. T(Xi) = probability that Xi terminates. ( T(X1), T(X2), T(X3) ) is the least solution of the equations under the following interpretation:

  • Universe: R+
  • a, . . . , g are the probabilities of taking the transitions
  • sum and product are addition and multiplication of reals
SLIDE 10

ω-continuous semirings

Underlying mathematical structure: ω-continuous semirings.

Algebra (C, +, ·, 0, 1) where

− (C, +, 0) is a commutative monoid
− (C, ·, 1) is a monoid
− · distributes over +
− 0 · a = a · 0 = 0
− a ⊑ a + b defines a partial order
− ⊑-chains have limits

System of (w.l.o.g. quadratic) equations X = f(X) where

  • X = (X1, . . . , Xn) vector of variables,
  • f(X) = (f1(X), . . . , fn(X)) vector of terms over C ∪ {X1, . . . , Xn}.

Notice: the fi are polynomials

SLIDE 11

Kleenean program analysis

Theorem [Kleene]: The least solution µf is the supremum of {ki}i≥0, where

k0 = f(0)
ki+1 = f(ki)

Basic algorithm for computing µf: compute k0, k1, k2, . . . until either ki = ki+1 or the approximation is considered adequate.

SLIDE 12

Kleenean program analysis is slow

Set interpretations: Kleene iteration never terminates if µf is an infinite set.

  • X = a · X + b has least solution µf = a∗b
  • but the Kleene approximants are finite sets: ki = (ε + a + . . . + ai)b

Probabilistic interpretation: convergence can be very slow [EY STACS05].

  • X = (1/2) X2 + (1/2) has least solution µf = 1 = 0.99999 . . .
  • "Logarithmic convergence": k iterations to get log k bits of accuracy:

kn ≤ 1 − 1/(n + 1),  k2000 = 0.9990
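The slow probabilistic example can be checked numerically; a minimal sketch (function names are illustrative):

```python
# Kleene iteration for X = (1/2) X^2 + (1/2) over the nonnegative reals.
def f(x):
    return 0.5 * x * x + 0.5

def kleene(n):
    x = 0.0
    for _ in range(n):
        x = f(x)        # k_{i+1} = f(k_i)
    return x

print(kleene(2000))     # creeps toward the least fixed point 1
```

Even after 2000 iterations the iterate is still about 0.999, matching the bound on the slide.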

SLIDE 13

Kleene Iteration for X = f(X) (univariate case)

[Figure: plot of f(X); the Kleene iterates approach the least fixed point µf from below]

SLIDE 21

Newton’s Method for X = f(X) (univariate case)

[Figure: plot of f(X); the Newton iterates approach the least fixed point µf much faster]

SLIDE 27

Evaluation of Newton’s method

Newton’s Method is usually very efficient

  • often exponential convergence

. . . but not robust:

  • may not converge, converge only locally (in some neighborhood of the least fixed-point), or converge very slowly.

SLIDE 28

A puzzling mismatch

Program analysis:

  • General domain: arbitrary ω-continuous semirings
  • Kleene Iteration is robust and generally applicable
  • . . . but converges slowly.

Numerical mathematics:

  • Particular domain: the real field
  • Newton’s Method converges fast
  • . . . but is not robust
SLIDE 29

Our main result

  • Newton’s Method can be defined for arbitrary ω-continuous semirings, and becomes as robust as Kleene’s method.

SLIDE 30

Mathematical formulation of Newton’s Method

Let ν be some approximation of µf. (We start with ν = f(0).)

  • Compute the function Tν(X) describing the tangent to f(X) at ν
  • Solve X = Tν(X) (instead of X = f(X)), and take the solution as the new approximation

Elementary analysis: Tν(X) = Df ν(X) + f(ν) − ν, where Df x0(X) is the differential of f at x0.

So:

ν0 = f(0)
νi+1 = νi + ∆i where ∆i is the solution of X = Df νi(X) + f(νi) − νi
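On the reals this is ordinary Newton iteration for f(X) − X = 0; a minimal numeric sketch for the slow example X = (1/2)X2 + (1/2), with illustrative names:

```python
# Newton's method for the fixed-point equation X = f(X):
#   nu_{i+1} = nu_i + (f(nu_i) - nu_i) / (1 - f'(nu_i)),  nu_0 = f(0).
def f(x):
    return 0.5 * x * x + 0.5

def df(x):               # derivative of f
    return x

def newton(n):
    nu = f(0.0)          # nu_0 = f(0)
    for _ in range(n):
        delta = f(nu) - nu
        nu = nu + delta / (1.0 - df(nu))
    return nu

print(newton(20))        # far closer to mu f = 1 than 20 Kleene steps
```

For this example each Newton step halves the distance to µf = 1, while a Kleene step shrinks it only by roughly 1/n.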

SLIDE 31

Generalizing Newton’s method

Key point: generalize X = Df ν(X) + f(ν) − ν.

In an arbitrary ω-continuous semiring

  • neither the differential Df ν(X), nor
  • the difference f(ν) − ν

is defined.

SLIDE 32

Differentials in semirings

Standard solution: take the algebraic definition

Df ν(X) =
  0                                    if f(X) = c
  X                                    if f(X) = X
  Dg ν(X) + Dh ν(X)                    if f(X) = g(X) + h(X)
  Dg ν(X) · h(ν) + g(ν) · Dh ν(X)      if f(X) = g(X) · h(X)
  ∑i∈I Dfi ν(X)                        if f(X) = ∑i∈I fi(X)

SLIDE 33

The difference f(νi) − νi

Solution: Replace f(νi) − νi by any δi such that f(νi) = νi + δi νi+1 = νi + ∆i where ∆i solution of X = Df νi(X) + δi But does δi always exist? Proposition: Yes. But νi+i depends on your choice of δi ! Theorem: No, it doesn’t. Can’t you give a closed form for νi+1 ? Proposition: Yes. The least solution of X = Df νi(X) + δi is Df ∗

νi(δi):= ∞

  • j=0

Df j

νi(δi)

and so: νi+1 = νi + Df ∗

νi(δi)

SLIDE 41

Theorem [EKL DLT07]: Let X = f(X) be an equation over an arbitrary ω-continuous semiring. The sequence

ν0 = f(0)
νi+1 = νi + Df∗νi(δi), where δi satisfies f(νi) = νi + δi,

exists, is unique and satisfies ki ⊑ νi ⊑ µf for every i ≥ 0.
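The sandwich ki ⊑ νi ⊑ µf can be observed numerically on the running real-valued example; a small sketch with illustrative names:

```python
# Checking k_i <= nu_i <= mu f for f(X) = 0.5*X**2 + 0.5 over the
# nonnegative reals, where mu f = 1 (Newton uses f'(x) = x).
def f(x):
    return 0.5 * x * x + 0.5

k, nu = f(0.0), f(0.0)                    # k_0 = nu_0 = f(0)
for i in range(20):
    k = f(k)                              # Kleene step
    nu = nu + (f(nu) - nu) / (1.0 - nu)   # Newton step
    assert k <= nu <= 1.0, (i, k, nu)
print("k_20  =", k)
print("nu_20 =", nu)
```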

SLIDE 42

Multivariate case

Systems of equations:

  • νi, ∆i, δi become vectors (elements of Sn)
  • The differential becomes a function Sn → Sn

Geometric intuition: Df νi(X1, . . . , Xn) is the hyperplane tangent to f at the (n-dimensional) point νi

SLIDE 43

Derivation trees I

An equation X = f(X) induces a context-free grammar G : X → f(X). Examples:

X = 0.7X2 + 0.3 induces X → 0.7 X X | 0.3

X = 0.2XY + 0.8 and Y = 0.7XY + 0.3 induce X → 0.2 X Y | 0.8 and Y → 0.7 X Y | 0.3

(Actually one grammar for each variable, differing only in the axiom.)

SLIDE 44

Derivation trees II

Assign to a derivation tree t its yield Y(t):

Y(t) = (ordered) product of t’s leaves

Assign to a set T of derivation trees its yield Y(T):

Y(T) = ∑t∈T Y(t)

Example: X → 0.7 X X | 0.3

SLIDE 45

Derivation trees III

Proposition: Let D be the set of all derivation trees of G. Then µf = Y(D).

SLIDE 46

Approximants as yields: Kleene

Proposition: The i-th Kleene approximant ki is the yield of all derivation trees of depth at most i.

SLIDE 47

Approximants as yields: Newton

Main Theorem: The i-th Newton approximant νi is the yield of all derivation trees of dimension at most i.

SLIDE 48

Understanding dimension I

A derivation tree has dimension k if at least one of its derivations X ⇒ w1 ⇒ w2 ⇒ . . . ⇒ wn ⇒ w is such that each of w1, . . . , wn contains at most k occurrences of non-terminals (and at least one of them contains exactly k occurrences).

[Figure: the derivation tree of the derivation below, with internal nodes X and leaves a, b]

X ⇒ aXX ⇒ abX ⇒ abaXX ⇒ ababX ⇒ abaaa

SLIDE 49

Understanding dimension II

A derivation tree has dimension 0 if it has one node. A derivation tree has dimension k > 0 if it consists of a spine with subtrees of dimension at most k − 1 (and at least one subtree of dimension k − 1).
SLIDE 50

Understanding dimension II

A derivation tree has dimension 0 if it has one node. A derivation tree has dimension k > 0 if it consists of a spine with subtrees of dimension at most k − 1 (and at least one subtree of dimension k − 1).

[Figure: a spine with pendant subtrees, each of dimension at most k − 1]
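One way to read this recursive definition in code: represent a tree as a nested tuple of its children (a hypothetical encoding), with () for a single-node tree. This is a sketch of the recursion above, not the paper’s formal definition:

```python
# Dimension of a derivation tree: a single node has dimension 0; an inner
# node has dimension max(d, 1) if the maximal child dimension d occurs only
# once among its children, and d + 1 if two children both reach d (two such
# subtrees cannot hang off the same spine).
def dimension(tree):
    if not tree:                       # single node
        return 0
    ds = sorted(dimension(child) for child in tree)
    d = ds[-1]
    if len(ds) >= 2 and ds[-2] == d:   # maximal dimension occurs twice
        return d + 1
    return max(d, 1)

leaf = ()
print(dimension(leaf))                           # 0
print(dimension((leaf, leaf)))                   # 1
print(dimension(((leaf, leaf), (leaf, leaf))))   # 2
```

A pure chain (spine with single leaves) has dimension 1, while a full binary tree of depth d has dimension d, so dimension grows much more slowly than depth.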

SLIDE 51

The proof

Theorem [EKL DLT07]: Let X = f(X) be an equation over an arbitrary ω-continuous semiring. The Newton sequence {νi}i≥0 is unique and satisfies ki ⊑ νi ⊑ µf for every i ≥ 0.

Proof:
− Uniqueness: follows from the tree characterization.
− ki ⊑ νi: trees of depth at most i have dimension at most i.
− νi ⊑ µf: the yield of all trees of dimension at most i is smaller than or equal to the yield of all trees.

SLIDE 55

Idempotent semirings: derivation tree analysis

Idempotent semiring: a + a = a.

Technique for computing µf algebraically:

(1) Identify a set T of derivation trees such that Y(T) can be computed algebraically.
(2) Show that Y(t) ⊑ Y(T) holds for every derivation tree t.

µf = Y(D)          (proposition above)
   = ∑t∈D Y(t)     (definition of yield)
   ⊑ ∑t∈D Y(T)     (Y(t) ⊑ Y(T))
   = Y(T)          (idempotence)

SLIDE 57

Commutative idempotent semirings

Theorem [Hopkins-Kozen LICS ’99]: The least fixed point of a system X = f(X) of n equations over an ω-continuous idempotent and commutative semiring is reached by the sequence

ν0 = f(0)
νi+1 = J(νi)∗ · f(νi)

after at most O(3^n) iterations.

Theorem [EKL STACS’07]: This is exactly Newton’s sequence. The fixed point is reached after at most n iterations, i.e. µf = νn.

SLIDE 61

Proof with derivation tree analysis

Lemma: Let X = f(X) be a system of n equations over an ω-continuous idempotent and commutative semiring. For every derivation tree t there is another tree t′ of dimension at most n such that Y(t) = Y(t′).

Theorem: µf = νn.

Proof: Let Tn be the set of trees of dimension at most n. Then Y(Tn) = νn ⊑ µf, and

µf = ∑t∈D Y(t)       (definition of yield)
   = ∑t∈D Y(t′)      (Y(t) = Y(t′))
   = ∑t′∈Tn Y(t′)    (t′ ∈ Tn, idempotence)
   ⊑ Y(Tn) = νn

SLIDE 62

An example

The Newton sequence terminates for all idempotent and commutative analyses; the Kleene sequence does not.

X = a · X · X + b
f′(X) = a · X + a · X = a · X

For one equation: µf = ν1 = f′(ν0)∗ · ν0

We obtain: ν0 = b, ν1 = (ab)∗b

SLIDE 63

Other results proved by derivation tree analysis

Star-distributive commutative semirings: (a + b)∗ = a∗ + b∗.

µf = Df∗fn(0)(f(0)) · f(0)

(improving the complexity of an algorithm for computing the throughput of context-free grammars due to Caucal et al.)

Lossy semirings: a ⊑ 1 for every a ≠ 0.

µf = Df∗fn(0)(f(0)) · f(0)

(algebraic version of a result by Courcelle)

SLIDE 64

Having fun: Secondary structure of RNA

(image by Bassi, Costa, Michel; www.cgm.cnrs-gif.fr/michel/)

SLIDE 65

A stochastic context-free grammar

[Knudsen, Hein 99]: Model the distribution of secondary structures as the derivation trees of the following stochastic context-free grammar:

L → C L   (0.869)     L → C       (0.131)
S → p S p (0.788)     S → C L     (0.212)
C → s     (0.895)     C → p S p   (0.105)

Graphical interpretation:

[Figure: a secondary structure for the string below, with p−p base pairs and unpaired letters s]

sssppsssspsssssspssppsssspssssspss

SLIDE 66

Visualizing the index of a derivation

Dimension = depth of the red tree + 1

SLIDE 69

The grammar leads to two equation systems:

L = C · L + C
S = p · S · p + C · L
C = s + p · S · p

ˆL = 0.869 · ˆC · ˆL + 0.131 · ˆC
ˆS = 0.788 · ˆS + 0.212 · ˆC · ˆL
ˆC = 0.895 + 0.105 · ˆS

ν0(L) = ∑ der. of dim. ≤ 1      ˆν0(L) = 0.5585
ν1(L) = ∑ der. of dim. ≤ 2      ˆν1(L) = 0.8050
ν2(L) = ∑ der. of dim. ≤ 3      ˆν2(L) = 0.9250
ν3(L) = ∑ der. of dim. ≤ 4      ˆν3(L) = 0.9789
ν4(L) = ∑ der. of dim. ≤ 5      ˆν4(L) = 0.9972
ν5(L) = ∑ der. of dim. ≤ 6      ˆν5(L) = 0.9999
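As a numeric cross-check, the hatted system can be solved with the generalized Newton iteration from the earlier slides, solving the linear system of each step by Gaussian elimination; a pure-Python sketch with illustrative names:

```python
# Newton's method for the numerical RNA system
#   L = 0.869*C*L + 0.131*C,  S = 0.788*S + 0.212*C*L,  C = 0.895 + 0.105*S
# Each step solves (I - J(nu)) * delta = f(nu) - nu, i.e. delta = Df*_nu(delta_i).
def f(v):
    L, S, C = v
    return [0.869*C*L + 0.131*C, 0.788*S + 0.212*C*L, 0.895 + 0.105*S]

def jacobian(v):
    L, S, C = v
    return [[0.869*C, 0.0,   0.869*L + 0.131],
            [0.212*C, 0.788, 0.212*L],
            [0.0,     0.105, 0.0]]

def solve(A, b):                       # tiny Gaussian elimination with pivoting
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            factor = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= factor * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

nu = f([0.0, 0.0, 0.0])                # nu_0 = f(0)
for _ in range(6):
    J, fv = jacobian(nu), f(nu)
    delta = solve([[(1.0 if r == c else 0.0) - J[r][c] for c in range(3)]
                   for r in range(3)], [fv[r] - nu[r] for r in range(3)])
    nu = [nu[r] + delta[r] for r in range(3)]
    print(round(nu[0], 4))             # successive Newton approximants of L-hat
```

The first printed value is 0.5585, in agreement with the ˆν(L) column above.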

SLIDE 70

Conclusions

Newton did it all.