The fixed point theory of complexity

Yiannis N. Moschovakis
UCLA and University of Athens

A commercial for Abstract Recursion and Intrinsic Complexity, published by CUP, ASL Lecture Notes in Logic Series #48, posted on my homepage

Panhellenic Logic Symposium 12, June 26 – 30, 2019, Anogeia

Denotational semantics for programming languages

▶ Introduced in 1971 by Dana Scott and Christopher Strachey
▶ Assigns to every program E (of a well specified programming language) its denotation, the object den(E) computed by E, typically a function or relation of some sort
▶ The key mathematical tools it uses are fixed point theorems in various complete partially ordered sets (domains)
▶ It is important because it provides a precise, mathematical criterion of correctness for programs, which should compute what we wanted them to compute
▶ It has developed into a rich, intricate mathematical theory, not always easy to apply in specific cases

⋆ den(E) gives no information about the complexity of the algorithm expressed by E

The Euclidean algorithm for coprimeness

• N = {0, 1, . . . }, with division:
  gcd(x, y) = the greatest common divisor of x, y (x, y ≥ 1)
  x ⊥ y ⇐⇒ gcd(x, y) = 1
  rem(x, y) = r ⇐⇒ [x = yq + r & r < y]
  eqw(x) ⇐⇒ x = w

(ε) vars x, y; while (y ≠ 0) [(x, y) := (y, rem(x, y))]; return eq1(x)

▶ If x, y ≥ 1, then den(ε)(x, y) ⇐⇒ x ⊥ y
▶ Def. callsε(rem)(x, y) = the number of divisions (calls to rem) used by ε to decide x ⊥ y
▶ If x ≥ y and y ≥ 2, then callsε(rem)(x, y) ≤ 2 log y
▶ For a fixed r̄ > 0 and all the Fibonacci numbers Fk with k ≥ 2, callsε(rem)(Fk+1, Fk) = k − 1 ≥ r̄ log Fk+1

⋆ Is the Euclidean worst-case optimal from its primitives?
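The loop and its Fibonacci worst case can be checked directly. A small Python sketch (not from the book; `coprime_calls` and `fib` are illustrative names I introduce):

```python
# Sketch of the while-program ε, instrumented to count calls to rem.

def coprime_calls(x, y):
    """Run ε on (x, y); return (x ⊥ y, number of calls to rem)."""
    calls = 0
    while y != 0:
        x, y = y, x % y      # one call to the primitive rem
        calls += 1
    return x == 1, calls

def fib(k):
    """The k-th Fibonacci number F_k (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(k):
        a, b = b, a + b
    return a

# Consecutive Fibonacci numbers realize the worst case: exactly k - 1 divisions.
for k in range(2, 12):
    coprime, calls = coprime_calls(fib(k + 1), fib(k))
    assert coprime and calls == k - 1
```

For instance, deciding 15 ⊥ 28 takes 5 divisions, while (12, 8) fails the test after 2.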

Intensional properties of programs

▶ To extract from a program E many of its intensional properties (including complexity functions) using fixed point theory, you need to make two moves:

(1) Identify the primitives that E can call
▶ We consider only programs which compute (partial) functions and decide relations on some set A from primitives of the same kind, i.e., programs of a (partial) first order structure A = (A, Φ) of vocabulary (characteristic, similarity type) Φ
▶ For the Euclidean: Aε = (N, eq0, eq1, rem)

(2) Translate E faithfully into a recursive (McCarthy) program of A (using routine, well-understood methods):

(ε) eq1(p(x, y)) where p(x, y) = if eq0(y) then x else p(y, rem(x, y))
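The recursive (McCarthy) form of ε can be rendered directly in Python, with the primitives of Aε = (N, eq0, eq1, rem) as ordinary functions (my sketch, not the book's code):

```python
# The structure's primitives.
def rem(x, y): return x % y
def eq0(x): return x == 0
def eq1(x): return x == 1

def p(x, y):
    # p(x, y) = if eq0(y) then x else p(y, rem(x, y))
    return x if eq0(y) else p(y, rem(x, y))

def epsilon(x, y):
    # head of the program: eq1(p(x, y))
    return eq1(p(x, y))
```

So den(ε)(x, y) holds exactly when x ⊥ y: `epsilon(15, 28)` is True, `epsilon(12, 8)` is False.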

Recursive programs in the vocabulary Φ

▶ Φ-terms (pure, explicit, of boolean or individual sort, with recursive variables):

E :≡ tt | ff | vi | φ(E1, . . . , Enφ) | if E0 then E1 else E2 | q_i^{s,n}(E1, . . . , En)

where v0, v1, . . . are formal individual variables over A and each q_i^{s,n} is a formal variable over partial functions on A, of arity n and (boolean or individual) sort s

▶ Recursive Φ-programs:

E ≡ E0(x⃗) where {p1(x⃗1) = E1(x⃗1), . . . , pK(x⃗K) = EK(x⃗K)}

where each Ei is a Φ-term whose individual variables are in the list x⃗i (with x⃗0 = x⃗) and whose recursive variables are among p1, . . . , pK

▶ Semantics: the body of E defines a system of recursive equations

p1(x⃗1) = f1(x⃗1, p⃗), . . . , pK(x⃗K) = fK(x⃗K, p⃗);

this has least solutions p1, . . . , pK, and den(E) is the partial function defined by plugging these solutions into the head E0(x⃗)
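The least-solution construction can be made concrete by Kleene iteration: start from the totally undefined function and apply the body's functional until nothing changes. A minimal sketch for the Euclidean system, with Python dicts as finite partial functions over a small grid of my choosing (the actual construction is over all of A):

```python
# Least fixed point of p(x, y) = if eq0(y) then x else p(y, rem(x, y)),
# computed by Kleene iteration; a dict stands for a finite partial function.

GRID = range(20)

def functional(p):
    """One application of the monotone functional defined by the body."""
    q = dict(p)
    for x in GRID:
        for y in GRID:
            if y == 0:
                q[(x, y)] = x                 # base case, defined outright
            elif (y, x % y) in p:
                q[(x, y)] = p[(y, x % y)]     # defined at the previous stage
    return q

def least_solution():
    p = {}                                    # stage 0: totally undefined
    while True:
        q = functional(p)
        if q == p:                            # fixed point reached
            return p
        p = q
```

On this grid the least solution is gcd, e.g. `least_solution()[(12, 8)]` is 4.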

⋆ The tree-depth complexity in A = (A, Φ)

E ≡ E0(x⃗) where {p1(x⃗1) = E1(x⃗1), . . . , pK(x⃗K) = EK(x⃗K)}

▶ The convergent parts: M ∈ Conv(A, E) if M ≡ N{y⃗ :≡ ȳ} where N is a subterm of E, ȳ ∈ A^m and den((A, p1, . . . , pK), M)↓
▶ There is exactly one function D : Conv(A, E) → N such that:
(D1) D(tt) = D(ff) = D(x) = D(φ) = 0 (if arity(φ) = 0 and φ^A↓)
(D2) D(φ(M1, . . . , Mm)) = max{D(M1), . . . , D(Mm)} + 1
(D3) If M ≡ if M0 then M1 else M2, then
    D(M) = max{D(M0), D(M1)} + 1 if M0 = tt,
    D(M) = max{D(M0), D(M2)} + 1 if M0 = ff
(D4) If pi is a recursive variable of E of arity m, then
    D(pi(M1, . . . , Mm)) = max{D(M1), . . . , D(Mm), D(Ei(M1, . . . , Mm))} + 1

Proved by analyzing the construction of the least solutions p1, . . . , pK
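For the Euclidean program these clauses can be run directly. A sketch of mine (not the book's code): since (D4) substitutes the argument terms M1, M2 into the body, the depths dx, dy of those terms are carried along next to their values:

```python
def D_call(x, y, dx, dy):
    # (D4): D(p(M1, M2)) = max{D(M1), D(M2), D(E_p(M1, M2))} + 1
    return max(dx, dy, D_body(x, y, dx, dy)) + 1

def D_body(x, y, dx, dy):
    # E_p(M1, M2) ≡ if eq0(M2) then M1 else p(M2, rem(M1, M2))
    d_test = dy + 1                          # (D2): D(eq0(M2)) = D(M2) + 1
    if y == 0:                               # eq0(M2) = tt: (D3), true branch
        return max(d_test, dx) + 1
    d_rem = max(dx, dy) + 1                  # (D2): D(rem(M1, M2))
    return max(d_test, D_call(y, x % y, dy, d_rem)) + 1   # (D3), false branch

def D(x, y):
    return D_call(x, y, 0, 0)                # (D1): numerals have depth 0

assert D(5, 0) == 3
assert D(2, 1) == 6
```

Unwinding the clauses by hand for p(2, 1) gives the same value 6, which is a useful sanity check on the recursion.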

The sequential logical complexity Ls(M) (time)

By recursion on D(M):
(Ls1) Ls(tt) = Ls(ff) = Ls(x) = 0; Ls(φ) = 1 if arity(φ) = 0 and φ^A↓
(Ls2) Ls(φ(M1, . . . , Mn)) = Ls(M1) + Ls(M2) + · · · + Ls(Mn) + 1
(Ls3) If M ≡ if M0 then M1 else M2, then
    Ls(M) = Ls(M0) + Ls(M1) + 1 if M0 = tt,
    Ls(M) = Ls(M0) + Ls(M2) + 1 if M0 = ff
(Ls4) Ls(pi(M1, . . . , Mn)) = Ls(M1) + · · · + Ls(Mn) + Ls(Ei(M1, . . . , Mn)) + 1

timeE(x⃗) = ls(A, E(x⃗)) =df Ls(E0(x⃗))   (when den^A_E(x⃗)↓)

▶ timeE(x⃗) counts the number of steps required for the computation of den(E)(x⃗) using "the algorithm expressed by" E
▶ If E simulates a program E⁻, then for some K, L and all x⃗,
    timeE(x⃗) ≤ TIME(E⁻, x⃗) ≤ K·timeE(x⃗) + L
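Again for the Euclidean program, the clauses (Ls1)–(Ls4) compute time_ε(x, y) = Ls(eq1(p(x, y))); a sketch of mine, carrying the sequential costs lx, ly of the argument terms just as their values:

```python
def Ls_call(x, y, lx, ly):
    # (Ls4): Ls(p(M1, M2)) = Ls(M1) + Ls(M2) + Ls(E_p(M1, M2)) + 1
    return lx + ly + Ls_body(x, y, lx, ly) + 1

def Ls_body(x, y, lx, ly):
    # E_p(M1, M2) ≡ if eq0(M2) then M1 else p(M2, rem(M1, M2))
    l_test = ly + 1                          # (Ls2): Ls(eq0(M2))
    if y == 0:                               # eq0(M2) = tt: (Ls3), true branch
        return l_test + lx + 1
    l_rem = lx + ly + 1                      # (Ls2): Ls(rem(M1, M2))
    return l_test + Ls_call(y, x % y, ly, l_rem) + 1   # (Ls3), false branch

def time_eps(x, y):
    return Ls_call(x, y, 0, 0) + 1           # (Ls2): the head eq1(...) adds 1
```

Each division of the while-loop reappears here as one unwinding of Ls_call, so time_eps grows with the rem-call count, as the simulation bound on the slide predicts.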


slide-48
SLIDE 48

The number-of-calls complexity Cs(Φ0)(M) (calls)

By recursion on D(M), for any Φ0 ⊆ Φ:

(Cs1) Cs(Φ0)(tt) = Cs(Φ0)(ff) = Cs(Φ0)(x) = 0 (x ∈ A); and if arity(φ) = 0 and φ↓, then Cs(Φ0)(φ) = 0 if φ ∉ Φ0, and Cs(Φ0)(φ) = 1 if φ ∈ Φ0

(Cs2) If M ≡ φ(M1, . . . , Mn), then
Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) + 1 if φ ∈ Φ0, and Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) otherwise

(Cs3) If M ≡ if M0 then M1 else M2, then
Cs(Φ0)(M) = Cs(Φ0)(M0) + Cs(Φ0)(M1) if M0 = tt, and Cs(Φ0)(M) = Cs(Φ0)(M0) + Cs(Φ0)(M2) if M0 = ff

(Cs4) If M ≡ pi(M1, . . . , Mn) with pi a recursive variable of E, then
Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) + Cs(Φ0)(Ei(M1, . . . , Mn))

◮ If E simulates a program E−, then Cs(Φ0)(E0(x)) "agrees" with the number-of-Φ0-calls complexity of E−

Yiannis N. Moschovakis: The fixed point theory of complexity 7/10
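The clauses (Cs1)-(Cs4) differ from the time clauses only in what gets charged: nothing for conditionals or for the recursive calls themselves, and 1 only for primitives in Φ0. A sketch, again with my own hypothetical term encoding and a factorial example:

```python
# Hypothetical term encoding: ("tt",), ("ff",), ("var", name),
# ("prim", phi, M1, ..., Mn), ("if", M0, M1, M2), and
# ("rec", i, M1, ..., Mn) for a call to the recursive variable p_i.

def calls(M, env, prog, prims, phi0):
    """Evaluate M; return (value, number of calls to primitives in phi0),
    following (Cs1)-(Cs4): only phi in phi0 are charged, and conditionals
    and recursive calls add nothing on their own."""
    tag = M[0]
    if tag in ("tt", "ff"):
        return tag == "tt", 0                     # (Cs1)
    if tag == "var":
        return env[M[1]], 0                       # (Cs1)
    if tag == "prim":                             # (Cs2)
        vals, cost = [], 1 if M[1] in phi0 else 0
        for a in M[2:]:
            v, c = calls(a, env, prog, prims, phi0)
            vals.append(v)
            cost += c
        return prims[M[1]](*vals), cost
    if tag == "if":                               # (Cs3): branching is free
        v0, c0 = calls(M[1], env, prog, prims, phi0)
        v, c = calls(M[2] if v0 else M[3], env, prog, prims, phi0)
        return v, c0 + c
    if tag == "rec":                              # (Cs4): the call itself is free
        vals, cost = [], 0
        for a in M[2:]:
            v, c = calls(a, env, prog, prims, phi0)
            vals.append(v)
            cost += c
        params, body = prog[M[1]]
        v, c = calls(body, dict(zip(params, vals)), prog, prims, phi0)
        return v, cost + c

# Factorial again: p0(n) = if eq0(n) then one else mul(n, p0(pred(n)))
prims = {"eq0": lambda n: n == 0, "one": lambda: 1,
         "pred": lambda n: n - 1, "mul": lambda a, b: a * b}
fact_body = ("if", ("prim", "eq0", ("var", "n")),
             ("prim", "one"),
             ("prim", "mul", ("var", "n"),
              ("rec", 0, ("prim", "pred", ("var", "n")))))
prog = {0: (("n",), fact_body)}
```

With Φ0 = {mul}, computing 3! charges exactly the three multiplications and nothing else.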


slide-56
SLIDE 56

The mergesort algorithm

◮ Suppose L is ordered by ≤, L∗ is the set of all sequences from L, L∗ms = (L∗, half1, half2, ≤) is the indicated expansion of the standard Lisp structure L∗, and consider the recursive program

p(u) where
p(u) = if (tail(u) = nil) then u
       else q(p(half1(u)), p(half2(u))),
q(w, v) = if (w = nil) then v
          else if (v = nil) then w
          else if (head(w) ≤ head(v)) then cons(head(w), q(tail(w), v))
          else cons(head(v), q(w, tail(v)))

which expresses the mergesort algorithm on L∗ms

◮ The mergesort computes the sorted version sort(u) of each sequence u ∈ L∗ so that calls(≤)(u) ≤ |u| log |u| (|u| ≥ 2)

◮ It is well known that this is asymptotically the best upper bound for sorting by "deterministic comparison algorithms"

Yiannis N. Moschovakis: The fixed point theory of complexity 8/10
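A quick way to see the |u| log |u| comparison bound is to instrument an ordinary mergesort with a counter for the primitive ≤. This transcription of p and q is my own: the slide's half1/half2 are rendered as slicing at the midpoint, and q is written with an explicit loop.

```python
import math
import random

def mergesort(u, count):
    """p(u): an empty or one-element list is sorted; otherwise sort the
    two halves and merge them."""
    if len(u) <= 1:
        return u
    mid = len(u) // 2                      # half1(u), half2(u)
    return merge(mergesort(u[:mid], count), mergesort(u[mid:], count), count)

def merge(w, v, count):
    """q(w, v): merge two sorted lists, counting each call to <=."""
    out, i, j = [], 0, 0
    while i < len(w) and j < len(v):
        count[0] += 1                      # one call to the primitive <=
        if w[i] <= v[j]:
            out.append(w[i]); i += 1
        else:
            out.append(v[j]); j += 1
    return out + w[i:] + v[j:]

# Check calls(<=)(u) <= |u| log2 |u| on random inputs with |u| >= 2
for n in range(2, 200):
    u = [random.randrange(1000) for _ in range(n)]
    count = [0]
    assert mergesort(u, count) == sorted(u)
    assert count[0] <= n * math.log2(n)
```

The loop at the bottom only spot-checks random inputs; the worst-case comparison count of top-down mergesort is in fact below n log2 n for every n ≥ 2.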


slide-61
SLIDE 61

[Figure: the computation tree of a term M, with leaves tt, ff, x, φ, branching nodes for conditionals if M0 then M1 else M2 (following M1 when M0 = tt and M2 when M0 = ff), nodes φ(M1, . . . , Mn) for primitive applications, and nodes pi(M1, . . . , Mn) for recursive calls, which unfold to Ei(M1, . . . , Mn)]

◮ D(M) is the depth of the computation tree for M

Yiannis N. Moschovakis: The fixed point theory of complexity 9/10

slide-62
SLIDE 62

Complexity inequalities and Tserunyan’s Theorem

◮ For a fixed Φ-structure A and a recursive Φ-program E,

p-calls(x) ≤ calls(x) ≤ time(x),   p-calls(x) ≤ p-time(x) ≤ time(x),
calls(x) ≤ (ℓ + 1)^p-calls(x),   time(x) ≤ (ℓ + 1)^(d(x)+1)

where calls = calls(Φ), d(x) = D(E0(x)) and ℓ = the largest arity in E

⋆ (Tserunyan 2013) For every recursive Φ-program E, there is a constant Ks such that for every Φ-structure A and every x ∈ A^n, if den(A, E(x))↓, then

calls_E(x) ≤ time_E(x) ≤ Ks + Ks · calls_E(x)

◮ time_E(x) = #(logical calls)_E(x) + calls(Φ)_E(x) ≈ Ks · calls(Φ)_E(x)

◮ Perhaps this is why lower bound results are most often proved by counting calls to the primitives

Yiannis N. Moschovakis: The fixed point theory of complexity 10/10
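The shape of Tserunyan's inequality can be checked empirically on a toy program. The sketch below (the term encoding and the factorial example are my own illustrations, not the theorem's setting) evaluates a small recursive program while tracking both the Ls-style time and the number of primitive calls with Φ0 = Φ, and confirms calls ≤ time ≤ Ks + Ks · calls; for this particular program Ks = 2 already works.

```python
def run(M, env, prog, prims):
    """Evaluate M; return (value, time, calls), where time follows
    (Ls1)-(Ls4) and calls charges every primitive application."""
    tag = M[0]
    if tag in ("tt", "ff"):
        return tag == "tt", 0, 0
    if tag == "var":
        return env[M[1]], 0, 0
    if tag == "prim":                     # charged by both time and calls
        vals, t, k = [], 1, 1
        for a in M[2:]:
            v, ta, ka = run(a, env, prog, prims)
            vals.append(v); t += ta; k += ka
        return prims[M[1]](*vals), t, k
    if tag == "if":                       # a "logical" step: time only
        v0, t0, k0 = run(M[1], env, prog, prims)
        v, t, k = run(M[2] if v0 else M[3], env, prog, prims)
        return v, t0 + t + 1, k0 + k
    if tag == "rec":                      # a "logical" step: time only
        vals, t, k = [], 1, 0
        for a in M[2:]:
            v, ta, ka = run(a, env, prog, prims)
            vals.append(v); t += ta; k += ka
        params, body = prog[M[1]]
        v, tb, kb = run(body, dict(zip(params, vals)), prog, prims)
        return v, t + tb, k + kb

# p0(n) = if eq0(n) then one else mul(n, p0(pred(n)))
prims = {"eq0": lambda n: n == 0, "one": lambda: 1,
         "pred": lambda n: n - 1, "mul": lambda a, b: a * b}
fact_body = ("if", ("prim", "eq0", ("var", "n")),
             ("prim", "one"),
             ("prim", "mul", ("var", "n"),
              ("rec", 0, ("prim", "pred", ("var", "n")))))
prog = {0: (("n",), fact_body)}

Ks = 2
for n in range(12):
    _, time_, calls_ = run(("rec", 0, ("var", "n")), {"n": n}, prog, prims)
    assert calls_ <= time_ <= Ks + Ks * calls_
```

This only illustrates the inequality on one program; the theorem's point is that a single Ks works uniformly over all structures A and all inputs for a given E.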
