The fixed point theory of complexity

Yiannis N. Moschovakis, UCLA and University of Athens

A commercial for Abstract Recursion and Intrinsic Complexity, published by CUP, ASL Lecture Notes in Logic #48; posted on my homepage

Panhellenic Logic Symposium 12, June 26 – 30, 2019, Anogeia
Denotational semantics for programming languages

▶ Introduced in 1971 by Dana Scott and Christopher Strachey
▶ Assigns to every program E (of a well specified programming language) its denotation, the object den(E) computed by E, typically a function or relation of some sort
▶ The key mathematical tools it uses are fixed point theorems in various complete partially ordered sets (domains)
▶ It is important because it provides a precise, mathematical criterion of correctness for programs, which should compute what we wanted them to compute
▶ It has developed into a rich, intricate mathematical theory, not always easy to apply in specific cases
⋆ den(E) gives no information about the complexity of the algorithm expressed by E

Yiannis N. Moschovakis: The fixed point theory of complexity 1/10
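The starred point can be made concrete with a small example (not from the talk): two programs with the same denotation but very different costs, so den(E) alone cannot distinguish them.

```python
# Two exponentiation programs with the same denotation x^n,
# counted by number of multiplications.

def pow_slow(x, n, count):
    """x^n by repeated multiplication: n - 1 multiplications."""
    if n == 1:
        return x
    count[0] += 1
    return x * pow_slow(x, n - 1, count)

def pow_fast(x, n, count):
    """x^n by repeated squaring: O(log n) multiplications."""
    if n == 1:
        return x
    half = pow_fast(x, n // 2, count)
    count[0] += 1
    result = half * half
    if n % 2 == 1:
        count[0] += 1
        result *= x
    return result

c1, c2 = [0], [0]
assert pow_slow(2, 16, c1) == pow_fast(2, 16, c2) == 65536
assert c1[0] == 15 and c2[0] == 4   # same value, very different cost
```

Both programs denote the same function, yet one uses 15 multiplications on input (2, 16) and the other only 4: the complexity lives in the program, not in its denotation.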
The Euclidean algorithm for coprimeness

N = {0, 1, . . .}, with division
gcd(x, y) = the greatest common divisor of x, y (x, y ≥ 1)
x ⊥ y ⇐⇒ gcd(x, y) = 1
rem(x, y) = r ⇐⇒ [x = yq + r & r < y],   eqw(x) ⇐⇒ x = w

(ε)  vars x, y; while (y ≠ 0) [(x, y) := (y, rem(x, y))]; return eq1(x)

▶ If x, y ≥ 1, then den(ε)(x, y) ⇐⇒ x ⊥ y
▶ Def. callsε(rem)(x, y) = the number of divisions (calls to rem) used by ε to decide x ⊥ y
▶ If x ≥ y and y ≥ 2, then callsε(rem)(x, y) ≤ 2 log y
▶ For a fixed r̄ > 0 and all the Fibonacci numbers F_k with k ≥ 2, callsε(rem)(F_{k+1}, F_k) = k − 1 ≥ r̄ log F_{k+1}

⋆ Is the Euclidean worst-case optimal from its primitives?

Yiannis N. Moschovakis: The fixed point theory of complexity 2/10
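The division counts on the slide are easy to check empirically; here is a small Python sketch, with `%` playing the role of rem and log taken base 2:

```python
import math

def coprime_calls(x, y):
    """Run the Euclidean program ε and return (x ⊥ y, number of calls to rem)."""
    calls = 0
    while y != 0:
        x, y = y, x % y          # (x, y) := (y, rem(x, y))
        calls += 1
    return x == 1, calls

# Fibonacci numbers F_1 = F_2 = 1, F_3 = 2, ... (F[0] is padding)
F = [0, 1, 1]
for _ in range(20):
    F.append(F[-1] + F[-2])

for k in range(2, 20):
    cp, calls = coprime_calls(F[k + 1], F[k])
    assert cp                                  # consecutive Fibonaccis are coprime
    assert calls == k - 1                      # the claimed worst-case count
    if F[k] >= 2:
        assert calls <= 2 * math.log2(F[k])    # the slide's upper bound
```

Consecutive Fibonacci inputs realize the worst case exactly: k − 1 divisions for (F_{k+1}, F_k), against the general upper bound of 2 log y divisions.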
Intensional properties of programs

▶ To extract from a program E many of its intensional properties (including complexity functions) using fixed point theory, you need to make two moves:

(1) Identify the primitives that E can call
▶ We consider only programs which compute (partial) functions and decide relations on some set A from primitives of the same kind, i.e., programs of a (partial) first order structure A = (A, Φ) of vocabulary (characteristic, similarity type) Φ
▶ For the Euclidean: A_ε = (N, eq0, eq1, rem)

(2) Translate E faithfully into a recursive (McCarthy) program of A (using routine, well-understood methods):

(ε)  eq1(p(x, y)) where p(x, y) = if eq0(y) then x else p(y, rem(x, y))

Yiannis N. Moschovakis: The fixed point theory of complexity 3/10
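The recursive (McCarthy) form of ε transcribes directly into executable code; a minimal sketch, with `y == 0`, `x % y` and `== 1` standing in for the primitives eq0, rem, eq1:

```python
def p(x, y):
    """p(x, y) = if eq0(y) then x else p(y, rem(x, y))"""
    return x if y == 0 else p(y, x % y)

def epsilon(x, y):
    """The head eq1(p(x, y)): decide x ⊥ y."""
    return p(x, y) == 1

assert epsilon(15, 28)        # gcd(15, 28) = 1
assert not epsilon(12, 18)    # gcd(12, 18) = 6
```

Note that p itself computes gcd(x, y) for x, y ≥ 1; the head only tests whether that value is 1.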
Recursive programs in the vocabulary Φ

▶ Φ-terms (pure, explicit, of boolean or ind sort, with rec variables):

E :≡ tt | ff | v_i | φ(E1, . . . , E_{n_φ}) | if E0 then E1 else E2 | q_i^{s,n}(E1, . . . , En)

where v0, v1, . . . are formal individual variables over A; each q_i^{s,n} is a formal variable over partial functions on A, of arity n and (boolean or individual) sort s

▶ Recursive Φ-programs:

E ≡ E0(x⃗) where {p1(x⃗1) = E1(x⃗1), . . . , pK(x⃗K) = EK(x⃗K)}

where each Ei is a Φ-term whose individual variables are in the list x⃗i (with x⃗0 = x⃗) and whose recursive variables are among p1, . . . , pK

▶ Semantics: The body of E defines a system of recursive equations

p1(x⃗1) = f1(x⃗1, p⃗), . . . , pK(x⃗K) = fK(x⃗K, p⃗);

this has least solutions p1, . . . , pK, and den(E) is the partial function defined by plugging these solutions into the head E0(x⃗)

Yiannis N. Moschovakis: The fixed point theory of complexity 4/10
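The least solutions can be reached by Kleene iteration: start from the nowhere-defined partial function and repeatedly apply the monotone operator given by the body. A sketch for the Euclidean equation, restricted to a finite fragment of N so the iteration is computable (the dict encoding of partial functions is our own):

```python
# Least solution of p(x, y) = if eq0(y) then x else p(y, rem(x, y)),
# computed by Kleene iteration on a finite fragment of N.
# Partial functions are modeled as dicts; bottom is the empty dict.

def step(p):
    """One application of the monotone operator given by the body of the equation."""
    q = {}
    for x in range(1, 30):
        for y in range(30):
            if y == 0:
                q[(x, y)] = x                    # eq0(y) branch
            elif (y, x % y) in p:
                q[(x, y)] = p[(y, x % y)]        # recursive call, if already defined
    return q

p = {}                                           # the nowhere-defined partial function
while True:
    q = step(p)
    if q == p:                                   # least fixed point on this fragment
        break
    p = q

assert p[(12, 18)] == 6                          # gcd(12, 18)
assert p[(15, 28)] == 1                          # so 15 ⊥ 28
```

Each iteration extends the approximation by one more layer of recursion depth, and the iteration stabilizes once every division chain in the fragment has bottomed out at y = 0.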
⋆ The tree-depth complexity in A = (A, Φ)

E ≡ E0(x⃗) where {p1(x⃗1) = E1(x⃗1), . . . , pK(x⃗K) = EK(x⃗K)}

▶ The convergent parts: M ∈ Conv(A, E) if M ≡ N{y⃗ :≡ y⃗} where N is a subterm of E, y⃗ ∈ A^m and M = den((A, p1, . . . , pK), M)↓
▶ There is exactly one function D : Conv(A, E) → N such that:

(D1) D(tt) = D(ff) = D(x) = D(φ) = 0 (if arity(φ) = 0 and φ^A↓)
(D2) D(φ(M1, . . . , Mm)) = max{D(M1), . . . , D(Mm)} + 1
(D3) If M ≡ if M0 then M1 else M2, then
     D(M) = max{D(M0), D(M1)} + 1, if M0 = tt;
     D(M) = max{D(M0), D(M2)} + 1, if M0 = ff
(D4) If pi is a recursive variable of E of arity m, then
     D(pi(M1, . . . , Mm)) = max{D(M1), . . . , D(Mm), D(Ei(M1, . . . , Mm))} + 1

Proved by analyzing the construction of the least solutions p1, . . . , pK

Yiannis N. Moschovakis: The fixed point theory of complexity 5/10
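Clauses (D1)–(D4) are directly executable for the single-equation Euclidean program. A toy sketch, under an assumed tuple encoding of terms (`'num'`, `'eq0'`, `'rem'`, `'if'`, `'p'` tags are our own, not from the talk):

```python
# Tree-depth D on convergent parts of p(x, y) = if eq0(y) then x else p(y, rem(x, y)).

def den(t):
    """Denotation of a closed term, using the least solution of p."""
    tag = t[0]
    if tag == 'num':
        return t[1]
    if tag == 'eq0':
        return den(t[1]) == 0
    if tag == 'rem':
        return den(t[1]) % den(t[2])
    if tag == 'if':
        return den(t[2]) if den(t[1]) else den(t[3])
    if tag == 'p':
        x, y = den(t[1]), den(t[2])
        return x if y == 0 else den(('p', ('num', y), ('num', x % y)))

def body(m1, m2):
    """E1(M1, M2): the body of p with terms M1, M2 substituted for its variables."""
    return ('if', ('eq0', m2), m1, ('p', m2, ('rem', m1, m2)))

def D(t):
    tag = t[0]
    if tag == 'num':                                    # (D1): depth-0 leaves
        return 0
    if tag in ('eq0', 'rem'):                           # (D2)
        return max(D(a) for a in t[1:]) + 1
    if tag == 'if':                                     # (D3): only the branch taken counts
        branch = t[2] if den(t[1]) else t[3]
        return max(D(t[1]), D(branch)) + 1
    if tag == 'p':                                      # (D4)
        return max(D(t[1]), D(t[2]), D(body(t[1], t[2]))) + 1

assert den(('p', ('num', 12), ('num', 18))) == 6        # gcd(12, 18)
assert D(('p', ('num', 1), ('num', 0))) == 3
```

The uniqueness claim on the slide shows up here as the fact that D needs no extra choices: each clause determines the value from the values on smaller convergent parts.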
The sequential logical complexity Ls(M) (time)

By recursion on D(M):

(Ls1) Ls(tt) = Ls(ff) = Ls(x) = 0, Ls(φ) = 1 if arity(φ) = 0 and φ^A↓
(Ls2) Ls(φ(M1, . . . , Mn)) = Ls(M1) + Ls(M2) + · · · + Ls(Mn) + 1
(Ls3) If M ≡ if M0 then M1 else M2, then
      Ls(M) = Ls(M0) + Ls(M1) + 1, if M0 = tt;
      Ls(M) = Ls(M0) + Ls(M2) + 1, if M0 = ff
(Ls4) Ls(pi(M1, . . . , Mn)) = Ls(M1) + · · · + Ls(Mn) + Ls(Ei(M1, . . . , Mn)) + 1

time_E(x⃗) = ls(A, E(x⃗)) =df Ls(E0(x⃗))   (den^A_E(x⃗)↓)

▶ time_E(x⃗) counts the number of steps required for the computation of den(E)(x⃗) using "the algorithm expressed by" E
▶ If E simulates a program E⁻, then for some K, L and all x⃗, time_E(x⃗) ≤ TIME(E⁻, x⃗) ≤ K·time_E(x⃗) + L

Yiannis N. Moschovakis: The fixed point theory of complexity 6/10
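Like D, the clauses (Ls1)–(Ls4) can be run directly on the Euclidean equation. The same toy tuple encoding of terms as before is assumed (our own, not from the talk); the difference from D is that Ls sums costs where D takes maxima, which is why Ls reads as sequential time:

```python
# Sequential logical complexity Ls for p(x, y) = if eq0(y) then x else p(y, rem(x, y)).

def den(t):
    """Denotation of a closed term, using the least solution of p."""
    tag = t[0]
    if tag == 'num':
        return t[1]
    if tag == 'eq0':
        return den(t[1]) == 0
    if tag == 'rem':
        return den(t[1]) % den(t[2])
    if tag == 'if':
        return den(t[2]) if den(t[1]) else den(t[3])
    if tag == 'p':
        x, y = den(t[1]), den(t[2])
        return x if y == 0 else den(('p', ('num', y), ('num', x % y)))

def body(m1, m2):
    """E1(M1, M2): the body of p with terms M1, M2 substituted for its variables."""
    return ('if', ('eq0', m2), m1, ('p', m2, ('rem', m1, m2)))

def Ls(t):
    tag = t[0]
    if tag == 'num':                                  # (Ls1): variables cost 0
        return 0
    if tag in ('eq0', 'rem'):                         # (Ls2): sum of subterms, plus the call
        return sum(Ls(a) for a in t[1:]) + 1
    if tag == 'if':                                   # (Ls3): only the branch taken is charged
        branch = t[2] if den(t[1]) else t[3]
        return Ls(t[1]) + Ls(branch) + 1
    if tag == 'p':                                    # (Ls4): arguments, then the unfolded body
        return sum(Ls(a) for a in t[1:]) + Ls(body(t[1], t[2])) + 1

assert Ls(('p', ('num', 1), ('num', 0))) == 3
assert Ls(('p', ('num', 2), ('num', 1))) == 8
```

Each extra division in the Euclidean run adds a fixed number of Ls-steps, which is the source of the linear simulation bound time_E(x⃗) ≤ TIME(E⁻, x⃗) ≤ K·time_E(x⃗) + L quoted on the slide.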
The sequential logical complexity Ls(M) (time)
By recursion on D(M): (Ls1) Ls(t t) = Ls(ff) = Ls(x) = 0, Ls(φ) = 1 if arity(φ) = 0 and φA↓ (Ls2) Ls(φ(M1, . . . , Mn)) = Ls(M1) + Ls(M2) + · · · + Ls(Mn) + 1 (Ls3) If M ≡ if M0 then M1 else M2, then Ls(M) =
- Ls(M0) + Ls(M1) + 1
if M0 = t t, Ls(M0) + Ls(M2) + 1 if M0 = ff (Ls4) Ls(pi(M1, . . . , Mn)) = Ls(M1) + · · · + Ls(Mn) + Ls(Ei(M1, . . . , Mn)) + 1 timeE( x) = ls(A, E( x)) =df Ls(E0( x)) (denA
E(
x)↓)
I timeE(
x) counts the number of steps required for the computation of den(E)( x) using “the algorithm expressed by” E ◮ If E simulates a program E −, then for some K, L and all x, timeE( x) ≤ TIME(E −, x) ≤ KtimeE( x) + L
Yiannis N. Moschovakis: The fixed point theory of complexity 6/10
The sequential logical complexity Ls(M) (time)
By recursion on D(M): (Ls1) Ls(t t) = Ls(ff) = Ls(x) = 0, Ls(φ) = 1 if arity(φ) = 0 and φA↓ (Ls2) Ls(φ(M1, . . . , Mn)) = Ls(M1) + Ls(M2) + · · · + Ls(Mn) + 1 (Ls3) If M ≡ if M0 then M1 else M2, then Ls(M) =
- Ls(M0) + Ls(M1) + 1
if M0 = t t, Ls(M0) + Ls(M2) + 1 if M0 = ff (Ls4) Ls(pi(M1, . . . , Mn)) = Ls(M1) + · · · + Ls(Mn) + Ls(Ei(M1, . . . , Mn)) + 1 timeE( x) = ls(A, E( x)) =df Ls(E0( x)) (denA
E(
x)↓)
I timeE(
x) counts the number of steps required for the computation of den(E)( x) using “the algorithm expressed by” E ◮ If E simulates a program E −, then for some K, L and all x, timeE( x) ≤ TIME(E −, x) ≤ KtimeE( x) + L
The number-of-calls complexity Cs(Φ0)(M) (calls)

By recursion on D(M), for any Φ0 ⊆ Φ:

(Cs1) Cs(Φ0)(tt) = Cs(Φ0)(ff) = Cs(Φ0)(x) = 0 (x ∈ A); and if arity(φ) = 0 and φ↓, then Cs(Φ0)(φ) = 1 if φ ∈ Φ0 and Cs(Φ0)(φ) = 0 otherwise

(Cs2) If M ≡ φ(M1, . . . , Mn), then
      Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) + 1 if φ ∈ Φ0,
      Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) otherwise

(Cs3) If M ≡ if M0 then M1 else M2, then
      Cs(Φ0)(M) = Cs(Φ0)(M0) + Cs(Φ0)(M1) if M0 = tt,
      Cs(Φ0)(M) = Cs(Φ0)(M0) + Cs(Φ0)(M2) if M0 = ff

(Cs4) If M ≡ pi(M1, . . . , Mn) with pi a recursive variable of E, then
      Cs(Φ0)(M) = Cs(Φ0)(M1) + · · · + Cs(Φ0)(Mn) + Cs(Φ0)(Ei(M1, . . . , Mn))

◮ If E simulates a program E−, then Cs(Φ0)(E0(x)) "agrees" with the number-of-Φ0-calls complexity of E−
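The counting idea behind clause (Cs2) can be mimicked in ordinary Python by wrapping the primitives. This framing is my own (a runtime counter rather than the recursion on terms), and the countdown program and the choice Φ0 = {eq0} are assumptions for the example.

```python
# Charge 1 per call to a primitive in a chosen set Phi0, ignoring all
# other primitives, in the spirit of clause (Cs2). My own framing, not
# the book's machinery.

counter = {"calls": 0}
PHI0 = {"eq0"}                      # count only eq0-calls, ignore pred

def counted(name, fn):
    def wrapper(*args):
        if name in PHI0:
            counter["calls"] += 1   # one Phi0-call
        return fn(*args)
    return wrapper

eq0 = counted("eq0", lambda n: n == 0)
pred = counted("pred", lambda n: n - 1)

def p(x):
    # p(x) = if eq0(x) then tt else p(pred(x))
    return True if eq0(x) else p(pred(x))

p(5)
print(counter["calls"])  # eq0 is tested on 5, 4, 3, 2, 1, 0 -> 6
```

Changing PHI0 to {"eq0", "pred"} would count both primitives, which is the measure calls(Φ) used below.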
Yiannis N. Moschovakis: The fixed point theory of complexity 7/10
The mergesort algorithm

I Suppose L is ordered by ≤, L∗ is the set of all sequences from L, L∗ms = (L∗, half1, half2, ≤) is the indicated expansion of the standard Lisp structure L∗, and consider the recursive program

p(u) where
  p(u) = if (tail(u) = nil) then u
         else q(p(half1(u)), p(half2(u))),
  q(w, v) = if (w = nil) then v
            else if (v = nil) then w
            else if (head(w) ≤ head(v)) then cons(head(w), q(tail(w), v))
            else cons(head(v), q(w, tail(v)))

which expresses the mergesort algorithm on L∗ms

◮ The mergesort computes the sorted version sort(u) of each sequence u ∈ L∗ so that calls(≤)(u) ≤ |u| log |u| (|u| ≥ 2)

I It is well known that this is asymptotically the best upper bound for sorting by "deterministic comparison algorithms"
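The program above transcribes directly into Python; the version below is my own transcription, instrumented to count the calls to ≤ so that the bound calls(≤)(u) ≤ |u| log |u| can be checked on an example.

```python
import math

# The recursive program p, q above, transcribed into Python (this
# transcription is mine): half1/half2 split a list into its two halves,
# and each use of <= in the merge is counted.

comparisons = {"n": 0}

def merge(w, v):                     # the recursive q(w, v)
    if not w:
        return v
    if not v:
        return w
    comparisons["n"] += 1            # one call to <=
    if w[0] <= v[0]:
        return [w[0]] + merge(w[1:], v)
    return [v[0]] + merge(w, v[1:])

def mergesort(u):                    # the recursive p(u)
    if len(u) <= 1:                  # tail(u) = nil: at most one element
        return u
    mid = (len(u) + 1) // 2
    return merge(mergesort(u[:mid]), mergesort(u[mid:]))

u = [7, 3, 9, 1, 6, 2, 8, 5]
assert mergesort(u) == sorted(u)
# calls(<=)(u) <= |u| log|u| for |u| >= 2:
assert comparisons["n"] <= len(u) * math.log2(len(u))
```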
Yiannis N. Moschovakis: The fixed point theory of complexity 8/10
[Diagram: the computation tree of a term M. Leaves are tt, ff, x and φ; a node φ(M1, . . . , Mn) has children M1, . . . , Mn; a node if M0 then M1 else M2 has child M0 and then M1 (if M0 = tt) or M2 (if M0 = ff); a node pi(M1, . . . , Mn) has children M1, . . . , Mn and Ei(M1, . . . , Mn).]
◮ D(M) is the depth of the computation tree for M
Yiannis N. Moschovakis: The fixed point theory of complexity 9/10
Complexity inequalities and Tserunyan's Theorem

◮ For a fixed Φ-structure A and a recursive Φ-program E,

calls(x) ≤ p-calls(x) ≤ time(x) ≤ p-time(x) ≤ (ℓ + 1)·p-calls(x) ≤ (ℓ + 1)^(d(x)+1)

where calls = calls(Φ), d(x) = D(E0(x)) and ℓ = the largest arity in E

⋆ (Tserunyan 2013) For every recursive Φ-program E, there is a constant Ks such that for every Φ-structure A and every x ∈ An, if den(A, E(x))↓, then

callsE(x) ≤ timeE(x) ≤ Ks + Ks·callsE(x)

I timeE(x) = #(logical calls)E(x) + calls(Φ)E(x) ≈ Ks·calls(Φ)E(x)

I Perhaps this is why lower bound results are most often proved by counting calls to the primitives
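The shape of Tserunyan's bound can be checked by hand on one toy program, p(x) = if eq0(x) then tt else p(pred(x)) over (N; eq0, pred). The closed forms below were worked out by hand from the (Ls) and (Cs) clauses for this one example; they are illustrations, not general formulas.

```python
# Hand-check of calls <= time <= Ks + Ks*calls on the countdown program
# p(x) = if eq0(x) then tt else p(pred(x)) over (N; eq0, pred).
# The closed forms are specific to this example (computed by hand).

def time_p(n):
    return 4 * n + 3        # sequential logical complexity of p(n)

def calls_p(n):
    return 2 * n + 1        # calls to the primitives eq0 and pred

Ks = 2                      # a witnessing constant for this program
for n in range(1000):
    assert calls_p(n) <= time_p(n) <= Ks + Ks * calls_p(n)
print("calls <= time <= Ks + Ks*calls holds for n < 1000")
```

Here Ks = 2 works because 4n + 3 ≤ 2 + 2(2n + 1) = 4n + 4 for all n.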
Yiannis N. Moschovakis: The fixed point theory of complexity 10/10