SLIDE 1
Computability Theory and Asymptotic Density
Denis R. Hirschfeldt, University of Chicago
Groups and Computation: In Honor of Paul Schupp's 80th Birthday
SLIDE 5
Cantor space and relative computability
We work in the space 2ω of infinite binary sequences, identifying A ∈ 2ω with {n : A(n) = 1}. For σ ∈ 2<ω, let [σ] = {A ∈ 2ω : σ ≺ A}. These are the basic open sets of the standard topology on 2ω. Let µ be the measure on 2ω defined by µ([σ]) = 2^−|σ|.
We write A ≤T B to mean that A is computable relative to B, and A ≡T B if A ≤T B and B ≤T A. The ≡T-equivalence classes are the (Turing) degrees.
SLIDE 9
Describing a set
We consider partial descriptions of elements of 2ω. For an input n, such a description might:
- give an answer (correctly or not);
- never give an answer;
- declare that it will not give an answer.
A description of A ∈ 2ω is a partial function ∆ : N → {0, 1, □}, where the value □ declares that no answer will be given. We write ∆(n)↓ if ∆(n) is defined, and ∆(n)↑ otherwise. When might we say that ∆ describes A “almost everywhere”?
SLIDE 11
Asymptotic density
Let S ⊆ N. The upper (asymptotic) density of S is ρ̄(S) = lim supₙ |S ∩ [0, n)| / n. The lower (asymptotic) density of S is ρ̲(S) = lim infₙ |S ∩ [0, n)| / n. If ρ̄(S) = ρ̲(S) then S has (asymptotic) density ρ(S) = ρ̄(S) = ρ̲(S).
We think of sets of density 0 as negligible.
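As an illustration (an aside, not from the slides), a minimal Python sketch can compute the finite ratios |S ∩ [0, n)| / n whose lim sup and lim inf are the upper and lower densities:

```python
from fractions import Fraction

def partial_densities(S, N):
    """Return the sequence |S ∩ [0, n)| / n for n = 1, ..., N,
    where S is given as a membership predicate on the naturals."""
    count = 0
    ratios = []
    for n in range(1, N + 1):
        if S(n - 1):          # is n-1 an element of S?
            count += 1
        ratios.append(Fraction(count, n))
    return ratios

# The even numbers have density 1/2 ...
evens = partial_densities(lambda n: n % 2 == 0, 10_000)
print(evens[-1])  # 1/2

# ... while the powers of 2 have density 0: the ratios tend to 0.
pows = partial_densities(lambda n: n > 0 and n & (n - 1) == 0, 10_000)
print(pows[-1])   # 7/5000
```

For a set like ⋃ₙ [2^n, 2^{n+1}) with n ranging over every other natural number, the same ratios oscillate, so the upper and lower densities differ and the set has no density.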
SLIDE 16
Asymptotic descriptions
There are three ways a description ∆ of A can be incorrect at n:
- ∆(n)↑
- ∆(n)↓ = 1 − A(n)
- ∆(n)↓ = □
Let D = {n : ∆(n)↑}, let M = {n : ∆(n)↓ = 1 − A(n)}, and let R = {n : ∆(n)↓ = □}. ∆ is a dense description of A if D ∪ M ∪ R has density 0. If ∆ is a dense description then it is:
- a generic description of A if M ∪ R = ∅;
- a coarse description of A if D ∪ R = ∅;
- an effective dense description of A if D ∪ M = ∅.
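As an aside (not from the slides), the error sets D, M, R can be tabulated over an initial segment for a toy description; here divergence ∆(n)↑ is modeled by the Python value None and □ by a string token:

```python
BOX = "□"  # the "declare no answer" value

def error_sets(delta, A, N):
    """Return the error sets D, M, R of delta as a description of A,
    restricted to [0, N). A is a 0/1-valued membership function."""
    D, M, R = set(), set(), set()
    for n in range(N):
        v = delta(n)
        if v is None:
            D.add(n)          # models ∆(n)↑
        elif v == BOX:
            R.add(n)          # ∆(n)↓ = □
        elif v != A(n):
            M.add(n)          # ∆(n)↓ = 1 − A(n)
    return D, M, R

# Toy example: A = powers of 2; the description answers 0 everywhere,
# so it is wrong exactly on the density-0 set of powers of 2.
A = lambda n: 1 if n > 0 and n & (n - 1) == 0 else 0
D, M, R = error_sets(lambda n: 0, A, 1000)
print(len(D), len(M), len(R))  # 0 10 0
```

Since D = R = ∅ and M has density 0, this constant-0 function is a coarse description of the set of powers of 2.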
SLIDE 18
Asymptotic computability
- A is densely computable if it has a computable dense description.
- A is generically computable if it has a computable generic description.
- A is coarsely computable if it has a computable coarse description.
- A is effectively densely computable if it has a computable effective dense description.
These notions can be relativized to define dense computability relative to X, etc.
SLIDE 21
Asymptotic computability: history and prehistory Generic computability was introduced by Kapovich, Myasnikov, Schupp, and Shpilrain (2003). It was studied by Jockusch and Schupp (2012), who also studied coarse computability. Dense and effective dense computability were studied by Astor, Hirschfeldt, and Jockusch (ta). Meyer (1973) defined effective dense computability, and asked a question answered by Lynch (1974). Terwijn (1998) studied coarse computability.
SLIDE 22
Relationships between notions of asymptotic computability

                 computable
                     ↓
     effectively densely computable
          ↙                    ↘
  generically                coarsely
  computable                 computable
          ↘                    ↙
          densely computable

No implications other than the ones shown hold in general.
SLIDE 27
Examples from Jockusch and Schupp
Every Turing degree contains a set that is effectively densely computable: given X, consider {2^n : n ∈ X}.
Every nontrivial Turing degree contains a set that is not densely computable: given X, consider ⋃_{n∈X} [2^n, 2^{n+1}).
A set is simple if it is co-infinite and computably enumerable, and its complement does not contain an infinite c.e. set. A simple set of density 0 is coarsely computable but not generically computable.
There is a c.e. set that is generically computable but not coarsely computable.
{A : A is densely computable} has measure 0 and is meager.
SLIDE 32
Asymptotic reducibilities
A is coarsely reducible to B, written A ≤c B, if every coarse description of B computes a coarse description of A. There are nonuniform and uniform versions of this reducibility, but we will ignore the distinction.
A ≤c ∅ iff A is coarsely computable. A ≡c B if A ≤c B and B ≤c A. The ≡c-equivalence classes are the coarse degrees.
We can similarly define generic reducibility ≤g, dense reducibility ≤d, and effective dense reducibility ≤ed.
SLIDE 36
Upper cones and minimal pairs in the Turing degrees
Thm (de Leeuw, Moore, Shannon, and Shapiro / Sacks). If A is not computable then µ({X : A ≤T X}) = 0.
If Y is not computable then {A : A ≤T Y} is countable, so
µ(⋃_{0 <T A ≤T Y} {X : A ≤T X}) = 0.
Thus there is an X s.t. if A ≤T X, Y then A is computable. We say that the Turing degrees of X and Y form a minimal pair. By this argument, most pairs of Turing degrees are minimal pairs.
SLIDE 39
Upper cones and minimal pairs in the coarse degrees
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). If A ≰c ∅ then µ({X : A is coarsely computable relative to X}) = 0. So proper upper cones in the coarse degrees have measure 0.
Cor (Hirschfeldt, Jockusch, Kuyper, and Schupp). There are X, Y ≰c ∅ s.t. any A that is coarsely computable relative both to X and to Y is coarsely computable. In particular, if A ≤c X, Y then A ≤c ∅, so there are minimal pairs in the coarse degrees.
Astor, Hirschfeldt, and Jockusch showed that these facts also hold for dense computability.
SLIDE 43
Upper cones and minimal pairs in the generic degrees
Thm (Astor, Hirschfeldt, and Jockusch). If A ≰g ∅ then µ({X : A is generically computable relative to X}) = 0. So proper upper cones in the generic degrees have measure 0.
Thm (Igusa). If X, Y ≰g ∅ then there is an A ≰g ∅ s.t. A is generically computable relative both to X and to Y.
Open Question. Are there minimal pairs in the generic degrees?
Astor, Hirschfeldt, and Jockusch showed that the situation is the same for effective dense computability.
SLIDE 48
Algorithmic randomness
We can use computability theory to define notions of randomness for individual elements of 2ω. Early attempts focused on specific statistical tests. Martin-Löf gave the first robust definition, based on the idea of considering all (theoretically) performable tests.
A Martin-Löf test is a measure-0 set U ⊆ 2ω, given as an intersection of rapidly shrinking effectively open sets. X ∈ 2ω passes this test if X ∉ U. X is 1-random (or Martin-Löf random) if it passes every ML-test.
SLIDE 54
Algorithmic randomness
By varying the complexity of tests, we obtain a hierarchy:
1-random ⇐ weakly 2-random ⇐ 2-random ⇐ weakly 3-random ⇐ · · ·
We can also relativize these notions.
Write X = X0 ⊕ X1 ⊕ · · · ⊕ Xn−1 to mean that Xk = {i : ni + k ∈ X}.
Thm (van Lambalgen). X = X0 ⊕ X1 is 1-random iff X0 and X1 are 1-random relative to each other. In this case, X0 and X1 are “a bit more random” than X. This theorem generalizes to X = X0 ⊕ X1 ⊕ · · · ⊕ Xn−1.
SLIDE 59
Upper cones and minimal pairs in the Turing degrees revisited
Thm (de Leeuw, Moore, Shannon, and Shapiro / Sacks). If A is not computable then µ({X : A ≤T X}) = 0.
Cor. If A ≰T ∅ and X is weakly 2-random relative to A then A ≰T X.
Cor (Kautz). If X and Y are weakly 2-random relative to each other then the Turing degrees of X and Y form a minimal pair.
Let ∅′ be the Halting Problem.
Thm (Kučera). If X, Y ≤T ∅′ are 1-random, then their degrees do not form a minimal pair.
A is a base for 1-randomness if A ≤T X for some X that is 1-random relative to A.
Cor (Kučera). There is a noncomputable base for 1-randomness.
SLIDE 62
Upper cones and minimal pairs in the coarse degrees revisited
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). If A ≰c ∅ then µ({X : A is coarsely computable relative to X}) = 0. So proper upper cones in the coarse degrees have measure 0.
Cor. If A is not coarsely computable and X is weakly 3-random relative to A then X does not compute a coarse description of A.
Cor (Hirschfeldt, Jockusch, Kuyper, and Schupp). If X and Y are weakly 3-random relative to each other then the coarse degrees of X and Y form a minimal pair.
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). There exist relatively 2-random X, Y whose (nonuniform) coarse degrees do not form a minimal pair.
SLIDE 66
Kolmogorov complexity
Kolmogorov complexity measures the complexity of a finite object in terms of the length of its shortest description. We can think of σ ∈ 2<ω as random if it has no descriptions shorter than itself, and as far from random if it has a very short description.
Let K(σ) be the prefix-free Kolmogorov complexity of σ ∈ 2<ω. Let X ↾ n be the first n bits of X ∈ 2ω. Schnorr showed that X is 1-random iff K(X ↾ n) ≥ n − O(1).
SLIDE 72
K-triviality
K(n) ≤ K(A ↾ n) + O(1) for any A ∈ 2ω. A is K-trivial if K(A ↾ n) ≤ K(n) + O(1). Every computable set is K-trivial. Solovay showed that noncomputable K-trivials exist.
The K-trivials form a Turing ideal: they are closed downwards under ≤T and are closed under ⊕.
Thm (Nies). A is K-trivial iff every 1-random set is 1-random relative to A.
Thm (Nies and Hirschfeldt). A is K-trivial iff K(σ) ≤ K^A(σ) + O(1).
Thm (Hirschfeldt, Nies, and Stephan). A is K-trivial iff A is a base for 1-randomness.
SLIDE 77
Algorithmically random sets as oracles
Thm (Kučera / Gács). Every set is computable from some 1-random set.
Thm (Downey, Nies, Weber, and Yu). Every weakly 2-random set forms a minimal pair with ∅′, and in particular does not compute any noncomputable c.e. set.
1-randoms that do not compute ∅′ are “more random” than those that do, as quantified by Franklin and Ng.
Thm (Hirschfeldt, Nies, and Stephan). If X ≱T ∅′ is 1-random and A ≤T X is c.e. then A is K-trivial.
Thm (Bienvenu, Day, Greenberg, Kučera, Miller, Nies, and Turetsky). There is a 1-random X ≱T ∅′ that computes every K-trivial set.
SLIDE 83
Subclasses of the K-trivials
Many notions of randomness-theoretic weakness have been shown to be equivalent to K-triviality. Recently, more subclasses of the K-trivials have been uncovered.
Thm (Kučera). If X, Y ≤T ∅′ are 1-random, then their degrees do not form a minimal pair.
Let X ⊕ Y ≤T ∅′ be 1-random and let A ≤T X, Y. By van Lambalgen’s Theorem, X is 1-random relative to Y, and hence relative to A. Thus A is a base for 1-randomness, and hence is K-trivial.
Thm (Bienvenu, Greenberg, Kučera, Nies, and Turetsky). There is a K-trivial that is not computable from both halves of any 1-random.
SLIDE 88
Algorithmic randomness, K-triviality, and information coding
Let X_c = {A : A is computable from each coarse description of X}. If X is random, we expect X not to code information robustly, and hence expect X_c to be trivial.
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). If X is 1-random then every A ∈ X_c is K-trivial.
Cor. If X is weakly 2-random then X_c contains only computable sets.
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). If X ≤T ∅′ is 1-random then X_c has a noncomputable element.
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). There is a K-trivial that is not in X_c for any 1-random X.
SLIDE 91
Robust codability and K-triviality
Let C = ⋃ {X_c : X is 1-random}. As mentioned above, if A ∈ C then A is K-trivial.
The proof shows that there is a splitting X = X1 ⊕ · · · ⊕ Xn s.t. A is computable from every join of n − 1 many of the Xi. By van Lambalgen’s Theorem, such a join is 1-random, so A is a base for 1-randomness, and hence is K-trivial.
SLIDE 96
k/n-bases
A is a k/n-base if there is a 1-random X1 ⊕ · · · ⊕ Xn s.t. A is computable from every join of k many of the Xi.
As mentioned above, if A ∈ C then A is an (n−1)/n-base for some n.
Thm (Greenberg, Miller, and Nies). A ∈ C iff A is an (n−1)/n-base for some n.
Thm (Greenberg, Miller, and Nies). The class of k/n-bases depends only on the value of k/n as a rational.
Thm (Greenberg, Miller, and Nies). The p-bases for p ∈ Q ∩ (0, 1) form a proper hierarchy of subideals of the K-trivials.
Thm (Hirschfeldt, Jockusch, Kuyper, and Schupp). The union of this hierarchy is also a proper subset of the K-trivials.