Brownian Motion and Kolmogorov Complexity, Bjørn Kjos-Hanssen (PowerPoint presentation)

SLIDE 1

Brownian Motion and Kolmogorov Complexity

Bjørn Kjos-Hanssen

University of Hawaii at Manoa

Logic Colloquium 2007

SLIDES 2–5

The Church-Turing thesis (1930s)

◮ A function f : N → N is computable by an algorithm ⇔ f is computable by a Turing machine.

◮ “Algorithm”: an informal, intuitive concept.

◮ “Turing machine”: a precise mathematical concept.

SLIDES 6–11

Random real numbers

◮ A number is random if it belongs to no set of measure zero. (?)

◮ But for any number x, the singleton set {x} has measure zero.

◮ Must restrict attention to a countable collection of measure zero sets.

◮ The “computable” measure zero sets. Various definitions.

◮ Definition of random real numbers motivated by the Church-Turing thesis.

SLIDE 12

Mathematical Brownian Motion

◮ The basic process for modeling the stock market in Mathematical Finance, and important in physics and biology.

SLIDES 13–14

Brownian Motion

Figure: Botanist Robert Brown (1773–1858)

Pollen grains suspended in water perform a continual swarming motion.

SLIDE 15

Brownian Motion?

Figure: The fluctuations of the CAC40 index

SLIDE 16

Mathematical Brownian Motion

A path of Brownian motion is a function f ∈ C[0, 1] or f ∈ C(R) that is typical with respect to Wiener measure.

SLIDES 17–20

Mathematical Brownian Motion

The Wiener measure is characterized by the following properties.

◮ Independent increments: f(1999) − f(1996) and f(2005) − f(2003) are independent random variables. But f(1999) and f(2005) are not independent.

◮ f(t) is a normally distributed random variable with mean 0 and variance t.

◮ Stationarity: f(1) and f(2006) − f(2005) have the same probability distribution.
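The three defining properties above can be checked empirically on simulated paths. The following sketch (standard-library Python; the function name and parameters are ours, not from any cited work) builds paths from independent Gaussian increments and estimates the mean, variance, and increment covariance:

```python
import math
import random

def brownian_path(n_steps, t_max=1.0, rng=None):
    """Sample f(0), f(dt), ..., f(t_max) by summing independent
    Gaussian increments with variance dt (the standard construction)."""
    rng = rng or random.Random()
    dt = t_max / n_steps
    path = [0.0]
    for _ in range(n_steps):
        path.append(path[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return path

rng = random.Random(2007)
n_paths, n_steps = 4000, 100
paths = [brownian_path(n_steps, rng=rng) for _ in range(n_paths)]

# f(1/2) should have mean ~0 and variance ~1/2.
vals = [p[50] for p in paths]
mean = sum(vals) / n_paths
var = sum((v - mean) ** 2 for v in vals) / n_paths

# Increments over disjoint intervals should be (nearly) uncorrelated.
inc1 = [p[30] - p[0] for p in paths]    # f(0.3) - f(0)
inc2 = [p[100] - p[70] for p in paths]  # f(1.0) - f(0.7)
cov = sum(a * b for a, b in zip(inc1, inc2)) / n_paths
```

With 4000 paths the estimates come out close to the theoretical values (mean ≈ 0, variance ≈ 1/2, covariance ≈ 0), illustrating the independent-increments and variance-t properties on finite data.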

SLIDES 21–23

Brownian Motion and Random Real Numbers

◮ Definition of Martin-Löf random continuous functions with respect to Wiener measure: Asarin (1986).

◮ Work by Asarin, Pokrovskii, Fouché.

SLIDE 24

Khintchine’s Law of the Iterated Logarithm

The Law of the Iterated Logarithm (LIL) holds for f ∈ C[0, 1] at t ∈ [0, 1] if

lim sup_{h→0} |f(t + h) − f(t)| / √(2|h| log log(1/|h|)) = 1.
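A discrete analogue of the LIL can be watched numerically: for a simple ±1 random walk Sₙ, Khintchine’s theorem gives lim sup_n Sₙ/√(2n log log n) = 1 almost surely. The sketch below (our own illustration, standard library only) tracks the running maximum of that ratio along one sampled walk; it stays of order 1 rather than drifting to 0 or ∞:

```python
import math
import random

# Track the running maximum of |S_n| / sqrt(2 n log log n)
# along one trajectory of a simple +/-1 random walk.
rng = random.Random(1)
s, best = 0, 0.0
for n in range(1, 200_001):
    s += rng.choice((-1, 1))
    if n >= 10:  # need n > e so that log log n is defined and positive
        best = max(best, abs(s) / math.sqrt(2 * n * math.log(math.log(n))))
```

The LIL asserts the lim sup is exactly 1 only in the n → ∞ limit; a finite run can merely hint at this, so any numerical check has to be deliberately loose.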

SLIDES 25–28

Theorem (Khintchine)
Fix t. Then almost surely, the LIL holds at t.

Corollary (by Fubini’s Theorem)
Almost surely, the LIL holds almost everywhere.

Theorem (K and Nerode, 2006)
For each Schnorr random Brownian motion, the LIL holds almost everywhere. This answered a question of Fouché.

◮ Method: use the Wiener–Carathéodory measure algebra isomorphism theorem to translate the problem from C[0, 1] into more familiar terrain: [0, 1].

SLIDES 29–33

[Figure: successive slides build a binary tree of events determined by conditions such as f(1/2) < 5, f(1/2) ≥ 5, f(2/3) < −9, and f(2/3) ≥ −9.]

SLIDES 34–36

Kolmogorov complexity

◮ The complexity K(σ) of a binary string σ is the length of the shortest description of σ by a fixed universal Turing machine having prefix-free domain.

◮ For a real number x = 0.x₁x₂··· we can look at the complexity of the prefixes x₁···xₙ.
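K itself is not computable, but any real-world compressor yields a computable upper bound on description length, which makes the definition easy to experiment with. A quick sketch using zlib (our choice of compressor; nothing here is specific to the talk):

```python
import random
import zlib

# A compressor gives an upper bound on the length of a description:
# highly regular strings compress to almost nothing, while strings of
# coin flips are typically incompressible (one hallmark of randomness).
rng = random.Random(0)
regular = b"01" * 5000                                     # very regular, 10000 bytes
flips = bytes(rng.randrange(2) + ord("0") for _ in range(10_000))

bound_regular = len(zlib.compress(regular, 9))
bound_flips = len(zlib.compress(flips, 9))
```

Here bound_regular comes out tiny, while bound_flips stays near the information-theoretic floor of 10000 bits = 1250 bytes; no lossless compressor can beat that floor on a typical string of coin flips.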

SLIDES 37–38

Definition
Let f ∈ C[0, 1], t ∈ [0, 1], and c ∈ R.
t is a c-fast time of f if

lim sup_{h→0} |f(t + h) − f(t)| / √(2|h| log(1/|h|)) ≥ c.

t is a c-slow time of f if

lim sup_{h→0} |f(t + h) − f(t)| / √|h| ≤ c.

◮ Both slow and fast times almost surely exist (and form dense sets) [Orey and Taylor 1974; Davis, Greenwood and Perkins 1983].

SLIDES 39–42

Slow times

◮ No time given in advance is slow, but the set of slow times has positive Hausdorff dimension.

◮ Any set of positive Hausdorff dimension contains some times of high Kolmogorov complexity.

◮ But actually, all slow points have high Kolmogorov complexity.

◮ Can prove this using either computability theory or probability theory.

SLIDES 43–44

Definition
A set is c.e. if it is computably enumerable.
A set A ⊆ N is infinitely often c.e. traceable if there is a computable function p(n) such that for all f : N → N, if f is computable in A then there is a uniformly c.e. sequence of finite sets Eₙ of size ≤ p(n) such that ∃∞n f(n) ∈ Eₙ.

SLIDES 45–47

Definition
An infinite binary sequence x is autocomplex if there is a function f : N → N with limₙ f(n) = ∞, f computable from x, and K(x ↾ n) ≥ f(n).
A sequence x is Martin-Löf random if x ∉ ∩ₙUₙ for every uniformly Σ⁰₁ sequence of open sets Uₙ with µUₙ ≤ 2⁻ⁿ.
A sequence x is Kurtz random if x ∉ C for every Π⁰₁ class C of measure 0.

SLIDES 48–52

Theorem (K, Merkle, Stephan)
x is infinitely often c.e. traceable iff x is not autocomplex.

Lemma
If x is not autocomplex then every Martin-Löf random real is Kurtz random relative to x.

This translates to:

◮ If t ∈ [0, 1] is not of high Kolmogorov complexity, then each sufficiently random f ∈ C[0, 1] is such that t is not a slow point of f.

Thus we have a computability-theoretic proof that all slow points are almost surely of high Kolmogorov complexity. There are also probability-theoretic methods for proving such things, which can even yield stronger results. On the other hand, these methods can be applied to computability-theoretic problems.

SLIDE 53

Two notions of random closed set

Two probability distributions on closed subsets of Cantor space.

1. “Random closed set” (Barmpalias, Brodhead, Cenzer, Dashti, and Weber (2007)): probability 1/3 each of keeping only the left branch, keeping only the right branch, or keeping both branches.

2. Percolation limit set (Hawkes, R. Lyons (1990)): probability 2/3 of keeping the left branch, and independently probability 2/3 of keeping the right branch.

SLIDES 54–63

Bits: 120112120 (revealed one digit per slide)

SLIDE 64

Let γ = log₂(3/2) and α = 1 − γ = log₂(4/3). Barmpalias, Brodhead, Cenzer, Dashti, and Weber define (Martin-Löf) random closed sets and show that they all have dimension α. We denote Hausdorff dimension by dim and effective Hausdorff dimension by dim∅. Then

dim∅(x) = lim infₙ K(x ↾ n)/n = sup{s : x is s-Martin-Löf random}.
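Effective dimension can be illustrated (not computed) by replacing K with a compressor: for i.i.d. coin flips with bias p, K(x ↾ n)/n tends almost surely to the Shannon entropy H(p), and a compressor’s rate behaves similarly. A rough sketch, with zlib standing in for K (helper names are ours):

```python
import math
import random
import zlib

def entropy(p):
    """Binary Shannon entropy H(p) in bits; for i.i.d. bias-p bits this
    equals the effective dimension of the sequence almost surely."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def compression_rate(bits):
    """Compressed output bits per source bit: a computable upper-bound
    stand-in for K(x restricted to n) / n, not K itself."""
    data = bytes(b + ord("0") for b in bits)
    return 8 * len(zlib.compress(data, 9)) / len(bits)

rng = random.Random(7)
fair = [rng.randrange(2) for _ in range(10_000)]
biased = [1 if rng.random() < 0.1 else 0 for _ in range(10_000)]

rate_fair = compression_rate(fair)      # near 1 bit per bit
rate_biased = compression_rate(biased)  # well below 1, but above H(0.1)
```

The fair-coin sequence resists compression (rate near 1, the analogue of dimension 1), while the biased sequence compresses toward its entropy H(0.1) ≈ 0.47, the analogue of a real of intermediate effective dimension.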
SLIDE 65

We define a strengthening of Reimann and Stephan’s strong γ-randomness: vehement γ-randomness. Both notions coincide with Martin-Löf γ-randomness for γ = 1.

Definition
Let ρ : 2^{<ω} → R, ρ(σ) = 2^(−|σ|γ) for some fixed γ ∈ [0, 1]. For a set of strings V,

ρ(V) := ∑_{σ∈V} ρ(σ) and [V] := ∪{[σ] : σ ∈ V}.

SLIDE 66

Definition
A ML-γ-test is a uniformly c.e. sequence (Uₙ)_{n<ω} of sets of strings such that for all n, ρ(Uₙ) ≤ 2⁻ⁿ.
A strong ML-γ-test is a uniformly c.e. sequence (Uₙ)_{n<ω} of sets of strings such that (∀n)(∀V ⊆ Uₙ)[V prefix-free ⇒ ρ(V) ≤ 2⁻ⁿ].
A vehement ML-γ-test is a uniformly c.e. sequence (Uₙ)_{n<ω} such that for each n there is a set of strings Vₙ with [Vₙ] = [Uₙ] and ρ(Vₙ) ≤ 2⁻ⁿ.

SLIDE 67

Lemma
Vehemently γ-random ⇒ strongly γ-random ⇒ γ-random.

SLIDE 68

Theorem
Let γ = log₂(3/2) and let x be a real. We have (1)⇔(2)⇒(3)⇒(4)⇒(5).

1. x is 1-random;
2. x is vehemently 1-random;
3. x is vehemently (γ + (1−γ)/2)-random, where γ + (1−γ)/2 ≈ 0.8;
4. x belongs to some random closed set;
5. x is vehemently γ-random, where γ ≈ 0.6.

Corollary (J. Miller and A. Montalbán)
The implication from (1) to (4).

SLIDE 69

Theorem
Suppose x is a member of a random closed set. Then x is vehemently γ-random.

Proof: Random closed sets are denoted by Γ, whereas S is the set of strings in the tree corresponding to Γ. Let i < 2 and σ ∈ 2^{<ω}. The probability that the concatenation σi is in S given that σ ∈ S is, by definition of the BBCDW model,

P{σi ∈ S | σ ∈ S} = 2/3.

Hence the absolute probability that σ survives is

P{σ ∈ S} = (2/3)^|σ| = (2^(−γ))^|σ| = 2^(−|σ|γ).
SLIDE 70

Suppose x is not vehemently γ-random. So there is some uniformly c.e. sequence Uₙ = {σ_{n,i} : i < ω} such that x ∈ ∩ₙ[Uₙ], and for some U′ₙ = {σ′_{n,i} : i < ω} with [U′ₙ] = [Uₙ],

∑_{i=1}^∞ 2^(−|σ′_{n,i}|γ) ≤ 2⁻ⁿ.

Let Vₙ := {Γ : ∃i σ_{n,i} ∈ S} = {Γ : ∃i σ′_{n,i} ∈ S}.

The first expression shows Vₙ is uniformly Σ⁰₁. The equality is proved using the fact that S is a tree without dead ends.

SLIDE 71

Now

P(Vₙ) ≤ ∑_{i∈ω} P{σ′_{n,i} ∈ S} = ∑_{i∈ω} 2^(−|σ′_{n,i}|γ) ≤ 2⁻ⁿ.

That is, if x ∈ Γ then x belongs to the effective null set ∩_{n∈ω} Vₙ. As Γ is ML-random, this is not the case. End of proof.

SLIDE 72

Corollary
If x belongs to a random closed set, then dim∅(x) ≥ log₂(3/2).

Corollary (BBCDW)
No member of a random closed set is 1-generic.

Theorem
For each ε > 0, each random closed set contains a real x with dim∅(x) ≤ log₂(3/2) + ε.

Corollary (BBCDW)
Not every member of a random closed set is Martin-Löf random.
SLIDE 73

Open problems

We have seen that the members of random closed sets do not coincide with the reals of effective dimension ≥ γ, although (1) they all have dimension ≥ γ and (2) they do not all have dimension ≥ γ + ε for any fixed ε > 0. There are (at least) two possible conjectures, and the answer may help determine whether vehement or ordinary γ-randomness is the most natural generalization of 1-randomness.

Conjecture (1)
The members of random closed sets are exactly the reals x such that for some ε > 0, x is (γ + ε)-random. (That is, x has effective dimension > γ.)

Conjecture (2)
The members of random closed sets are exactly the reals x such that for some ε > 0, x is vehemently (γ + ε)-random.

SLIDES 74–75

Conjecture 1 would imply that (γ + ε)-random ⇒ vehemently γ-random. This seems unlikely, but J. Reimann has shown that (γ + ε)-random ⇒ strongly γ-random.

Thank You