

SLIDE 1

Randomness and analysis: a tutorial

Part I: Randomness notions and almost everywhere theorems

André Nies

CCC 2015, Kochel am See

SLIDE 2

Differentiability

Differentiability of a function f at a real z means that the rate of change ("velocity") of f at z is defined:

    f′(z) = lim_{h→0} (f(z + h) − f(z)) / h.

Weierstrass proved in 1872 that some continuous function is nowhere differentiable.

SLIDE 3

Lebesgue's measure

• In 1904 Lebesgue introduced his measure on the real line ℝ.
• It assigns a size λ(C) ∈ [0, ∞] to all reasonable subsets C of ℝ.
• One can now say that a property holds for almost every real z: the set of exceptions has measure 0.

SLIDE 4

"Almost everywhere" theorem (A)

Some important theorems in analysis assert a property of being well-behaved for almost every real z. For instance, in contrast to Weierstrass' result, we have:

Theorem (Lebesgue, 1904). Let f: [0, 1] → ℝ be non-decreasing. Then the derivative f′(z) exists for almost every real z.

SLIDE 5

"Almost everywhere" theorem (B)

HENRI LEBESGUE, Sur l'intégration des fonctions discontinues, Annales scientifiques de l'E.N.S., 3e série, tome 27 (1910), pp. 361–450; p. 407.

(Facsimile of p. 407; translated from the French:) "... [the points at which the right density is] greater than K (hence different from zero, since K is arbitrary) form a set of measure zero. Now let A be an arbitrary set and E a set of intervals containing A; the points exterior to E at which A does not have right density zero form a set of measure zero; and since this holds for every E containing A, it also holds for the points exterior to A. Let B be the complement of A with respect to a certain interval; the points of B at which A does not have right density zero form, as we have just seen, a set of measure zero. Exchange A and B in the statement of this result; it remains true. Now A + B has density equal to 1 at every point; hence, excepting the points of a set of measure zero, the right density of A equals one at every point of A and zero at every point of B. Reasoning in the same way for the left density, one finally sees that the density of a measurable set equals one at almost every point of that set.

34. That is, it is proved that a function is almost everywhere the derivative of its indefinite integral, when the function takes only the values 0 or 1. Consequently, the theorem follows for functions taking only finitely many different values. Let us pass to the general case and suppose we are dealing with a function f that is never negative. Let E_p be the set of points at which pε ≤ f < (p + 1)ε (p = 0, 1, ...); ε is an arbitrary positive quantity. Let φ_n be the function equal to pε on E_p, for p = 0, 1, ..., n, and equal to zero elsewhere. The derived numbers of the integral of f are at least equal to those of the integral of φ_n; hence, almost everywhere, the derived numbers of the integral of f are equal to or greater than f. Let Φ be the function equal to (p + 1)ε on each E_p. The derived numbers of ∫ f dx not exceeding those of ∫ Φ dx, it would suffice to prove the theorem for the function Φ. Now, if Φ_n is the function defined as being equal to zero on E_0 + E_1 + ... + E_n and equal to Φ ..."

Theorem (Lebesgue Density Theorem, 1910). Let E ⊆ [0, 1] be measurable. For almost every z ∈ [0, 1]: if z ∈ E, then E has density 1 at z.

Intuitively, this means that as we "zoom in" on z, more and more of the neighbourhood of z is in E.

SLIDE 6

Classically (A) implies (B)

(A) Let f: [0, 1] → ℝ be non-decreasing. Then the derivative f′(z) exists for almost every real z.
(B) Let E ⊆ [0, 1] be measurable. For almost every z ∈ [0, 1]: if z ∈ E, then E has density 1 at z.

• Recall that λ(C) denotes the Lebesgue measure of C ⊆ ℝ.
• The non-decreasing function x → λ([0, x] ∩ E) is differentiable at almost every x. Its derivative is the density at x.
• By the regularity of Lebesgue measure, it is sufficient to prove (B) for closed sets E. For such a set it is easy to see that the upper density is 1 at almost every x ∈ E. Hence the full density is 1 for a.e. x ∈ E.

SLIDE 7

Functions of bounded variation

A function f: [a, b] → ℝ is of bounded variation (BV) if

    V(f) = sup Σ_{i=1}^{n−1} |f(t_{i+1}) − f(t_i)| < ∞,

the sup taken over all collections t_1 ≤ t_2 ≤ … ≤ t_n in [a, b].

Examples/Non-examples
BV: non-decreasing functions, Lipschitz functions, x² sin(1/x) (and 0 at x = 0).
Not BV: x sin(1/x).
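The two examples can be probed numerically. The sketch below (an illustration of mine, not part of the slides; the sample points t_k = 2/((2k+1)π), where sin(1/t_k) = ±1, are my own choice) sums |f(t_{i+1}) − f(t_i)| along one collection of points: for x sin(1/x) the sums grow without bound like a harmonic series, while for x² sin(1/x) they stay bounded.

```python
import math

def variation_along(f, points):
    """Sum |f(t_{i+1}) - f(t_i)| over consecutive sample points."""
    return sum(abs(f(points[i + 1]) - f(points[i]))
               for i in range(len(points) - 1))

# Points t_k = 2/((2k+1)*pi), where sin(1/t_k) alternates between +1 and -1,
# listed in increasing order.
ts = [2.0 / ((2 * k + 1) * math.pi) for k in range(2000, -1, -1)]

v1 = variation_along(lambda x: x * math.sin(1.0 / x), ts)      # not BV
v2 = variation_along(lambda x: x * x * math.sin(1.0 / x), ts)  # BV

print(v1, v2)  # v1 keeps growing as more points are added; v2 stays bounded
```

Taking more sample points makes v1 arbitrarily large, which is exactly the failure of bounded variation; v2 approaches a finite limit.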

SLIDE 8

How to obtain the result for BV functions from the result for non-decreasing functions

Theorem (Lebesgue, 1904, together with Jordan, 1879). Let f: [0, 1] → ℝ be of bounded variation. Then f′(r) exists for each r outside a set of measure 0 (which depends on f).

To see this, use Jordan's theorem (Cours d'analyse de l'École Polytechnique, 1882–87): each function f of bounded variation is of the form g₀ − g₁ for non-decreasing functions g₀, g₁. Here g₀(x) is the variation of f restricted to [0, x], and g₁ = g₀ − f.

SLIDE 9

History of constructive approaches to the result (1)

Bishop (1967) gives a constructive version of the result that a BV function is differentiable at almost every real. (Foundations of Constructive Analysis, Thm. 7 on page 239.)

SLIDE 10

History of constructive approaches to the result (2)

Demuth (1975) proves the following. Suppose that f is Markov computable: from a computable name for x we can obtain a computable name for f(x). Then f is pseudo-differentiable at each Π₂ number (in classical language: at each Martin-Löf random). Also, outside a certain null set, for each Δ⁰₂ real x, f′(x) is Δ⁰₂ and can be computed from x.

See Thm. 4.1 in "Demuth's path to randomness" by Kučera, N. and Porter, BSL 2015, arxiv.org/abs/1404.4449

SLIDE 11

Demuth's original 1975 result on BV functions

SLIDE 12

2. A brief introduction to algorithmic randomness

10100111000101111010101000010101101111011000010111101010 10010101100011111010110001100111111101100000111001111000 00110011011110100011110100011100101011011001011100010110 01100110001111000010011001011101100100101000001110001111 11100100011000101111110100010111110011011100100110011010 00111111011010101101001101010110000011000001001101011100 . . .

SLIDE 13

Idea in algorithmic randomness

• One defines a notion of algorithmic null set on [0, 1], or the Cantor space 2^ℕ.
• A real z (bit sequence Z ∈ 2^ℕ) is random in a particular sense if it avoids all null sets of this kind.
• There are only countably many null sets of this kind. So almost every z is random in that sense.

Randomness notions relevant in this first part of the tutorial:

    Martin-Löf random ⇒ computably random ⇒ Schnorr random.

These implications are proper.

SLIDE 14

Betting on a bit sequence

Computable betting strategies (martingales) are computable functions M from binary strings to the non-negative reals.

• Let Z be a sequence of bits (often called a "set", i.e. a subset of ℕ). When the player has seen the string σ of the first n bits of Z, she can make a bet q, where 0 ≤ q ≤ M(σ), on what the next bit Z(n) is.
• If she is right, she gets q. Otherwise she loses q. Thus, we have M(σ0) + M(σ1) = 2M(σ) for each string σ.
• She wins on Z if M is unbounded along Z. (These Z form an algorithmic null set.)
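As an illustration (my own sketch, not from the slides), here is one concrete betting strategy: always bet half the current capital on the next bit being 1. Its capital function satisfies the fairness condition M(σ0) + M(σ1) = 2M(σ), and it is unbounded along the (very non-random) sequence 111….

```python
def M(sigma: str) -> float:
    """Capital after the bits of sigma have been revealed, starting from 1.
    The player always bets half her current capital on the next bit being 1."""
    capital = 1.0
    for bit in sigma:
        stake = capital / 2.0
        capital = capital + stake if bit == "1" else capital - stake
    return capital

# Fairness condition: M(sigma0) + M(sigma1) = 2 M(sigma) for every string sigma.
for sigma in ["", "0", "1", "0110", "111"]:
    assert abs(M(sigma + "0") + M(sigma + "1") - 2 * M(sigma)) < 1e-12

print(M("1" * 10))  # capital grows by factor 1.5 per correct guess
```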

SLIDE 15

Computable randomness for bit sequences

A betting strategy M satisfies the "fairness condition" that the average of the values at the two children is the value at the node. We call a sequence of bits computably random if no computable betting strategy (martingale) has unbounded capital along the sequence.

(Figure: a binary tree of capital values — 1 at the root, values such as 0.7, 1.5, 0.8, 0.5, 2.3, 0.2 at deeper nodes — illustrating the fairness condition.)

SLIDE 16

Martin-Löf's 1966 randomness notion for reals

• A Martin-Löf test is an effective sequence (U_m)_{m∈ℕ} of open sets in [0, 1] such that the Lebesgue measure of U_m is at most 2^{−m} (Schnorr randomness: exactly 2^{−m}).
• Intuitively, U_m is an attempt to approximate a real z with accuracy 2^{−m}.
• Z passes the test if Z is not in all U_m.
• Z is called Martin-Löf random if it passes all ML-tests.

(Figure: the unit interval with the open sets U₀, U₁, U₂, U₃, U₄, … shrinking in measure.)

SLIDE 17

Randomness via effective Vitali covers

Let ⟨G_k⟩_{k∈ℕ} be a computable sequence of rational open intervals with |G_k| → 0. The set of points Vitali covered by ⟨G_k⟩_{k∈ℕ} is

    V⟨G_k⟩ = {z : z is in infinitely many G_k's}.

Martin-Löf and Schnorr randomness can also be defined via effective Vitali covers.

• Martin-Löf random: not in any set V⟨G_k⟩ where Σ_k |G_k| < ∞. (See Solovay tests.)
• Schnorr random: not in any set V⟨G_k⟩ where Σ_k |G_k| is a computable real.

SLIDE 18

ML- and Schnorr randomness via martingales

An infinite sequence Z of bits can be "identified" with the real number z = 0.Z in [0, 1] via the binary expansion. So we already have a definition of ML-randomness for bit sequences. Equivalently we can use martingales.

A martingale L is called left-c.e. (or lower semicomputable) if L(σ) is a left-c.e. real uniformly in σ.

    Z is ML-random ⇔ no left-c.e. martingale succeeds on Z.

A martingale L succeeds strongly on Z if there is an order function (i.e. computable, unbounded, non-decreasing) h such that ∃^∞ n [L(Z↾n) ≥ h(n)].

    Z is Schnorr random ⇔ no computable martingale succeeds strongly on Z.

SLIDE 19

The implications are proper (1)

    Martin-Löf random ⇒ computably random, but not conversely.

A set C is called high if ∅″ ≤_T C′. Equivalently, C computes a function that dominates each computable function (Martin, 1966).

Theorem (N., Stephan, Terwijn, 2005). Every high set C Turing computes a set Z that is computably random.

• Let ⟨L_e⟩_{e∈ℕ} be a list of all partial computable martingales.
• Define Z so that the martingale L = Σ_e 2^{−e} L_e is bounded along Z.
• Use highness of C to deal with partiality.

SLIDE 20

The implications are proper (2)

On the other hand, if a computably enumerable set C is Turing above a ML-random, then C is Turing equivalent to the halting problem ∅′ by the "Arslanov Completeness Criterion". There is a high computably enumerable set C <_T ∅′. Therefore

    Martin-Löf random ⇍ computably random.

Another way to separate ML- and computable randomness: use the (prefix-free) Kolmogorov complexity of the initial segments. For ML-random Z we have K(Z↾n) ≥ n − O(1). There is a computably random Y such that K(Y↾n) = O(log n).

SLIDE 21

The implications are proper (3)

    computably random ⇒ Schnorr random, but not conversely.

• First proved by Yongge Wang.
• It is shown by a direct construction (see e.g. N.'s book "Computability and Randomness", Ch. 7).
• N., Stephan, Terwijn (2005) separate the two notions in each high degree.

Note that any separation has to occur within the high degrees:

Theorem (N., Stephan, Terwijn, 2005). If Z is not high and Schnorr random, then Z is ML-random.

SLIDE 22

3. Effective versions of almost everywhere theorems

SLIDE 23

Effective almost everywhere theorems and randomness

The "almost everywhere" theorems didn't tell us whether the given object is well-behaved at a particular real. Now consider the case where the given object is algorithmic in some sense.

• How strong an algorithmic randomness notion for a real z is needed to make the theorem hold at z?
• Will the theorem in fact characterize the randomness notion?

Once this is settled, we can provide "concrete" examples of reals at which the nice behaviour occurs. For instance, Chaitin's Ω is ML-random.

SLIDE 24

Continuing the story of effective a.e. theorems, after Bishop (1967) and Demuth (1975)

Recall Birkhoff's 1939 theorem: Let (X, µ, T) be a measure preserving system, and let f: X → ℝ be measurable. For µ-almost every x, the limit as N → ∞ of the averages of f(T^i(x)) over 0 ≤ i < N exists.

• V'yugin, 1999 (TCS) shows that ML-randomness suffices for the effective Birkhoff theorem. (Note that T: ⊆ X → X only needs to be defined µ-a.e.)
• He uses Bishop, Thm. 6 on page 236, which is closely related to his result on BV (Thm. 7).
• Hoyrup, Rojas, Galatolo 2010–13 develop effective ergodic theory.

SLIDE 25

Schnorr randomness and L¹-computability

Pathak (2009) and Pathak, Rojas, Simpson (2012) proved an effective version of another theorem of Lebesgue (but taking into account only the existence of limits, not the value).

    z ∈ [0, 1]^d is Schnorr random ⇔ for every L¹-computable function g: [0, 1]^d → ℝ, the limit

    lim_{r→0⁺} (1/λ(B_r(z))) ∫_{B_r(z)} g  exists.

The implication ⇐ is also due to Freer, Kjos-Hanssen, N., Stephan.
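For continuous g (a far stronger hypothesis than L¹-computability, so this is only a toy illustration of mine, with my own choice of g), the averaging statement is easy to see numerically in dimension d = 1: the average of g over B_r(z) tends to g(z) as r → 0⁺.

```python
def ball_average(g, z, r, steps=10000):
    """Approximate (1/lambda(B_r(z))) * integral of g over B_r(z) = [z-r, z+r]
    by a midpoint Riemann sum (d = 1)."""
    total = 0.0
    for i in range(steps):
        x = (z - r) + (2 * r) * (i + 0.5) / steps
        total += g(x)
    return total / steps

g = lambda x: x * x   # a continuous test function
z = 0.5
for r in [0.1, 0.01, 0.001]:
    print(r, ball_average(g, z, r))  # tends to g(z) = 0.25 as r shrinks
```

The exact average over [z − r, z + r] here is 0.25 + r²/3, so the error vanishes quadratically in r.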

SLIDE 26

Effective form of the first Lebesgue theorem

A function f: [a, b] → ℝ is of bounded variation if

    V(f) = sup Σ_{i=1}^{n−1} |f(t_{i+1}) − f(t_i)| < ∞,

the sup taken over all collections t_1 ≤ t_2 ≤ … ≤ t_n in [a, b].

Theorem (Brattka, Miller, N.; to appear in TAMS). Let f: [0, 1] → ℝ be non-decreasing and computable. Then

    z is computably random ⇒ f′(z) exists.

• Under the weaker hypothesis that f has bounded variation, f′(z) exists for each Martin-Löf random real z, but not necessarily for each computably random one. (Demuth, 1975; Brattka, Miller, N., to appear.)
• There is some depth here that doesn't show in classical analysis: the Jordan decomposition f = g₀ − g₁ for non-decreasing g_i is not effective!

SLIDE 27

Functions-to-tests

To prove the first result: if f is computable and non-decreasing, we (uniformly in f) build a computable martingale M such that

    f′(z) fails to exist ⇒ M succeeds on z.

(I will give details later when I do the polynomial time computable case.)

Corollary. Each computable non-decreasing function f is differentiable at a (uniformly obtained) computable real.

Proof: Each computable martingale fails on some computable real, which can be obtained uniformly.

SLIDE 28

Converses (tests-to-functions)

• Both the non-decreasing and the bounded variation cases also have converses: if z is not random in the appropriate sense, then some computable function of the respective type fails to be differentiable at z (BMN, to appear).
• So one could take the differentiability properties for classes of effective functions as definitions of randomness notions!

    z is computably random ⇔ each computable non-decreasing function is differentiable at z.
    z is Martin-Löf random ⇔ each computable function of bounded variation is differentiable at z.

SLIDE 29

A new proof of Demuth's result

Here is the proof of Brattka/Miller/N. (TAMS, to appear) of the result of Demuth on BV functions. We get a stronger form:

Let r be a Martin-Löf random real. Suppose f is uniformly computable on the rationals, and f is of bounded variation. Then f′(r) exists.

• By Jordan's result, f = h₀ − h₁ for some non-decreasing functions h₀, h₁.
• One can show that r is Martin-Löf random (hence computably random) relative to some oracle set X encoding such a pair h₀, h₁.
• By the previous theorem, relativized to X, the h_i are both differentiable at r. Thus f′(r) = h₀′(r) − h₁′(r) exists.

SLIDE 30

The strength of the Jordan decomposition theorem

• Note that the pairs h₀, h₁ with f = h₀ − h₁ (not necessarily continuous) can be seen as a Π⁰₁ class P.
• We obtain the decomposition because r is random relative to some member of P (the "low for r" basis theorem).

Results by Greenberg, N., Yokoyama, Slaman (see the 2013 Logic Blog and an upcoming project report by Marcus Triplett) show:

• Jordan decomposition of any continuous BV f into continuous functions h₀, h₁ is equivalent to ACA over RCA.
• Jordan decomposition of any continuous BV f into nondecreasing functions h₀, h₁ is equivalent to WKL over RCA.

SLIDE 31

Randomness notions given by function classes (BMN, to appear)

SLIDE 32

4. Polynomial time randomness and differentiability

SLIDE 33

Special Cauchy names

• A Cauchy name is a sequence of rationals (p_i)_{i∈ℕ} such that ∀k > i |p_i − p_k| ≤ 2^{−i}.
• We represent a real x by a Cauchy name converging to x.

For feasible analysis, we use a compact set of Cauchy names: the signed digit representation of a real. Such Cauchy names, called special, have the form

    p_i = Σ_{k=0}^{i} b_k 2^{−k}, where b_k ∈ {−1, 0, 1}.

(Also, b₀ = 0, b₁ = 1.) So they are given by paths through {−1, 0, 1}^ω, something a resource-bounded Turing machine can process.
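A greedy construction of a special Cauchy name is straightforward (my own sketch; it ignores the slide's normalisation b₀ = 0, b₁ = 1 and just keeps the error at stage k below 2^{−(k+1)}, which gives both the Cauchy condition and convergence to x):

```python
def special_cauchy_name(x: float, n: int):
    """Return (digits, approx) with approx[i] = p_i = sum_{k<=i} b_k 2^{-k},
    b_k in {-1, 0, 1}, chosen greedily so that |x - p_i| <= 2^{-(i+1)}."""
    digits, approx = [], []
    p, r = 0.0, x
    for k in range(n):
        step = 2.0 ** (-k)
        b = max(-1, min(1, round(r / step)))  # best signed digit at scale 2^-k
        p += b * step
        r -= b * step
        digits.append(b)
        approx.append(p)
    return digits, approx

digits, approx = special_cauchy_name(1 / 3, 20)
print(digits[:6])  # e.g. alternating corrections around 1/3
```

By induction, |r| ≤ 2^{−k} on entering stage k, so the rounded digit leaves |r| ≤ 2^{−(k+1)}; the triangle inequality then gives |p_i − p_k| ≤ 2^{−i} for k > i, as required of a Cauchy name.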

SLIDE 34

Polynomial time computable functions

The following has been formulated in equivalent ways by Ker-I Ko (1989), Weihrauch (2000), Braverman (2008).

Definition. A function g: [0, 1] → ℝ is polynomial time computable if there is a polynomial time TM turning every special Cauchy name for x ∈ [0, 1] into a special Cauchy name for g(x).

This means that the first n symbols of a special Cauchy name for g(x) can be computed in time polynomial in n, thereby using polynomially many symbols of the oracle tape that holds a special Cauchy name for x.

SLIDE 35

Examples of polynomial time computable functions

• Functions such as eˣ, sin x are polynomial time computable.
• To see this one uses a rapidly converging approximation sequence, such as eˣ = Σ_n xⁿ/n!.
• As Braverman (2008) points out, eˣ is computable in time O(n³): from O(n³) symbols of x we can, in time O(n³), compute an approximation of eˣ with error ≤ 2^{−n}.
• Better algorithms may exist (e.g. search the 1987 book by J. Borwein and P. Borwein, Pi and the AGM).
• Breutzman, Juedes and Lutz (MLQ, 2004) have given an example of a polynomial time computable function that is nowhere differentiable. It is a variant of the Weierstrass function Σ_n 2^{−n} cos(5ⁿπx).
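The power-series approximation can be checked directly (a toy sketch of mine; a genuine complexity bound needs the bit-level analysis Braverman gives): for 0 ≤ x ≤ 1, summing xᵏ/k! until the next term drops below 2^{−(n+1)} already yields accuracy 2^{−n}, since the tail is bounded by a geometric series below its first term.

```python
import math

def exp_approx(x: float, n: int) -> float:
    """Partial sum of e^x = sum_k x^k / k!, stopped once the next term
    drops below 2^-(n+1); valid for 0 <= x <= 1."""
    total, term, k = 0.0, 1.0, 0
    while term >= 2.0 ** (-(n + 1)):
        total += term
        k += 1
        term *= x / k
    return total

for x in [0.0, 0.5, 1.0]:
    print(x, exp_approx(x, 30), math.exp(x))  # agree to within 2^-30
```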

SLIDE 36

Polynomial time randomness

Recall that a betting strategy, or martingale, is a function M: 2^{<ω} → ℝ⁺₀ such that

    M(σ0) + M(σ1) = 2M(σ) for each string σ.

Definition. A betting strategy M: 2^{<ω} → ℝ is called polynomial time computable if from a string σ and an i ∈ ℕ we can, in time polynomial in |σ| + i, compute the i-th component of a special Cauchy name for M(σ).

In this case we can compute a polynomial time martingale in base 2 dominating M (Schnorr / Figueira–N.).

Definition. We say Z ∈ 2^ℕ is polynomial time random if no polynomial time betting strategy succeeds on Z.

SLIDE 37

Polynomial time randomness

Definition. We say Z ∈ 2^ℕ is polynomial time random if no polynomial time betting strategy succeeds on Z.

• This was first studied in Yongge Wang's 1992 thesis (Uni Heidelberg).
• Figueira, N. (2013) showed that the notion is base invariant: it is about reals rather than sequences of digits for a fixed base (such as 2).

Proposition (Existence in super-polynomial time classes). Suppose the function t(n) is time constructible and dominates all polynomials. There is a polynomial time random Z computable in time O(t(n)) (i.e. the language consisting of the initial segments of Z is O(t)-computable).

SLIDE 38

Lebesgue's Thm (A) and its converse in the polytime setting

Theorem (N., STACS 2014). The following are equivalent for a real z ∈ [0, 1]:
(I) z (written in binary expansion) is polynomial time random;
(II) f′(z) exists for each non-decreasing function f that is polynomial time computable.

• The same method works for primitive recursive randomness/functions, and even computable randomness/computable functions.
• So this also yields a new proof of Brattka/Miller/Nies.

SLIDE 39

Proof of the easy direction (II) → (I)

Suppose that f′(z) exists for each non-decreasing function f that is polynomial time computable. We want to show that z is polynomial time random.

Let S_g(σ) denote the slope of a non-decreasing function g at the basic dyadic interval given by string σ. This is a betting strategy. Essentially, each betting strategy M is of the form S_g for some non-decreasing g. If M is polynomial time then so is g. Since g′(z) exists, M is bounded along z.

(Figure: the graph of g, with Slope = M(1) over [0.1, 1.0], Slope = M(10) over [0.10, 0.11], and Slope = M(11) over [0.11, 1.0], endpoints in binary notation.)
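The slope betting strategy can be made concrete (my own sketch; the particular f below is an arbitrary non-decreasing example of mine): M(σ) = S_f(σ) is the slope of f over the basic dyadic interval [0.σ, 0.σ + 2^{−|σ|}], and the fairness condition holds because the increments of f over the two halves of [σ] sum to the increment over all of [σ].

```python
def slope_martingale(f):
    """Return M with M(sigma) = S_f(sigma), the slope of f over the basic
    dyadic interval [0.sigma, 0.sigma + 2^-|sigma|]."""
    def M(sigma: str) -> float:
        n = len(sigma)
        left = int(sigma, 2) / 2 ** n if sigma else 0.0
        right = left + 2.0 ** (-n)
        return (f(right) - f(left)) * 2 ** n
    return M

f = lambda x: x * x + x      # computable and non-decreasing on [0, 1]
M = slope_martingale(f)

# Fairness: M(sigma0) + M(sigma1) = 2 M(sigma).
for sigma in ["", "0", "1", "101", "0011"]:
    assert abs(M(sigma + "0") + M(sigma + "1") - 2 * M(sigma)) < 1e-9

print(M("1"), M("10"), M("11"))  # slopes over [1/2,1], [1/2,3/4], [3/4,1]
```

Since f is differentiable everywhere, M(Z↾n) converges to f′(z) for every z = 0.Z, so this particular M is bounded along every sequence.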

SLIDE 40

Slopes and their limits

For a function f: ⊆ ℝ → ℝ and a pair a, b of distinct reals, let

    S_f(a, b) = (f(a) − f(b)) / (a − b).

For f defined on the rationals, the lower and upper (pseudo-)derivatives are

    D̲f(x) = lim inf_{h→0⁺} {S_f(a, b) : a ≤ x ≤ b ∧ 0 < b − a ≤ h},
    D̄f(x) = lim sup_{h→0⁺} {S_f(a, b) : a ≤ x ≤ b ∧ 0 < b − a ≤ h},

where a, b range over rationals in [0, 1].

Example: f(x) = x sin(1/x). D̲f(0) = −1, D̄f(0) = 1.

For f defined in a neighbourhood of x and continuous at x, f′(x) exists iff D̲f(x) = D̄f(x) < ∞.
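The example can be checked numerically (my own sketch): for f(x) = x sin(1/x) with f(0) = 0, the admissible pairs at x = 0 have a = 0, so the slopes are S_f(0, b) = sin(1/b). Choosing b with 1/b = (4k+1)π/2 or (4k+3)π/2 makes the slope exactly +1 or −1 for arbitrarily small b, so D̲f(0) = −1 and D̄f(0) = 1.

```python
import math

def f(x: float) -> float:
    return x * math.sin(1.0 / x) if x != 0 else 0.0

def slope(a: float, b: float) -> float:
    return (f(a) - f(b)) / (a - b)

# At x = 0: S_f(0, b) = f(b)/b = sin(1/b).
high = [slope(0.0, 2.0 / ((4 * k + 1) * math.pi)) for k in range(1, 50)]  # sin = +1
low = [slope(0.0, 2.0 / ((4 * k + 3) * math.pi)) for k in range(1, 50)]   # sin = -1

print(max(high), min(low))  # +1 and -1, up to floating point error
```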

SLIDE 41

Slopes at basic dyadic intervals

The subscript 2 indicates restriction to basic dyadic intervals [σ], where σ is a string, containing z:

    D̄₂f(x) = lim sup_{|σ|→∞} {S_f(σ) : x ∈ [σ]}.

Recall: if f is non-decreasing then M(σ) = S_f(σ) is a betting strategy. We say that M converges on z if lim_n M(Z↾n) exists.

We have the following basic connections:
• M succeeds on z ⇔ D̄₂f(z) = ∞.
• M converges on z ⇔ D̲₂f(z) = D̄₂f(z) < ∞.

SLIDE 42

Proof of the harder direction (I) → (II)

Now suppose that z = 0.Z ∈ [0, 1] is polynomial time random. We want to show that f′(z) exists for each non-decreasing function f that is polynomial time computable.

• Consider the polynomial time computable betting strategy M(σ) = S_f(σ).
• lim_n M(Z↾n) exists and is finite for each polynomial time random Z. This is an efficient version of Doob's martingale convergence theorem.
• Therefore D̲₂f(z) = D̄₂f(z) < ∞.

SLIDE 43

Porosity

Assume for a contradiction that f′(z) fails to exist. We have oscillation of slopes of f at arbitrarily small intervals around z. We want success of a betting strategy at basic dyadic intervals corresponding to prefixes of Z.

• First suppose that D̄₂f(z) < p < D̄f(z).
• Since D̄₂f(z) < p there is a string σ* ≺ Z such that ∀σ [z ∈ [σ] ∧ σ ⪰ σ* ⇒ S_f(σ) ≤ p].
• Choose k with p(1 + 2^{−k+1}) < D̄f(z).

Let ⪯ denote the prefix relation of strings. The next lemma says that [σ*] − ⋃{[σ] : σ ⪰ σ* ∧ S_f(σ) > p} is porous at z.

Lemma (High slopes at dyadic intervals). There are arbitrarily large n such that S_f(τ_n) > p for some basic dyadic interval [τ_n] of length 2^{−n−k} which is contained in [z − 2^{−n+2}, z + 2^{−n+2}].

SLIDE 44

We may suppose σ* is the empty string, i.e., S_f(σ) ≤ p for all dyadic intervals [σ] containing z. By the lemma, there are arbitrarily large n such that S_f(τ_n) > p for some basic dyadic interval [τ_n] of length 2^{−n−k} which is contained in [z − 2^{−n+2}, z + 2^{−n+2}].

Good case: there are infinitely many n with η = Z↾(n − 4) ⪯ τ_n. Then the strategy that from such η on bets everything on the strings of length n + k other than τ_n gains a fixed factor 2^{k+4}/(2^{k+4} − 1) on Z each time. Also, it never goes down on Z, so it succeeds.

Bad case: for almost all n we have Z↾(n − 4) ⋠ τ_n. This means 0.τ_n is on the left side of z. So the strategy can't use τ_n, as it splits off from Z before η is read.

SLIDE 45

The shifting-by-1/3 trick

Fix m ∈ ℕ. For k ∈ ℤ consider an interval I = [k2^{−m}, (k + 1)2^{−m}]. For r ∈ ℤ consider an interval J = 1/3 + [r2^{−m}, (r + 1)2^{−m}]. The distance between an endpoint of I and an endpoint of J is at least 1/(3 · 2^m).

To see this: assume that |k2^{−m} − (r2^{−m} + 1/3)| < 1/(3 · 2^m). This yields |(3k − 3r − 2^m)/(3 · 2^m)| < 1/(3 · 2^m), and hence 3 | 2^m, a contradiction.
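Since everything here is rational, the bound can be verified with exact arithmetic (my own sketch, over a small range of m, k, r):

```python
from fractions import Fraction

one_third = Fraction(1, 3)

for m in range(0, 7):
    w = Fraction(1, 2 ** m)           # interval length 2^-m
    bound = Fraction(1, 3 * 2 ** m)   # claimed minimal endpoint distance
    dists = []
    for k in range(-8, 9):
        for r in range(-8, 9):
            for ei in (k * w, (k + 1) * w):                          # endpoints of I
                for ej in (one_third + r * w, one_third + (r + 1) * w):  # endpoints of J
                    dists.append(abs(ei - ej))
    assert min(dists) >= bound
print("endpoint distances are never below 1/(3 * 2^m)")
```

Every distance has the form |integer − 2^m · (something)| / (3 · 2^m) with a nonzero integer numerator, which is exactly the argument on the slide.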

SLIDE 46

Using this trick to finish the proof of (I) → (II)

We may assume that z > 1/2. In the "bad" case that Z↾(n − 4) ⋠ τ_n for almost all n, we instead bet on the dyadic expansion Y of z − 1/3.

• Given η′ = Y↾(n − 4), look for an extension τ′ ⪰ η′ of length n + k + 1 such that 1/3 + [τ′] ⊆ [τ] for a string τ with S_f(τ) > p. (Then Y ∉ [τ′].)
• If it is found, bet everything on the other extensions of η′ of that length n + k + 1.

This strategy gains a fixed factor 2^{k+5}/(2^{k+5} − 1) on Y each time n is as above. It never goes down on Y, so it succeeds. So we get a polytime martingale that wins on z − 1/3. By Figueira and N. (2013), polynomial time randomness is base invariant, so z − 1/3 is polynomial time random. This yields a contradiction.

The case D̲f(z) < D̲₂f(z) is analogous, using a "low dyadic slopes" lemma instead.

SLIDE 47

Shifted dyadic versus full differentiability

For a rational q, let D_q be the collection of intervals of the form q + [k2^{−m}, (k + 1)2^{−m}], where k ∈ ℤ, m ∈ ℕ.

Question. Let f: [0, 1] → ℝ be continuous non-decreasing, and let z ∈ (0, 1). Suppose that for each rational q,

    lim_{[a,b]∈D_q, z∈[a,b], b−a→0} S_f(a, b)

exists. Is f already differentiable at z?

SLIDE 48

5. Differentiability of Lipschitz functions

SLIDE 49

Computable randomness and Lipschitz functions

Recall that f is Lipschitz if |f(x) − f(y)| ≤ C|x − y| for some C ∈ ℕ.

Theorem (Freer, Kjos-Hanssen, N., Stephan, Computability, 2014). A real z is computably random ⇔ each computable Lipschitz function f: [0, 1] → ℝ is differentiable at z.

⇒: Write f(x) = (f(x) + Cx) − Cx. Then f(x) + Cx is computable and non-decreasing. From the monotone case (BMN), we obtain a test (martingale) for this function. If f′(z) does not exist, then z fails this test.

⇐: Turn success of a martingale on a real into oscillation of the slopes, around that real, of a Lipschitz function.
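The decomposition used in the ⇒ direction is elementary and easy to sanity-check (my own sketch, with f = sin and Lipschitz constant C = 1, both my choices): f(x) + Cx is non-decreasing, and f is recovered as (f + Cx) − Cx.

```python
import math

C = 1                         # Lipschitz constant for sin on [0, 1]
f = math.sin
g = lambda x: f(x) + C * x    # computable and non-decreasing on [0, 1]

xs = [i / 1000 for i in range(1001)]
# g is non-decreasing on the grid (its derivative is cos x + 1 >= 0):
assert all(g(xs[i]) <= g(xs[i + 1]) for i in range(1000))
# and f is recovered as g(x) - Cx:
assert all(abs((g(x) - C * x) - f(x)) < 1e-12 for x in xs)
print("f = (f + Cx) - Cx with f + Cx non-decreasing")
```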

SLIDE 50

Rademacher's theorem

Theorem (Rademacher, 1920). Let f: [0, 1]ⁿ → ℝ be Lipschitz. Then the derivative Df(z) (an element of ℝⁿ) exists for almost every vector z ∈ [0, 1]ⁿ.

To define computable randomness of a vector z ∈ [0, 1]ⁿ:
• Take the binary expansions of the n components of z.
• We can bet on the corresponding sequence of blocks of n bits.

Rute (2012) studies this notion, for instance invariance under computable measure preserving operators.

SLIDE 51

Effective form of Rademacher

Theorem (Galicki and Turetsky, arxiv.org/abs/1410.8578). z ∈ [0, 1]ⁿ is computably random ⇒ every computable Lipschitz function f: [0, 1]ⁿ → ℝ is differentiable at z.

For a vector v, the directional derivative Df(z; v) is the derivative of the function t → f(z + tv) at 0. The proof has three steps:
• all partial derivatives exist at z;
• all directional derivatives for computable directions exist;
• Df(z; v) is linear on computable directions v.

Since f is Lipschitz, this shows that f is Gâteaux differentiable at z: all directional derivatives exist, and the value is linear in the direction. Again since f is Lipschitz, this yields the full differentiability of f at z. □

SLIDE 52

Other approaches to effective Lipschitz functions

The polynomial time case is open.

Question. Suppose z ∈ [0, 1]ⁿ is polynomial time random. Is every polynomial time Lipschitz function f: [0, 1]ⁿ → ℝ differentiable at z?

• Abbas Edalat has developed an approach to differentiability of effective Lipschitz functions using domain theory. See his recent paper in TCS.
• It involves the Clarke gradient (a set-valued derivative) to get around the measure 0 set where the function is not classically differentiable.

SLIDE 53

6. Two more almost everywhere results: Carleson–Hunt (1966/68) and Weyl (1916)

SLIDE 54

Carleson–Hunt Thm (suggested by Manfred Sauter)

Theorem (Carleson, 1966 for p = 2; improved by Hunt, 1968). Let f ∈ L^p[−π, π] be a periodic function. Then the Fourier series

    c_N(z) = Σ_{|n|≤N} f̂(n) e^{inz}

converges for almost every z.

Question. Suppose f is L^p-computable for computable p > 1. Which randomness property of z suffices to make the sequence c_N(z) converge?

• We say z is weakly 2-random if z is in no null Π⁰₂ set. This properly implies Martin-Löf randomness.
• As an easy consequence of the Carleson–Hunt theorem, weak 2-randomness of z suffices: for fixed rationals α < β, the statement that, say, Re c_N(z) oscillates between values < α and > β is Π⁰₂.

SLIDE 55

Effective Weyl Theorem

Theorem (Weyl, 1916). Let (a_i)_{i∈ℕ} be a sequence of distinct integers. Then for almost every real z, the sequence a_i z mod 1 is uniformly distributed in [0, 1].

Suppose now (a_i)_{i∈ℕ} is computable. Avigad (2012) shows that
• Schnorr randomness of z suffices to make the conclusion of Weyl's theorem hold;
• there is a z satisfying the conclusion of the theorem which is in some null effectively closed set (hence not even "Kurtz random").

SLIDE 56

7. Effective ergodic theory: multiple recurrence

SLIDE 57

Classical theory

A measurable operator T on a probability space (X, B, µ) is measure preserving if µT^{−1}(A) = µA for each A ∈ B.

The following is Furstenberg's multiple recurrence theorem (1977); see Furstenberg's book on recurrence, 2014 edition, Thm. 7.15.

Theorem. Let (X, B, µ) be a probability space. Let T₁, …, T_k be commuting measure preserving operators on X. For each P ∈ B with µP > 0, there is n > 0 such that µ(⋂_i T_i^{−n}(P)) > 0.

With a little measure theory one can easily strengthen this to an "almost-everywhere" type result: for a.e. z ∈ P, ∃n [z ∈ ⋂_i T_i^{−n}(P)].

SLIDE 58

k-recurrence in Cantor space

Let X = 2^ℕ with the shift operator S: X → X that takes the first bit off a sequence.

Definition. Let P ⊆ 2^ℕ be measurable, and Z ∈ 2^ℕ. We say that Z is k-recurrent in P if S^n(Z), S^{2n}(Z), …, S^{kn}(Z) ∈ P for some n ≥ 1, i.e. Z ∈ ⋂_{1≤i≤k} S^{−in}(P).

Theorem (Downey, Nandakumar, N., in preparation). Let P ⊆ 2^ℕ be a Π⁰₁ class of positive measure. Each Martin-Löf random Z is k-recurrent in P, for each k ≥ 1.

Martin-Löf randomness is necessary even for k = 1. If Z is not ML-random, no "tail" S^n(Z) is in the Π⁰₁ class of positive measure P = {Y : ∀r K(Y↾r) ≥ r − 1} by the Levin–Schnorr Theorem.
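The definition is easy to experiment with on cylinder sets (my own sketch; P here is the clopen set of sequences starting with 1, far simpler than a general Π⁰₁ class): Z is k-recurrent in this P iff some n ≥ 1 has Z(n) = Z(2n) = … = Z(kn) = 1.

```python
def k_recurrent_witness(Z: str, k: int):
    """Search a finite prefix Z for n >= 1 with S^{in}(Z) in P for 1 <= i <= k,
    where P = {Y : Y(0) = 1}; S^m(Z) in P just means Z[m] == '1'.
    Return the least such n, or None if no witness fits in the prefix."""
    max_n = (len(Z) - 1) // k
    for n in range(1, max_n + 1):
        if all(Z[i * n] == "1" for i in range(1, k + 1)):
            return n
    return None

Z = "0101101011010110101101011010110101101011"  # periodic pattern 01011...
print(k_recurrent_witness(Z, 2), k_recurrent_witness(Z, 3))
```

Of course a finite search can only find witnesses, never rule them out; the theorem above is about infinite ML-random sequences.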

SLIDE 59

General Conjecture

It is likely that an effective multiple recurrence theorem holds in full generality for ML-randomness and Π⁰₁ sets.

Conjecture. Let (X, µ) be a computable probability space. Let T₁, …, T_n be computable measure preserving transformations that commute pairwise. Let P be a Π⁰₁ class with µP > 0. If Z ∈ P is ML-random then ∃n ⋀_i T_i^n(z) ∈ P.

• By the classical result of Furstenberg, this holds for weakly 2-random Z (i.e., Z in no null Π⁰₂ class).
• Jason Rute has pointed out that if µP is computable, then Schnorr randomness of Z is sufficient, also by the classical result.

A draft of this work is available on the 2015 Logic Blog.

SLIDE 60

Randomness and analysis: a tutorial

Part II: Lebesgue density and its applications to randomness

André Nies

CCC 2015, Kochel am See

SLIDE 61

Density

Let λ denote the uniform (Lebesgue) measure.

Definition. Let E be a subset of [0, 1]. The (lower) density of E at a real z is

    ρ̲(E | z) = lim inf_{z∈J, |J|→0} λ(J ∩ E) / |J|,

where J ranges over intervals. This gauges how much, at least, of E is in intervals that zoom in on z. The full density ρ(E | z) is the corresponding limit over intervals containing z, when it exists. Clearly ρ(E | z) = 1 ⟺ ρ̲(E | z) = 1.

slide-62
SLIDE 62

Lebesgue’s Theorem: towards an effective version

Recall: ⇢(E | z) = lim infJ interval,z2J,|J|!0 (J \ E)/|J|. Theorem (Lebesgue Density Theorem, 1910) Let E ✓ [0, 1] be measurable. Then for almost every z 2 [0, 1]: if z 2 E, then ⇢(E | z) = 1. I For open E, this is immediate, and actually holds for all z 2 [0, 1]. I For closed E, this is the simplest case where there is something to prove. E ✓ [0, 1] is effectively closed (or Π0

1) if there is an effective list

  • f open intervals with rational endpoints that has union

[0, 1] \ E. Definition (Main) We say that a real z is a density-one point if ⇢(E | z) = 1 for every effectively closed E 3 z.

3/24
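A Π⁰₁ class as above can be made concrete with a small sketch (not from the slides): at each finite stage of the effective list we know only finitely many deleted rational intervals, which yields a computable, decreasing upper bound on the measure of the class. The middle-thirds (Cantor set) enumeration below is our own example choice.

```python
from fractions import Fraction

def removed_intervals(stage):
    """First `stage` levels of open middle-third intervals deleted from [0, 1]."""
    intervals = []
    current = [(Fraction(0), Fraction(1))]
    for _ in range(stage):
        nxt = []
        for a, b in current:
            third = (b - a) / 3
            intervals.append((a + third, a + 2 * third))  # deleted open middle third
            nxt.append((a, a + third))
            nxt.append((a + 2 * third, b))
        current = nxt
    return intervals

def measure_upper_bound(stage):
    # The deleted intervals are pairwise disjoint, so summing lengths is exact.
    return 1 - sum(b - a for a, b in removed_intervals(stage))

print(measure_upper_bound(1))  # 2/3
print(measure_upper_bound(3))  # 8/27
```

The bounds (2/3)^stage converge to 0, the measure of the Cantor set; for a Π⁰₁ class of positive measure they would converge to that positive value instead.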

slide-63
SLIDE 63

Martin-Löf randomness and density

Does Martin-Löf randomness ensure that an effectively closed E ⊆ [0, 1] with z ∈ E has density one at z? Answer: NO!

Example
- Let E ≠ ∅, E ⊆ [0, 1] be an effectively closed set containing only Martin-Löf randoms.
- E.g., E = [0, 1] \ S1, where ⟨Sr⟩_{r∈N} is a universal ML-test.
- Let z = min(E).
- Then ρ(E | z) = 0 even though z is ML-random.

4/24

slide-64
SLIDE 64

Density randomness

Definition. Let E be a measurable subset of 2^N. The lower dyadic density of E at Z ∈ 2^N is

ρ₂(E | Z) = lim inf_{n→∞} 2ⁿ λ([Z↾n] ∩ E).

Definition. We say that Z ⊆ N is density random if Z is ML-random and ρ₂(P | Z) = 1 for each Π⁰₁ class P ∋ Z.

For ML-random Z, one can equivalently require that the full density (in the setting of reals) equals 1, by a result of Khan and Miller (2013).

5/24
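The quantity 2ⁿ λ([Z↾n] ∩ E) in the definition is easy to compute for a finite union of intervals, which gives a hands-on feel for ρ₂. Below is a minimal sketch; the set E = [1/3, 2/3] and the bit sequence are our own illustrative choices, not taken from the slides.

```python
from fractions import Fraction

def local_density(bits, E):
    """2^n * lambda(cylinder(bits) ∩ E), where E is a list of (a, b) intervals."""
    n = len(bits)
    left = sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))
    right = left + Fraction(1, 2 ** n)          # the cylinder [Z|n] as a real interval
    overlap = sum(max(Fraction(0), min(right, b) - max(left, a)) for a, b in E)
    return overlap * 2 ** n

E = [(Fraction(1, 3), Fraction(2, 3))]          # E = [1/3, 2/3]
z_bits = [0, 1, 1, 0, 1, 0, 1, 0]               # z = 0.01101010... in binary, z ≈ 0.41
for n in range(1, 9):
    print(n, local_density(z_bits[:n], E))
```

Since z lies in the interior of E, the local densities reach 1 after a few steps, as the density theorem predicts for this trivially "closed" E.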

slide-65
SLIDE 65

Three characterisations of density randomness

Theorem. The following are equivalent for Z ∈ 2^N, z = 0.Z.
- Z is density random.
- [Madison group, 2012] Each left-c.e. martingale M converges along Z: limₙ M(Z↾n) exists. (M is left-c.e. if M(σ) is a left-c.e. real, uniformly in the string σ.)
- [N., 2014] g′(z) exists for each interval-c.e. function g.
- [Miyabe, N., Zhang 2013] For each integrable lower semicomputable function f : [0, 1] → R⁺, the "averaging" statement of the Lebesgue differentiation theorem holds at z.

For background and complete proofs see Miyabe, N., Zhang 2013. The continuous interval-c.e. functions g with g(0) = 0 are precisely the variation functions of computable functions, by Freer et al. 2014.

6/24

slide-66
SLIDE 66
2. Anti-random sequences

000000000000000000000000000000000000000000000000000000 000000000000000000000000000000000000000000000000000000 000000000000000000000001000000000000000001111111111111 111111111111111111111111111111111111111111111100000000 000000000000000000111000000000000000000000000000000000 . . .

7/24

slide-67
SLIDE 67

Basic objects of computability theory

8/24

slide-68
SLIDE 68

Basic objects of computability theory

- The computable subsets of N

8/24

slide-69
SLIDE 69

Basic objects of computability theory

- The computable subsets of N
- the halting problem ∅′

8/24

slide-70
SLIDE 70

Basic objects of computability theory

- The computable subsets of N
- the halting problem ∅′
- Turing reducibility ≤T
- the ∆⁰₂ sets (A ≤T ∅′)

8/24

slide-71
SLIDE 71

Basic objects of computability theory

- The computable subsets of N
- the halting problem ∅′
- Turing reducibility ≤T
- the computably enumerable sets

8/24

slide-72
SLIDE 72

Adding the world of (anti-)randomness


9/24

slide-73
SLIDE 73

Adding the world of (anti-)randomness

- The Martin-Löf random sets Z, such as Chaitin's halting probability Ω. We have Ω ≡T ∅′.

9/24

slide-74
SLIDE 74

Adding the world of (anti-)randomness

- The antirandom (K-trivial) sets.

9/24

slide-75
SLIDE 75

Adding the world of (anti-)randomness

- The antirandom (K-trivial) sets. If A is K-trivial, then there is a c.e. K-trivial set D ≥tt A. (N. 2005)

9/24

slide-76
SLIDE 76

Kučera's theorem and the covering problem

10/24

slide-77
SLIDE 77

Kučera's theorem and the covering problem

Let Z be a random ∆⁰₂ set. Then there is a c.e., incomputable set A ≤T Z. (Kučera, 1986)

10/24

slide-78
SLIDE 78

Kučera's theorem and the covering problem

Let Z be random with Z ≱T ∅′. Let A ≤T Z be c.e. Then A is K-trivial. (Hirschfeldt, N., Stephan, 2007)

10/24

slide-79
SLIDE 79

Kučera's theorem and the covering problem

Let Z be random with Z ≱T ∅′. Let A ≤T Z be c.e. Then A is K-trivial. (Hirschfeldt, N., Stephan, 2007)

Covering problem (Stephan, 2004). Let A be a c.e. K-trivial set.

10/24

slide-80
SLIDE 80

Kučera's theorem and the covering problem

Let Z be random with Z ≱T ∅′. Let A ≤T Z be c.e. Then A is K-trivial. (Hirschfeldt, N., Stephan, 2007)

Covering problem (Stephan, 2004). Let A be a c.e. K-trivial set. Is there a ML-random Z ≥T A with Z ≱T ∅′?

10/24

slide-81
SLIDE 81

Kučera's theorem and the covering problem

Let Z be random with Z ≱T ∅′. Let A ≤T Z be c.e. Then A is K-trivial. (Hirschfeldt, N., Stephan, 2007)

Covering problem (Stephan, 2004). Let A be a c.e. K-trivial set. Is there a ML-random Z ≥T A with Z ≱T ∅′? We may omit the assumption that A is c.e.: if not, replace A by a c.e. K-trivial set D above A.

10/24

slide-82
SLIDE 82

A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <T ∅′ above all the K-trivials.

11/24

slide-83
SLIDE 83

A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <T ∅′ above all the K-trivials.

How random can Z be?

11/24

slide-84
SLIDE 84

A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <T ∅′ above all the K-trivials.

How random can Z be? Answer: not much more than Martin-Löf random.

11/24

slide-85
SLIDE 85

A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <T ∅′ above all the K-trivials.

How random can Z be? Answer: not much more than Martin-Löf random.

How close to ∅′ must Z lie?

11/24

slide-86
SLIDE 86

A strong solution to the covering problem

Theorem (5 + 2 authors). There is a ML-random set Z <T ∅′ above all the K-trivials.

How random can Z be? Answer: not much more than Martin-Löf random.

How close to ∅′ must Z lie? Answer: Z is very close to ∅′.

11/24

slide-87
SLIDE 87

Background on antirandom sets

12/24

slide-88
SLIDE 88

Descriptive string complexity K

Consider a partial computable function from binary strings to binary strings (called a machine). It is called prefix-free if its domain is an antichain under the prefix relation of strings. There is a universal prefix-free machine U: for every prefix-free machine M, M(σ) = y implies U(τ) = y for some τ with |τ| ≤ |σ| + d_M, where the constant d_M only depends on M.

The prefix-free Kolmogorov complexity of a string y is the length of a shortest U-description of y:

K(y) = min{|σ| : U(σ) = y}.

13/24
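The prefix-free condition and the quantity K_M are easy to check by brute force for a finite toy machine. Below is a hand-built example M of our own (not the universal U of the slide, whose domain is infinite); its domain {0, 10, 110} is an antichain under the prefix relation.

```python
# Toy prefix-free machine, given as an explicit domain -> output table.
M = {"0": "1111", "10": "0000", "110": "1111"}

def is_prefix_free(machine):
    """Check that no description is a proper prefix of another."""
    dom = list(machine)
    return not any(a != b and b.startswith(a) for a in dom for b in dom)

def K_M(y):
    """Length of a shortest M-description of y (infinity if y has none)."""
    lengths = [len(s) for s, out in M.items() if out == y]
    return min(lengths) if lengths else float("inf")

print(is_prefix_free(M))  # True
print(K_M("1111"))        # 1, via the one-bit description "0"
```

For the universal U, K = K_U is defined the same way; universality just guarantees K(y) ≤ K_M(y) + d_M for every prefix-free M.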

slide-89
SLIDE 89

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

14/24

slide-90
SLIDE 90

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ N, ∀n [K(A↾n) ≤ K(n) + b]; namely, all its initial segments have minimal K-complexity.

14/24

slide-91
SLIDE 91

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ N, ∀n [K(A↾n) ≤ K(n) + b]; namely, all its initial segments have minimal K-complexity.

It is not hard to see that K(n) ≤ 2 log₂ n + O(1).

14/24

slide-92
SLIDE 92

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ N, ∀n [K(A↾n) ≤ K(n) + b]; namely, all its initial segments have minimal K-complexity.

It is not hard to see that K(n) ≤ 2 log₂ n + O(1).

Z is random ⇔ ∀n [K(Z↾n) > n − O(1)].

14/24

slide-93
SLIDE 93

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ N, ∀n [K(A↾n) ≤ K(n) + b]; namely, all its initial segments have minimal K-complexity.

It is not hard to see that K(n) ≤ 2 log₂ n + O(1).

A is K-trivial ⇔ ∀n [K(A↾n) ≤ K(n) + O(1)].

14/24

slide-94
SLIDE 94

Definition of K-triviality

In the following, we identify a natural number n with its binary representation (as a string). For a string τ, up to an additive constant we have K(|τ|) ≤ K(τ), since we can compute |τ| from τ.

Definition (going back to Chaitin, 1975). An infinite sequence of bits A is K-trivial if, for some b ∈ N, ∀n [K(A↾n) ≤ K(n) + b]; namely, all its initial segments have minimal K-complexity.

It is not hard to see that K(n) ≤ 2 log₂ n + O(1).

Z is random ⇔ ∀n [K(Z↾n) > n − O(1)].
A is K-trivial ⇔ ∀n [K(A↾n) ≤ K(n) + O(1)].

Thus, being K-trivial means being far from random.

14/24
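The bound K(n) ≤ 2 log₂ n + O(1) can be witnessed by one standard prefix-free code (a sketch; the slides do not fix a particular encoding): double each bit of the binary representation of n and append the delimiter 01. The code words form a prefix-free set, and |code(n)| = 2|bin(n)| + 2, so a machine decoding this code gives the bound.

```python
def encode(n):
    """Double each bit of bin(n), then append the pair-aligned delimiter '01'."""
    return "".join(2 * b for b in bin(n)[2:]) + "01"

def decode(word):
    """Read bit pairs; '00'/'11' carry one bit each, '01' terminates the code."""
    bits, i = [], 0
    while word[i:i + 2] != "01":
        bits.append(word[i])
        i += 2
    return int("".join(bits), 2)

print(decode(encode(5)))   # 5: encode(5) = "11001101"
print(len(encode(100)))    # 2*7 + 2 = 16, since bin(100) has 7 bits
```

Prefix-freeness holds because a proper prefix of a code word would have to contain the aligned pair 01 before the end, and all earlier pairs are 00 or 11.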

slide-95
SLIDE 95

Connecting density and K-triviality

This is based on the following work:

[Oberwolfach] Bienvenu, Greenberg, Kučera, N., Turetsky 2012; JEMS, to appear.
[Berkeley] Day and Miller 2012; Math. Research Letters, to appear.
[Paris] Bienvenu, Miller, Hölzl and N. 2011; STACS 2012, JML 2014.

15/24

slide-96
SLIDE 96

Turing incompleteness and positive density

Definition. We say that a real z is a positive density point if ρ(E | z) > 0 for every effectively closed E ∋ z. For a real z ∉ Q, let Z ∈ 2^N denote its binary expansion: z = 0.Z.

Theorem (Paris). Let z be a Martin-Löf random real. Then

Z is NOT above the halting problem ∅′ ⇔ z is a positive density point.

16/24

slide-97
SLIDE 97

The main connection of density and K-trivials

Recall: ρ(E | z) = lim inf_{z∈J, |J|→0} λ(J ∩ E)/|J|.

Definition (Recall). We say that a real z is a density-one point if ρ(E | z) = 1 for every effectively closed E ∋ z. In other words, z satisfies the Lebesgue Density Theorem for effectively closed sets.

Theorem (Oberwolfach). Let z be a Martin-Löf random real. Suppose z is NOT a density-one point. Then Z is above all the K-trivials.

17/24

slide-98
SLIDE 98

The main connection of density and K-trivials

Theorem. Let z be a Martin-Löf random real. Suppose z is not a density-one point. Then Z is above all the K-trivials.

18/24

slide-99
SLIDE 99

The main connection of density and K-trivials

Theorem. Let z be a Martin-Löf random real. Suppose z is not a density-one point. Then Z is above all the K-trivials.

To solve the covering problem, we need to know: Does Z as in the picture exist?

18/24

slide-100
SLIDE 100

The main connection of density and K-trivials

Theorem. Let z be a Martin-Löf random real. Suppose z is not a density-one point. Then Z is above all the K-trivials.

To solve the covering problem, we need to know: Does Z as in the picture exist? Where do we get a ML-random set Z ≱T ∅′ that is not a density-one point?

18/24

slide-101
SLIDE 101

Why such a Z exists

Theorem (Berkeley, i.e., Day and Miller). Let P be a nonempty Π⁰₁ class of ML-randoms. There is a ML-random set Z ≱T ∅′ such that ρ₂(P | Z) ≤ 1/2.

[Paris] characterized difference randomness of a ML-random Z via positive density: Z ≱T ∅′ iff Z is a positive density point. Berkeley built such a set Z that is a positive density point. Note that, for the Day–Miller set Z, the local measure of P along Z, 2ⁿ λ([Z↾n] ∩ P), oscillates between 1 (asymptotically) and a value ε with 0 < ε ≤ 1/2.

19/24

slide-102
SLIDE 102

Why such a Z exists

Theorem (Berkeley). Let P be a nonempty Π⁰₁ class of ML-randoms. There is a ML-random set Z ≱T ∅′ such that ρ₂(P | Z) ≤ 1/2.

Proof. Force with conditions of the form ⟨σ, Q⟩, where
- σ is a string, Q ⊆ P, and [σ] ∩ Q ≠ ∅;
- there is δ < 1/2 such that each string τ ⪰ σ has two options: either [τ] ∩ Q = ∅, or λ([τ] ∩ Q) ≥ δ · λ([τ] ∩ P). (Within [τ], Q either loses all, or keeps the fraction δ, of P's local measure.)

⟨σ′, Q′⟩ extends ⟨σ, Q⟩ if σ′ ⪰ σ and Q′ ⊆ Q. We have an initial condition ⟨∅, P⟩ (via δ = 0). If G is a sufficiently generic filter, then Z_G = ⋃{σ : ⟨σ, Q⟩ ∈ G} is a ML-random positive density point, and ρ₂(P | Z) ≤ 1/2. Then, by Bienvenu et al., Z ≱T ∅′.

20/24

slide-103
SLIDE 103

The strongest answer to the covering question

Berkeley's careful effectivization of the forcing yields a ∆⁰₂ set Z.

21/24

slide-104
SLIDE 104

The strongest answer to the covering question

Berkeley's careful effectivization of the forcing yields a ∆⁰₂ set Z.

Theorem (Oberwolfach + Berkeley). There is a ML-random set Z <T ∅′ above all the K-trivials.

21/24

slide-105
SLIDE 105

Questions on density randomness

Question (Turetsky). Is density randomness closed downward within the ML-randoms?

This is known for most randomness notions stronger than Martin-Löf's, including for OW-randomness (by the results above).

Question (Franklin). Is density randomness equivalent to being a Birkhoff point for each computable measure preserving operator and semicomputable function?

22/24

slide-106
SLIDE 106

Book references for background

My book "Computability and Randomness", Oxford University Press, 447 pages, Feb. 2009; paperback version Mar. 2012.

Book by Downey and Hirschfeldt: "Algorithmic Randomness and Complexity", Springer, > 800 pages, Dec. 2010.

23/24

slide-107
SLIDE 107

Paper and preprint references for Part II

[Everyone] Bienvenu, Day, Greenberg, Kučera, Miller, N., Turetsky. Computing K-trivial sets by incomplete random sets. Bulletin of Symbolic Logic 20 (March 2014), pp. 80–90.

[Oberwolfach] Bienvenu, Greenberg, Kučera, N., Turetsky. Coherent randomness tests and computing the K-trivial sets. JEMS, to appear 2016.

[Berkeley] Day and Miller. Density, forcing and the covering problem. MRL, to appear.

[Paris] Bienvenu, Miller, Hölzl and N. The Denjoy alternative for computable functions. STACS 2012, 543–554. Demuth, Denjoy, and Density. J. Math. Logic 1 (2014), 1450004 (35 pages).

24/24

slide-108
SLIDE 108

SOME EXERCISES FOR THE CCC 2015 TUTORIAL

ANDRÉ NIES

(1) Functions are defined on [−1, 1]. Show that the function f(x) = x² sin(1/x), f(0) = 0, is of bounded variation, and the function g(x) = x sin(1/x), g(0) = 0, is not of bounded variation.

(2) Let Z be a bit sequence with only finitely many blocks 000 in it. Show that Z is not polynomial time random. Solution: The following polynomial time betting strategy succeeds on Z. Bet 10% on 1 and restart if the bit was 1. Else, bet 20% on 1 and restart if the bit was 1. Else, bet 50% on 1. Restart.

(3) Show that no computable bit sequence Z is computably random. Solution: Let L be the betting strategy that at stage n bets everything on Z(n). Then L succeeds on Z.

(4) Let ⟨G^e_m⟩_{e,m∈N} be an effective list of all Martin-Löf tests. Use this list to build a universal Martin-Löf test ⟨Sr⟩_{r∈N}: Z is ML-random ⇔ Z ∉ ⋂_r Sr. Solution: Sr = ⋃_{e<r} G^e_{e+r+1}. We have λSr ≤ Σ_{e<r} 2^{−(r+e+1)} ≤ 2^{−r}.

(5) Consider the computable increasing function f(x) = x sin(2π log₂ x) + 10x. Show that the lower and upper dyadic derivatives satisfy D̲₂f(0) = D̄₂f(0) = 10, but 9 = D̲f(0) < D̄f(0) = 11.

(6) Let E ≠ ∅, E ⊆ 2^ω be an effectively closed set containing only Martin-Löf randoms. Let Z = min E (in the lexicographic ordering). Show that ρ₂(E | Z) = 0.

(7) Show that each computable set is K-trivial. Show that ∅′ is not K-trivial.
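The betting strategy in the solution to exercise (2) can be simulated directly. Below is our own rendering of the 10% / 20% / 50% strategy; on any block of at most three bits ending in 1 the capital is multiplied by at least 0.9 · 0.8 · 1.5 = 1.08, so on a tail with no block 000 the capital tends to infinity.

```python
def run_strategy(bits):
    """Run the 10/20/50 betting cycle on a finite prefix; return final capital."""
    capital = 1.0
    i = 0
    stakes = [0.10, 0.20, 0.50]       # fractions of capital bet on the next bit being 1
    while i < len(bits):
        for f in stakes:
            stake = f * capital
            if bits[i] == 1:
                capital += stake      # won: restart the 10/20/50 cycle
                i += 1
                break
            capital -= stake          # lost: move to the next, larger bet
            i += 1
            if i >= len(bits):
                break
        # after three 0s in a row the for-loop ends and the cycle also restarts
    return capital

# A sequence with no block 000 at all: repeat 001.
Z = [0, 0, 1] * 200
print(run_strategy(Z) > 1e6)          # True: capital grows like 1.08^200
```

The strategy inspects one bit at a time and does constant work per bit, so it is polynomial time, as the solution requires.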
