SLIDE 1

Fractal Intersections and Products via Algorithmic Dimension

Neil Lutz Rutgers University June 26, 2017

SLIDE 2

Goal:

Use algorithmic information theory to answer fundamental questions in fractal geometry.

Agenda:

◮ Discuss classical and algorithmic notions of dimension.
◮ Describe a recent point-to-set principle that relates them.
◮ Describe a notion of conditional dimension.
◮ Apply these new tools to bound the classical dimension of products and slices of fractals.
◮ Slices are a special case of intersections: one of the sets is a vertical line.

SLIDE 8

What is dimension?

Informally, it's the number of free parameters: the number of parameters needed to specify an arbitrary element inside a set, given a description of the set.

[Figures: a filled square, 2 parameters; a curve, 1 parameter; a fractal, ???]

We want a way to quantitatively classify sets of measure zero.

Example: Suppose an algorithm succeeds with probability 1 but fails in the worst case. How much control does an adversary need to have over the environment to ensure failure?

SLIDE 12

Fractal Dimension: Measure-Theoretic Approach

How strongly does granularity affect measurement of the set?

[Image credit: Alexis Monnerot-Dumaine]

Let Nε = number of boxes with side ε needed to cover the set. Consider

lim_{ε→0} Nε · ε^s.

This limit is infinite for s = 1 (infinite length) and 0 for s = 2 (zero area). In fact, the limit is positive and finite for at most one value of s.
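The box-counting recipe above is directly computable. A minimal sketch, not from the slides: sample points of the Sierpinski triangle with the chaos game, count occupied dyadic boxes at scales ε = 2^−k, and fit the slope of log Nε against log(1/ε). The sampler, sample size, and range of scales are illustrative assumptions.

```python
import math
import random

def sierpinski_points(n=20000, seed=0):
    """Sample points of the Sierpinski triangle via the chaos game."""
    rng = random.Random(seed)
    verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
    x, y = 0.25, 0.25
    pts = []
    for i in range(n + 100):
        vx, vy = rng.choice(verts)       # jump halfway toward a random vertex
        x, y = (x + vx) / 2, (y + vy) / 2
        if i >= 100:                     # discard burn-in iterations
            pts.append((x, y))
    return pts

def box_counting_dimension(pts, ks=range(2, 8)):
    """Least-squares slope of log N_eps versus log(1/eps), eps = 2**-k."""
    xs, ys = [], []
    for k in ks:
        scale = 2 ** k
        boxes = {(int(px * scale), int(py * scale)) for px, py in pts}
        xs.append(k * math.log(2))       # log(1/eps)
        ys.append(math.log(len(boxes)))  # log N_eps
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))

print(box_counting_dimension(sierpinski_points()))  # roughly log2(3) ≈ 1.585
```

The fitted slope wobbles with the sample size and the chosen scales; it only approximates the limit that defines the dimension.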

SLIDE 16

Hausdorff Dimension

The most standard, robust notion of fractal dimension.

Hs(E) = s-dimensional Hausdorff measure of a set E ⊆ Rn, with Hs(E) ∈ [0, ∞]. (Generalizes integer-dimensional Lebesgue outer measure.)

Hausdorff 1919: The Hausdorff dimension of E is

dimH(E) = inf{s : Hs(E) = 0}.

[Figure: Hs(E) as a function of s jumps from ∞ to 0 at the critical value s∗ = dimH(E).]

It is often difficult to prove lower bounds on dimH(E).
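For reference, the standard definition of the s-dimensional Hausdorff measure (not spelled out on the slide) takes countable covers of E by sets of small diameter:

```latex
\mathcal{H}^s(E) \;=\; \lim_{\delta \to 0^{+}} \; \inf\Big\{ \sum_{i=1}^{\infty} \operatorname{diam}(U_i)^s \;:\; E \subseteq \bigcup_{i=1}^{\infty} U_i,\ \operatorname{diam}(U_i) \le \delta \Big\}.
```

For s = n this agrees with Lebesgue outer measure on Rn up to a constant factor, which is the sense in which it generalizes integer-dimensional measure.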

SLIDE 19

Example: Dimension of the Sierpinski triangle

Convenient fact: This set has Hausdorff dimension equal to its box-counting dimension, and Nε = Θ(ε^(−log₂ 3)).

lim_{ε→0} Nε · ε^s can only be positive and finite for s = log₂ 3, so the Sierpinski triangle has Hausdorff dimension log₂ 3 ≈ 1.585.

In what sense is this the number of free parameters?

SLIDE 25

Example: Dimension of the Sierpinski triangle

[Figure: the four quadrants at each recursion level carry the 2-bit addresses 00, 01, 10, 11; only three of them contain points of the set.]

We can think of the first bit and second bit at each recursion level as two parameters. 2r bits approximate a point within ≈ 2−r error. But for points within the fractal set, these parameters are not independent of each other: the 2r bits are compressible as data to length ≈ r log₂ 3. In this sense, we only need log₂ 3 ≈ 1.585 parameters to specify a point within the set.
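The compressibility claim can be made concrete in a toy model (an assumption for illustration, not the slide's exact encoding): in a dyadic gasket where the quadrant address 11 never occurs, each recursion level has only three possible 2-bit addresses, so r levels pack as r base-3 digits into about r·log₂ 3 bits instead of 2r bits.

```python
import math
import random

QUADS = ["00", "01", "10"]  # in this toy model the "11" quadrant never occurs

def random_address(r, seed=0):
    """r levels of quadrant choices for a point of the dyadic gasket."""
    rng = random.Random(seed)
    return [rng.randrange(3) for _ in range(r)]

def naive_bits(addr):
    """Plain encoding: one x bit and one y bit per level, 2r bits total."""
    return "".join(QUADS[a] for a in addr)

def packed_bits(addr):
    """Pack the levels as base-3 digits of one integer: ~log2(3) bits/level."""
    n = 0
    for a in addr:
        n = 3 * n + a
    return n.bit_length()

r = 64
addr = random_address(r)
print(len(naive_bits(addr)))  # 128 bits = 2r
print(packed_bits(addr))      # at most ceil(r * log2(3)) = 102 bits
print(r * math.log2(3))       # ≈ 101.45
```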

slide-26
SLIDE 26

Algorithmic Information in Bit Strings

We need a formal notion of compressibility: The Kolmogorov complexity of a bit string σ ∈ {0, 1}∗ is the length of the shortest binary program that outputs σ: K(σ) = min

|π| : U(π) = σ ,

where U is a universal Turing machine.

slide-27
SLIDE 27

Algorithmic Information in Bit Strings

We need a formal notion of compressibility: The Kolmogorov complexity of a bit string σ ∈ {0, 1}∗ is the length of the shortest binary program that outputs σ: K(σ) = min

|π| : U(π) = σ ,

where U is a universal Turing machine.

◮ It matters little which U is chosen for this.

slide-28
SLIDE 28

Algorithmic Information in Bit Strings

We need a formal notion of compressibility: The Kolmogorov complexity of a bit string σ ∈ {0, 1}∗ is the length of the shortest binary program that outputs σ: K(σ) = min

|π| : U(π) = σ ,

where U is a universal Turing machine.

◮ It matters little which U is chosen for this. ◮ K(σ) = amount of algorithmic information in σ.

slide-29
SLIDE 29

Algorithmic Information in Bit Strings

We need a formal notion of compressibility: The Kolmogorov complexity of a bit string σ ∈ {0, 1}∗ is the length of the shortest binary program that outputs σ: K(σ) = min

|π| : U(π) = σ ,

where U is a universal Turing machine.

◮ It matters little which U is chosen for this. ◮ K(σ) = amount of algorithmic information in σ. ◮ K(σ) ≤ |σ| + o(|σ|).

SLIDE 30

Algorithmic Information in Bit Strings

We need a formal notion of compressibility: The Kolmogorov complexity of a bit string σ ∈ {0, 1}∗ is the length of the shortest binary program that outputs σ:

K(σ) = min{|π| : U(π) = σ},

where U is a universal Turing machine.

◮ It matters little which U is chosen for this.
◮ K(σ) = amount of algorithmic information in σ.
◮ K(σ) ≤ |σ| + o(|σ|).
◮ Extends naturally to other finite data objects, e.g., points in Qn.
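K(σ) itself is uncomputable, but any real compressor gives a computable upper bound in the same spirit, up to machine-dependent constants. A sketch using zlib as an illustrative stand-in (it is not the universal machine U):

```python
import os
import zlib

def compressed_len_bits(s: bytes) -> int:
    """zlib-compressed length in bits: a computable, machine-dependent
    upper-bound stand-in for K (true Kolmogorov complexity is uncomputable)."""
    return 8 * len(zlib.compress(s, 9))

structured = b"01" * 4096      # highly regular string: compresses well
random_ish = os.urandom(8192)  # incompressible with overwhelming probability

print(compressed_len_bits(structured), "of", 8 * len(structured), "bits")
print(compressed_len_bits(random_ish), "of", 8 * len(random_ish), "bits")
```

The regular string compresses to a small fraction of its length, while the random bytes come out slightly longer than they went in, mirroring K(σ) ≤ |σ| + o(|σ|) with equality-like behavior for random strings.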

SLIDE 34

Algorithmic Information in Euclidean Spaces

Points in Rn are infinite data objects.

The Kolmogorov complexity of a set E ⊆ Qn is K(E) = min{K(q) : q ∈ E}. (Shen and Vereschagin 2002)

The Kolmogorov complexity of a set E ⊆ Rn is K(E) = K(E ∩ Qn).

Note that E ⊆ F ⇒ K(E) ≥ K(F).

SLIDE 36

Algorithmic Information in Euclidean Spaces

Let x ∈ Rn and r ∈ N. The Kolmogorov complexity of x at precision r is

Kr(x) = K(B2−r(x)),

i.e., the number of bits required to specify some rational point q ∈ Qn such that |q − x| ≤ 2−r.

We say x is (algorithmically) random if Kr(x) ≥ nr − O(1). Fact: Almost all points are random.

SLIDE 41

Algorithmic Dimension

At precision r, x ∈ Rn has information density 0 ≤ Kr(x)/r ≤ n + o(1).

J. Lutz and Mayordomo: The algorithmic dimension of x ∈ Rn is

dim(x) = lim inf_{r→∞} Kr(x)/r.

Examples:

◮ If x is computable, then there is a finite program that outputs x precisely, so Kr(x) = O(1) and dim(x) = 0.
◮ If x ∈ Rn is random, then nr − O(1) ≤ Kr(x) ≤ nr + o(r), so dim(x) = n.
◮ The converse does not hold in either case.
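These two examples can be illustrated numerically with a compression proxy for Kr(x)/r. This is only an upper-bound stand-in: true Kr is uncomputable and os.urandom gives pseudo-randomness, so the numbers are suggestive, not definitive.

```python
import os
import zlib

def dim_proxy(bits: str) -> float:
    """Compressed length of an r-bit prefix divided by r: a computable
    upper-bound stand-in for K_r(x)/r."""
    r = len(bits)
    packed = int(bits, 2).to_bytes((r + 7) // 8, "big")  # 8 bits per byte
    return 8 * len(zlib.compress(packed, 9)) / r

computable = "01" * 50000      # eventually periodic expansion, e.g. of 1/3
pseudo_random = "".join(format(b, "08b") for b in os.urandom(12500))

print(dim_proxy(computable))     # near 0, matching dim = 0 for computable x
print(dim_proxy(pseudo_random))  # near 1, matching dim = 1 for random x in R
```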

SLIDE 44

Aren't points supposed to have dimension 0?

For the Sierpinski triangle T, we have dimH(T) = sup_{x∈T} dim(x).

This relationship does not hold in general: Consider the singleton {y}, where y ∈ Rn is random. Then dimH({y}) = 0, but

sup_{x∈{y}} dim(x) = dim(y) = n.

But we said dimension is the number of free parameters needed to specify a point given a description of the set. The universal machine reading our program to estimate x ∈ E ought to have access to a description of E.
SLIDE 49

Relative Dimension

The Kolmogorov complexity of a bitstring σ ∈ {0, 1}∗ relative to an oracle w ∈ {0, 1}∞ is

Kw(σ) = min{|π| : Uw(π) = σ},

where U is a universal oracle machine: it can query any bit of w as a computational step.

The dimension of a point x ∈ Rn relative to oracle w is

dimw(x) = lim inf_{r→∞} Kw_r(x)/r.

◮ Note that the oracle can encode a point in Rn.
◮ For all x ∈ Rn, dimx(x) = 0.

SLIDE 52

Point-to-Set Principle (Lutz & Lutz '17)

For every set E ⊆ Rn,

dimH(E) = min_w sup_{x∈E} dimw(x),

where the left side is classical Hausdorff dimension and the right side is built from dimensions of individual points.

∴ In order to prove a lower bound dimH(E) ≥ α, it is enough to show that for every oracle w and ε > 0, there is some point x ∈ E with dimw(x) ≥ α − ε.

slide-53
SLIDE 53

Conditional Dimension

The conditional Kolomogorov complexity of p ∈ Qm given q ∈ Qn: K(p|q) = min

|π| : π ∈ {0, 1}∗ and U(π, q) = p .

slide-54
SLIDE 54

Conditional Dimension

The conditional Kolomogorov complexity of p ∈ Qm given q ∈ Qn: K(p|q) = min

|π| : π ∈ {0, 1}∗ and U(π, q) = p .

The conditional Kolmogorov complexity of E ⊆ Qm given F ⊆ Qn: K(E|F) = max

q∈F min p∈E K(p|q) .

SLIDE 55

Conditional Dimension

The conditional Kolmogorov complexity of p ∈ Qm given q ∈ Qn:

K(p|q) = min{|π| : π ∈ {0, 1}∗ and U(π, q) = p}.

The conditional Kolmogorov complexity of E ⊆ Qm given F ⊆ Qn:

K(E|F) = max_{q∈F} min_{p∈E} K(p|q).

The conditional Kolmogorov complexity of x ∈ Rm at precision r given y ∈ Rn at precision s:

Kr,s(x|y) = K(B2−r(x) | B2−s(y)).

SLIDE 57

Conditional Dimension

Definition (Lutz & Lutz '17)

The conditional dimension of x ∈ Rm given y ∈ Rn is

dim(x|y) = lim inf_{r→∞} Kr,r(x|y)/r.

◮ Obeys a chain rule: dim(x, y) ≥ dim(x|y) + dim(y).
◮ Bounded below by relative dimension: dim(x|y) ≥ dimy(x).

SLIDE 58

Product Theorem (Marstrand 1954)

For all E ⊆ Rm and F ⊆ Rn,

dimH(E × F) ≥ dimH(E) + dimH(F).

[Figure: E on one axis, F on the other, and the product E × F.]

Easy for Borel sets. Was significantly more difficult for general sets.

SLIDE 65

Product Theorem (Marstrand 1954)

For all E ⊆ Rm and F ⊆ Rn, dimH(E × F) ≥ dimH(E) + dimH(F).

Proof. By the point-to-set principle, there is an oracle w such that

dimH(E × F) = sup_{(x,y)∈E×F} dimw(x, y),

and for every ε > 0 there exist x ∈ E and y ∈ F such that dimw(x) ≥ dimH(E) − ε and dimw,x(y) ≥ dimH(F) − ε.

For this x and y,

dimH(E × F) ≥ dimw(x, y)
           ≥ dimw(x) + dimw(y|x)
           ≥ dimw(x) + dimw,x(y)
           ≥ dimH(E) + dimH(F) − 2ε.

Let ε → 0.

SLIDE 66

Slicing Theorem (Marstrand 1954)

Let E ⊆ R2 be a Borel set with dimH(E) ≥ 1, and let Ex be the vertical slice of E at x. Then for almost all x ∈ R,

dimH(Ex) ≤ dimH(E) − 1.

[Figure: the set E and the vertical slice Ex.]

SLIDE 75

Slicing Theorem for Arbitrary Sets (N. Lutz '16)

Let E ⊆ R2 be any set with dimH(E) ≥ 1, and let Ex be the vertical slice of E at x. Then for almost all x ∈ R, dimH(Ex) ≤ dimH(E) − 1.

Proof. By the point-to-set principle, there is an oracle w such that

dimH(E) = sup_{(x,y)∈E} dimw(x, y),

and for all ε > 0 and x ∈ R, there is a point (x, y) ∈ Ex such that dimw,x(x, y) ≥ dimH(Ex) − ε.

Since (x, y) ∈ E, we have

dimH(E) ≥ dimw(x, y)
        ≥ dimw(x) + dimw(y|x)
        ≥ dimw(x) + dimw,x(y)
        = dimw(x) + dimw,x(x, y)
        ≥ dimw(x) + dimH(Ex) − ε.

Recall that dimw(x) = 1 for almost all x ∈ R, and let ε → 0.

SLIDE 80

Conclusion

Algorithmic dimension provides a simple, intuitive, and powerful approach to problems in classical fractal geometry.

◮ This approach has also been used to bound the dimension of generalized Furstenberg sets (related to Kakeya sets).
◮ Although the simple proofs in this work operated at the "higher level" of dimension, the Furstenberg-set proof is significantly more involved and reasons about Kolmogorov complexity directly.
◮ Objective: Further strengthen the connections between geometric measure theory and algorithmic information theory, i.e., generalize and refine the point-to-set principle.
◮ Broader project: Systematically re-examine the foundations of fractal geometry through this pointwise lens.