

SLIDE 1

Recap of graphs and graphons Results Strategy First goal Second goal Tradeoff between goals

Universal Behavior near Erdős-Rényi

Lorenzo Sadun

University of Texas at Austin

ICERM; February 11, 2015. Joint work with Rick Kenyon, Charles Radin and Kui Ren.

SLIDE 2-8

Outline

1 Recap of graphs and graphons

2 Results

3 Strategy

4 First goal

5 Second goal

6 Tradeoff between goals

SLIDE 9-11

Graphons and Densities

A graphon is a map g : [0,1]² → [0,1] with g(x,y) = g(y,x).

Edge density: e(g) = ∫₀¹ ∫₀¹ g(x,y) dx dy.

Triangle density: t(g) = ∫₀¹ ∫₀¹ ∫₀¹ g(x,y) g(y,z) g(z,x) dx dy dz.

Graphon entropy: s(g) = −∫₀¹ ∫₀¹ I₀(g(x,y)) dx dy, where I₀(u) = ½ [u ln(u) + (1 − u) ln(1 − u)].
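These definitions are easy to check numerically: discretize [0,1] into n cells, so a graphon becomes a symmetric n × n matrix and the integrals become averages over grid midpoints. A minimal Python sketch (my own illustration, not from the talk; the sample graphon g(x,y) = (x+y)/2 is an arbitrary choice):

```python
import numpy as np

# Discretize [0,1] into n cells; a graphon becomes a symmetric n x n matrix
# and integrals become averages over grid midpoints.
n = 200
x = (np.arange(n) + 0.5) / n

# Sample symmetric graphon g(x, y) = (x + y) / 2 (an arbitrary illustration).
G = (x[:, None] + x[None, :]) / 2

def edge_density(G):
    # e(g) = double integral of g(x, y)
    return G.mean()

def triangle_density(G):
    # t(g) = triple integral of g(x,y) g(y,z) g(z,x), i.e. Tr(G^3) / n^3
    m = G.shape[0]
    return np.trace(G @ G @ G) / m**3

def entropy(G):
    # s(g) = -double integral of I0(g), with I0(u) = [u ln u + (1-u) ln(1-u)] / 2
    I0 = 0.5 * (G * np.log(G) + (1 - G) * np.log(1 - G))
    return -I0.mean()

e, t, s = edge_density(G), triangle_density(G), entropy(G)
# For this g: e = 1/2 and t = 5/32 analytically; s lies in (0, ln(2)/2].
```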

SLIDE 12-15

Counting entropy

S(e₀, t₀) measures how many graphs have edge density e₀ and triangle density t₀:

S(e₀, t₀) = max{ s(g) | e(g) = e₀, t(g) = t₀ }.

For fixed e, S(e, t) is maximized by the Erdős-Rényi graphon g(x,y) = e, at t = e³, with S(e, e³) = −I₀(e).

What happens when t is only close to e³?
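For the constant (Erdős-Rényi) graphon g ≡ e, the formulas can be verified directly: t = e³ and s = −I₀(e). A quick numerical sanity check (my own sketch, using the same discretization as before):

```python
import numpy as np

# Constant (Erdos-Renyi) graphon g(x, y) = e identically, discretized.
n, e0 = 100, 0.6
G = np.full((n, n), e0)

t = np.trace(G @ G @ G) / n**3   # triangle density: should be exactly e0^3
I0 = 0.5 * (e0 * np.log(e0) + (1 - e0) * np.log(1 - e0))
s = -(0.5 * (G * np.log(G) + (1 - G) * np.log(1 - G))).mean()  # entropy: -I0(e0)
```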

SLIDE 16

Table of Contents

SLIDE 17

Phase portrait for edge-triangle model

[Figure: "Schematic Profile and Phase Portrait": triangle density τ versus edge density ε; the achievable region R is bounded above by τ = ε^(3/2) and below by a scalloped curve through τ = ε(2ε − 1); marked points (0,0), (1/2,0), (1,1); phase regions I, II, III.]

SLIDE 18-23

What I'm going to show you:

As t → e³ from above, S(e, e³) − S(e, t) goes as (t − e³)¹, i.e. linearly.

As t → e³ from below, S(e, e³) − S(e, t) goes as (e³ − t)^(2/3).

The optimizing graphon takes a specific form just above the ER curve.

The results above ER are universal, and apply to any model with edge density and one other graph density. The other graph can be a triangle, a k-star, K_n, anything.

Below ER, S(e, e³) − S(e, t) always goes as (e³ − t)^(2/n) for some n > 2. (Generically n = 3, but not always.)

Caveat: for some graphs, the density is minimized at ER. In those models, results below ER are moot.

SLIDE 24

S(e, t) for fixed e

[Figure: sketch of S(e, t) as a function of t for fixed e, with its maximum at t = e³.]

SLIDE 25

Graphon just above ER curve

[Figure: heat map of the optimizing graphon for e = 0.6, t = 0.2201.]

SLIDE 26

Table of Contents

SLIDE 27-31

Goals

For t ≈ e³, pick g(x, y) = e + ∆g(x, y) to get the most |∆t| for the least |∆S|.

Break into sub-goals:

Get the most |∆t| for the least ‖∆g‖.
Get the most ‖∆g‖ for the least |∆S|.

Rewrite the sub-goals as:

Maximize |∆t| for fixed ‖∆g‖.
Minimize |∆S| for fixed ‖∆g‖.

SLIDE 32-33

Use the L² norm

Treat g and ∆g as integral kernels of operators on L²([0,1]).

‖∆g‖² = ∫₀¹ ∫₀¹ (∆g(x,y))² dx dy = ∫₀¹ ∫₀¹ ∆g(x,y) ∆g(y,x) dx dy = Tr(∆g²).
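After discretization this identity is elementary: the squared L² norm of a symmetric kernel equals the normalized trace of its square. A quick check (my own sketch, with an arbitrary random symmetric kernel standing in for ∆g):

```python
import numpy as np

# A random symmetric kernel standing in for Delta g.
rng = np.random.default_rng(0)
n = 50
A = rng.uniform(-0.1, 0.1, (n, n))
dG = (A + A.T) / 2

# Squared L2 norm: double integral of Delta g(x, y)^2 ...
norm_sq = (dG**2).mean()
# ... equals the normalized trace of (Delta g)^2, by symmetry of the kernel.
trace_sq = np.trace(dG @ dG) / n**2
```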

SLIDE 34

Table of Contents

SLIDE 35-38

Maximize |∆t| for fixed ‖∆g‖

t = Tr(g³) = Tr((g₀ + ∆g)³)
  = e³ + 3e² ∫₀¹ ∫₀¹ ∆g(x,y) dx dy + 3e ∫₀¹ ∫₀¹ ∫₀¹ ∆g(x,y) ∆g(x,z) dx dy dz + Tr((∆g)³)
  = e³ + 3e ∫₀¹ h(x)² dx + Tr((∆g)³),

where h(x) := ∫₀¹ ∆g(x,y) dy. (The first-order term vanishes because the edge density is fixed, so ∫∫ ∆g = 0.)
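The expansion can be verified numerically for a small random perturbation with ∫∫ ∆g = 0 (a sketch of my own, following the earlier discretization conventions):

```python
import numpy as np

# Random symmetric perturbation with zero mean, so the edge density stays e0.
rng = np.random.default_rng(1)
n, e0 = 60, 0.5
A = rng.uniform(-0.05, 0.05, (n, n))
dG = (A + A.T) / 2
dG -= dG.mean()
G = e0 + dG

t = np.trace(G @ G @ G) / n**3   # actual triangle density of g = g0 + Delta g
h = dG.mean(axis=1)              # h(x) = integral of Delta g(x, y) dy
# Expansion: e^3 + 3e * integral h^2 + Tr((Delta g)^3); first-order term vanishes.
predicted = e0**3 + 3 * e0 * (h**2).mean() + np.trace(dG @ dG @ dG) / n**3
```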

SLIDE 39-45

Analysis when ∆t < 0

∆t = 3e ∫₀¹ h(x)² dx + Tr((∆g)³).

Want h(x) = 0 and ∆g of rank 1: ∆g(x,y) = −ν α(x) α(y), with normalization ∫₀¹ α(x) dx = 0 and ∫₀¹ α(x)² dx = 1.

With this choice, ∆t = −ν³ and ‖∆g‖ = ν = |∆t|^(1/3).

For all choices of ∆g, ‖∆g‖ ≥ |∆t|^(1/3).

What should α(x) be? IT DOESN'T MATTER!
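A sketch of the rank-1 construction (my own; the particular α(x) ∝ 1 − 2x is an arbitrary admissible choice, normalized on the grid). With h = 0 the quadratic term drops out, leaving ∆t = −ν³ and ‖∆g‖ = ν:

```python
import numpy as np

n, nu = 80, 0.1
x = (np.arange(n) + 0.5) / n

# An admissible alpha (arbitrary choice), normalized on the grid so that
# integral alpha = 0 and integral alpha^2 = 1.
alpha = 1 - 2 * x
alpha -= alpha.mean()
alpha /= np.sqrt((alpha**2).mean())

dG = -nu * np.outer(alpha, alpha)   # rank-1: Delta g = -nu alpha(x) alpha(y)

h = dG.mean(axis=1)                 # vanishes, since integral alpha = 0
dt = 3 * 0.5 * (h**2).mean() + np.trace(dG @ dG @ dG) / n**3  # here e = 1/2
norm = np.sqrt((dG**2).mean())      # L2 norm of Delta g
# dt = -nu^3 and norm = nu = |dt|^(1/3), independent of the choice of alpha.
```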

SLIDE 46-51

Analysis when ∆t > 0

∆t = 3e ∫₀¹ h(x)² dx + Tr((∆g)³).

Want h(x) as big as possible. The optimum is rank 2: ∆g(x,y) = h(x) + h(y). (This makes Tr((∆g)³) = 0.)

‖∆g‖² = 2 ∫₀¹ h(x)² dx = (2/(3e)) ∆t, so ‖∆g‖ goes as |∆t|^(1/2).

What should h(x) be? IT DOESN'T MATTER!
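The rank-2 construction can be checked the same way (my own sketch; the particular h ∝ 1 − 2x is an arbitrary choice with ∫ h = 0):

```python
import numpy as np

n, e0 = 80, 0.5
x = (np.arange(n) + 0.5) / n

# An arbitrary h with integral h = 0 (required so the edge density is unchanged).
h = 0.05 * (1 - 2 * x)
h -= h.mean()

dG = h[:, None] + h[None, :]          # rank-2: Delta g(x, y) = h(x) + h(y)

cubic = np.trace(dG @ dG @ dG) / n**3  # vanishes because integral h = 0
dt = 3 * e0 * (h**2).mean() + cubic
norm_sq = (dG**2).mean()               # equals 2 * integral h^2 = (2/(3e)) dt
```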

SLIDE 52-53

Estimates

When ∆t < 0: ‖∆g‖ ≥ |∆t|^(1/3), with equality exactly when ∆g(x,y) = −ν α(x) α(y) for some α that integrates to zero.

When ∆t > 0: ∆t < (3e/2) ‖∆g‖² + ‖∆g‖³.

SLIDE 54-56

Summary for the first goal

For ∆t < 0, the optimal ∆g is rank 1, with ‖∆g‖ = |∆t|^(1/3).

For ∆t > 0, the optimal ∆g is rank 2 to lowest order, with ‖∆g‖ going as |∆t|^(1/2).

The optimal solutions involve arbitrary functions α(x) or h(x). Picking the right function is at the heart of the second goal.

SLIDE 57-60

Universality

If we replace the triangle with another graph, the analysis is almost the same.

t is the integral of a polynomial in g = g₀ + ∆g. The quadratic term is always a positive multiple of ∫₀¹ h(x)² dx.

With one class of exceptions, all higher-order terms are bounded by ‖∆g‖³. The exceptions are harmless, but require a little more work.

SLIDE 61

Table of Contents

SLIDE 62-65

Maximize s(g) for fixed ‖∆g‖²

This is exactly the same problem for the triangle as for any other graph.

Same as maximizing s(g) for fixed (e − 1/2)² + ‖∆g‖².

Same as maximizing s(g) for fixed ‖g_{1/2}‖², where g_{1/2}(x, y) = g(x, y) − 1/2.

Claim: that's achieved when g_{1/2}(x, y)² is constant.

SLIDE 66-71

A little Taylor series

I₀(u) = ½ (u ln(u) + (1 − u) ln(1 − u)).

I₀ is an even function of u − 1/2, and all of its even derivatives at u = 1/2 are positive.

So I₀(u) is a power series in (u − 1/2)² with positive coefficients (except the constant term), and I₀(u) is concave up (convex) as a function of (u − 1/2)².

Since ∫∫ (g − 1/2)² is fixed, s(g) is maximized by taking (g − 1/2)² constant (Jensen's inequality for the convex function I₀ of (g − 1/2)²).
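The positive-coefficient claim can be made explicit by a short computation (my own, not on the slides), starting from I₀'(u) = ½ ln(u/(1−u)) and integrating the artanh series term by term:

```latex
% With v = u - 1/2:
%   I_0'(u) = \tfrac12 \ln\frac{u}{1-u} = \operatorname{artanh}(2v)
%           = \sum_{k \ge 0} \frac{(2v)^{2k+1}}{2k+1}.
% Integrating in v gives a power series in (u - 1/2)^2 with positive coefficients:
I_0(u) \;=\; -\frac{\ln 2}{2} \;+\; \sum_{k=1}^{\infty}
  \frac{\bigl(2(u-\tfrac12)\bigr)^{2k}}{2\,(2k-1)(2k)} .
```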

SLIDE 72-74

Estimates

|∆S| is bounded below by a power series in ‖∆g‖² with (computable) positive coefficients.

When ∆t < 0, |∆S| is bounded below by a power series in |∆t|^(2/3) with positive coefficients.

When ∆t > 0, |∆S| is bounded below by a power series in |∆t| with positive coefficients.

SLIDE 75

Table of Contents

SLIDE 76-77

The special case e = 1/2, t < 1/8

Can achieve both goals exactly: α(x) = ±1.

[Figure: symmetric bipodal graphon on the half/half partition of [0,1]², taking the values e + ν and e − ν on its four blocks.]
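The bipodal construction can be checked numerically (my own sketch, with the block layout taken from the ∆g = −ν α(x)α(y) formula): with α = ±1 on the two halves, the graphon keeps e = 1/2 exactly while moving t down to 1/8 − ν³:

```python
import numpy as np

n, nu = 100, 0.1                       # n even, 0 < nu < 1/2
# alpha = +1 on [0, 1/2), -1 on [1/2, 1]; integrates to 0, alpha^2 = 1.
alpha = np.where(np.arange(n) < n // 2, 1.0, -1.0)
G = 0.5 - nu * np.outer(alpha, alpha)  # symmetric bipodal graphon, values 1/2 -+ nu

e = G.mean()                           # stays exactly 1/2
t = np.trace(G @ G @ G) / n**3         # equals 1/8 - nu^3
```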

SLIDE 78-79

What happens when e ≠ 1/2 and t is just under e³?

Can't satisfy both goals exactly, but can satisfy the first and come close on the second.

When e < 1/2, the best seems to be symmetric bipodal. When e > 1/2, the best seems to be asymmetric bipodal.

SLIDE 80-82

What happens when t is just over e³?

Compromise between ∆g(x,y) = h(x) + h(y) and |g(x,y) − 1/2|² constant.

[Figure: heat map of the optimizing graphon for e = 0.6, t = 0.2201.]

The failure is only on a tiny square, of area ∼ ‖∆g‖⁴.

SLIDE 83-84

What I showed you:

For edge-triangle, as t → e³ from above, |∆S| ∼ |∆t|¹.

For edge-triangle, as t → e³ from below, |∆S| ∼ |∆t|^(2/3).

For edge-triangle, the optimizing graphon takes a specific form just above the ER curve: highly asymmetric bipodal, with ∆g(x,y) ≈ h(x) + h(y).

The results above ER are universal, and apply to any model with edge density and one other graph density.

Below ER, |∆S| always goes as |∆t|^(2/n) for some n > 2, since the negative terms in ∆t are cubic or higher in ∆g.