SLIDE 1

PREFERENTIAL ATTACHMENT GRAPHS ARE SOMEWHERE-DENSE

Jan Dreier, Philipp Kuinke, Peter Rossmanith

TACO 2018

RWTH Aachen University

SLIDE 2

MOTIVATION

SLIDE 3

Sparsity

[Hierarchy diagram of sparse graph classes: linear forests, star forests, forests, bounded treedepth, bounded treewidth, outerplanar, planar, bounded genus, excluding a minor, excluding a topological minor, bounded degree, bounded expansion, locally bounded treewidth, locally bounded expansion, locally excluding a minor, nowhere dense. Image by Felix Reidl.]

SLIDE 7

Sparsity: r-shallow topological minor

G ∇ r: the set of all r-shallow topological minors of G.

ω(G ∇ r) = max_{H ∈ G ∇ r} ω(H)   (clique size)

SLIDE 10

Sparsity

Definition (Nowhere-dense): A graph class G is nowhere-dense if there exists a function f such that for all r and all G ∈ G, ω(G ∇ r) ≤ f(r).

Definition (Somewhere-dense): A graph class G is somewhere-dense if for all functions f there exist an r and a G ∈ G such that ω(G ∇ r) > f(r).

G is not nowhere-dense ⇔ G is somewhere-dense
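To make the definitions concrete: a one-subdivided clique K_k (every edge replaced by a path of length 2) is triangle-free, so ω(G) = 2, yet K_k itself is a 1-shallow topological minor of it, so ω(G ∇ 1) ≥ k. A small brute-force sketch (my own illustration, not code from the talk; function names are hypothetical):

```python
from itertools import combinations

def clique_number(n, edges):
    """Brute-force omega(G) for a small graph on vertices 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for size in range(n, 1, -1):
        for cand in combinations(range(n), size):
            if all(b in adj[a] for a, b in combinations(cand, 2)):
                return size
    return 1 if n else 0

def one_subdivided_clique(k):
    """K_k with every edge subdivided once; returns (num_vertices, edges)."""
    edges, nxt = [], k
    for u, v in combinations(range(k), 2):
        edges += [(u, nxt), (nxt, v)]  # replace uv by the path u-nxt-v
        nxt += 1
    return nxt, edges

n, edges = one_subdivided_clique(5)
print(clique_number(n, edges))  # 2: subdividing destroys all triangles
```

Since k can be made arbitrarily large while ω(G) stays 2, the class of one-subdivided cliques is already somewhere-dense at depth r = 1.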

SLIDE 17

Typical Properties of Complex Networks

• low diameter (small-world property)
• locally dense, but globally sparse
• heavy-tailed degree distribution
• clustering
• community structure
• scale-freeness

SLIDE 18

Random Graphs

Random graphs with the goal of modeling real-world data:
• mathematically analyzable
• can generate instances of arbitrary size

SLIDE 19

Sparse in the limit

Definition (a.a.s. nowhere-dense): A random graph model G is a.a.s. nowhere-dense if there exists a function f such that for all r,

lim_{n→∞} P[ω(G_n ∇ r) ≤ f(r)] = 1,

where G_n is a random variable modeling a graph with n vertices randomly drawn from G.

SLIDE 20

Sparse in the limit

Definition (a.a.s. somewhere-dense): A random graph model G is a.a.s. somewhere-dense if for all functions f there exists an r such that

lim_{n→∞} P[ω(G_n ∇ r) > f(r)] = 1,

where G_n is a random variable modeling a graph with n vertices randomly drawn from G.

SLIDE 27

Sparse in the limit (not as clear cut)

Assume a random graph on n vertices that is complete with probability p and empty with probability 1 − p:

1. p = 1/n → a.a.s. nowhere-dense
2. p = 1 − 1/n → a.a.s. somewhere-dense
3. p = 1/2 → neither!

SLIDE 28

Preferential attachment graphs

“The rich get richer”: preferential attachment, Barabási–Albert graphs.

• Start with some small fixed graph.
• Add vertices one by one; connect each new vertex to m existing vertices, chosen with probability proportional to their degrees.

Interesting properties: power-law degree distribution, scale-freeness.
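A minimal simulation of this process (a sketch of the standard Barabási–Albert dynamics; the talk's exact model, e.g. its self-loop convention, may differ):

```python
import random

def pa_graph(n, m, seed=0):
    """Preferential attachment sketch: start from a clique on m+1
    vertices, then attach each new vertex to m existing vertices
    chosen with probability proportional to their current degrees."""
    rng = random.Random(seed)
    edges = [(u, v) for u in range(m + 1) for v in range(u + 1, m + 1)]
    chances = []  # each vertex appears once per unit of degree
    for u, v in edges:
        chances += [u, v]
    for new in range(m + 1, n):
        targets = [rng.choice(chances) for _ in range(m)]
        for t in targets:
            edges.append((t, new))
            chances += [t, new]
    return edges

edges = pa_graph(1000, 2)
degs = {}
for u, v in edges:
    degs[u] = degs.get(u, 0) + 1
    degs[v] = degs.get(v, 0) + 1
print(max(degs.values()))  # heavy tail: a few very high-degree hubs
```

The `chances` list trick (one entry per unit of degree) makes degree-proportional sampling O(1) per draw.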

SLIDE 30

Preferential attachment graphs

m = 2, n = 100:   E[d_m^n(v_i)] ∼ m · √(n/i)
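The √(n/i) scaling of the expected degree can be checked empirically (a sketch under the same simplified dynamics as above; constants depend on the exact model variant):

```python
import random
from collections import Counter

def pa_degrees(n, m, rng):
    """Degree sequence of one preferential-attachment run (sketch)."""
    deg, chances = Counter(), []
    for u in range(m + 1):               # seed clique on m+1 vertices
        for v in range(u + 1, m + 1):
            deg[u] += 1
            deg[v] += 1
            chances += [u, v]
    for new in range(m + 1, n):
        for _ in range(m):
            t = rng.choice(chances)      # degree-proportional choice
            deg[t] += 1
            deg[new] += 1
            chances += [t, new]
    return deg

rng = random.Random(1)
n, m, i, runs = 2000, 2, 10, 100
avg = sum(pa_degrees(n, m, rng)[i] for _ in range(runs)) / runs
print(avg, m * (n / i) ** 0.5)  # same order of magnitude
```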

SLIDE 31

TAIL BOUNDS

SLIDE 35

Degree Concentrations

• Tail bounds exist for the number of vertices with degree d [Bollobás et al. 2001].
• Via martingales + the Azuma–Hoeffding inequality.
• Does not work for large d (i.e., of order √n).
• But we need high-degree vertices!

SLIDE 40

Concentration of a single vertex

No vertex is sharply concentrated!

P[d_1^n(v_t) = 1] = ∏_{i=t+1}^{n} (1 − 1/(2i − 1)) ≥ 1/n

We cannot hope for general exponential bounds.
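The probability that v_t keeps degree 1 can be evaluated directly (a sketch; I take the product over steps i = t+1, …, n for the m = 1 model with the 2i − 1 degree-sum convention):

```python
def stay_degree_one(t, n):
    """P[d_1^n(v_t) = 1]: vertex v_t (degree 1 at its birth, m = 1)
    gains no further neighbor in steps t+1, ..., n."""
    p = 1.0
    for i in range(t + 1, n + 1):
        p *= 1 - 1 / (2 * i - 1)
    return p

for n in (100, 10_000, 1_000_000):
    print(n, stay_degree_one(1, n))  # decays like 1/sqrt(n), not exponentially
```

The polynomial decay is exactly why no general exponential tail bound can hold for a single vertex.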

SLIDE 41

Concentration of a single vertex

[Plot: distribution of d_1^{10000}(v_1).]

SLIDE 42

Concentration of a single vertex

[Plot: distribution of d_1^{10000}(v_1), conditioned on d_1^{100}(v_1) = 18.]

SLIDE 43

Concentration of a single vertex

[Plot: distribution of d_1^{10000}(v_1), conditioned on d_1^{1000}(v_1) = 56.]

SLIDE 44

The rich stay rich

Theorem: Let 0 < ε ≤ 1/40, t, m, n ∈ ℕ with t > 1/ε⁶, and S ⊆ {v_1, …, v_t}. Then

P[ (1 − ε)·√(n/t)·d_m^t(S) < d_m^n(S) < (1 + ε)·√(n/t)·d_m^t(S) for all n ≥ t │ d_m^t(S) ] ≥ 1 − ln(15t)·e^{−ε^{O(1)}·d_m^t(S)}.

SLIDE 48

The rich stay rich

Theorem (the approximate version): Let ε ≥ 0, t, m, n ∈ ℕ, and S ⊆ {v_1, …, v_t}. Then

P[ (1 − ε)·E[d_m^n(S)] < d_m^n(S) < (1 + ε)·E[d_m^n(S)] │ d_m^t(S) ] ≥ 1 − e^{−ε·d_m^t(S)}

• The rich stay rich.
• At first there is chaos.
• If we have information at time t, we can better predict time n > t.
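The "rich stay rich" behavior shows up in simulation: conditioned on its degree at time t, a vertex's degree at time n > t concentrates around √(n/t) times the earlier value. A sketch under the same simplified dynamics as before:

```python
import random

def track_degree(n, m, vertex, times, rng):
    """One preferential-attachment run (sketch); returns the degree
    of `vertex` at each time in `times`."""
    deg, chances = {}, []
    for u in range(m + 1):            # seed clique on m+1 vertices
        for v in range(u + 1, m + 1):
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
            chances += [u, v]
    snap = {}
    for new in range(m + 1, n + 1):
        for _ in range(m):
            t = rng.choice(chances)   # degree-proportional choice
            deg[t] = deg.get(t, 0) + 1
            deg[new] = deg.get(new, 0) + 1
            chances += [t, new]
        if new in times:
            snap[new] = deg.get(vertex, 0)
    return snap

rng = random.Random(7)
t, n = 100, 10_000
ratios = []
for _ in range(30):
    s = track_degree(n, 2, 0, {t, n}, rng)
    ratios.append(s[n] / s[t])
mean = sum(ratios) / len(ratios)
print(mean, (n / t) ** 0.5)  # mean ratio near sqrt(n/t) = 10
```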

SLIDE 49

Proving the theorem


SLIDE 52

SOMEWHERE-DENSE

SLIDE 54

Large cliques

Theorem: G_m^n contains a.a.s. a one-subdivided clique of size ∼ log(n).

Corollary: G_m^n is a.a.s. somewhere-dense for m ≥ 2.

SLIDE 57

How we get principals

• k sets of vertices.
• If a set of s vertices has total degree d, one vertex must have degree at least d/s.
• → Ensure with tail bounds that it also has high degree in the future.

SLIDE 58

Building cliques


SLIDE 71

Connecting principals: Why we need √i

Step i: √i red balls, √i blue balls, the remainder black. Two balls are drawn; success if one is red and one is blue.
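The urn step can be simulated (a sketch; I assume roughly i balls in total at step i, which is not stated on the slide):

```python
import random

def step_success_prob(i, trials, rng):
    """Urn at step i: sqrt(i) red, sqrt(i) blue, rest black.
    Draw two balls without replacement; success = one red + one blue."""
    s = int(i ** 0.5)
    balls = ["r"] * s + ["b"] * s + ["k"] * (i - 2 * s)
    hits = 0
    for _ in range(trials):
        a, b = rng.sample(balls, 2)   # two distinct balls
        hits += (a, b) in (("r", "b"), ("b", "r"))
    return hits / trials

rng = random.Random(3)
i = 400
est = step_success_prob(i, 20_000, rng)
print(est, 2 / i)  # both about 0.005
```

The per-step success probability is 2·√i·√i/(i·(i − 1)) ≈ 2/i, matching the estimate.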

SLIDE 75

Connecting principals: Why we need √i

With √i red and √i blue balls among roughly i balls, P[success in step i] ≈ 2 · √i · √i / i² = 2/i.

SLIDE 76

Connecting principals: Why we need √i

[The same success-probability calculation, with sets of size i/log(i) in place of √i.]

SLIDE 77

CONCLUSION

SLIDE 84

Conclusion

• Tail bounds for vertices whose degree at an earlier time is known.
• The tail bounds could be used to prove further structure.
• Preferential attachment graphs are a.a.s. somewhere-dense.
• The FO model-checking algorithm for nowhere-dense classes is not directly applicable.
• What about more general PA graphs with a δ parameter?
  – δ = 0: our model
  – δ = ∞: uniform attachment

SLIDE 85

References

Barabási, Albert-László and Réka Albert (1999). “Emergence of scaling in random networks”. In: Science 286.5439, pp. 509–512.

Bollobás, Béla, Oliver Riordan, Joel Spencer and Gábor Tusnády (2001). “The Degree Sequence of a Scale-free Random Graph Process”. In: Random Structures & Algorithms 18.3, pp. 279–290. ISSN: 1042-9832.

Grohe, Martin, Stephan Kreutzer and Sebastian Siebertz (2017). “Deciding First-Order Properties of Nowhere Dense Graphs”. In: Journal of the ACM 64.3, p. 17.

Nešetřil, Jaroslav and Patrice Ossona de Mendez (2012). Sparsity. Springer.

van der Hofstad, Remco (2016). Random Graphs and Complex Networks. Vol. 1. Cambridge University Press.