Breaking the Linear-Memory Barrier in Massively Parallel Computing - PowerPoint PPT Presentation



SLIDE 1

Breaking the Linear-Memory Barrier in Massively Parallel Computing

MIS on Trees with Strongly Sublinear Memory

Sebastian Brandt, Manuela Fischer, Jara Uitto (ETH Zurich)

SLIDE 2

Massively Parallel Computing (MPC)

Model: parallel computing framework inspired by MapReduce

Karloff, Suri, Vassilvitskii [SODA'10]

SLIDE 6

Massively Parallel Computing (MPC) Model

N machines, T memory per machine, N · T = Õ(m + n)

Synchronous Rounds:

  • 1. Local Computation at every machine
  • 2. Global Communication between machines

Complexity: number of rounds
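As an aside (not part of the deck): the round structure above can be sketched in a few lines of Python. The `mpc_round` helper and the T-word inbox check are illustrative assumptions, not the talk's formalism; they only show the "local computation, then global communication" rhythm whose round count is the complexity measure.

```python
# Illustrative sketch of one synchronous MPC round: every machine first
# computes locally, then all messages are delivered simultaneously.

def mpc_round(machine_states, local_compute, T):
    """Run one round. `local_compute(mid, state)` returns (new_state, messages),
    where messages is a list of (destination_machine, payload) pairs."""
    outboxes = {}
    for mid, state in machine_states.items():
        # 1. Local computation at every machine.
        new_state, messages = local_compute(mid, state)
        machine_states[mid] = new_state
        outboxes[mid] = messages
    # 2. Global communication between machines.
    inboxes = {mid: [] for mid in machine_states}
    for mid, messages in outboxes.items():
        for dest, payload in messages:
            inboxes[dest].append((mid, payload))
    for mid, inbox in inboxes.items():
        # A machine's memory is bounded by T, so its inbox must fit into it.
        assert len(inbox) <= T, "a machine may receive at most T words"
        machine_states[mid]["inbox"] = inbox
    return machine_states
```

For example, two machines that each send one greeting to the other would both end the round with exactly one message in their inbox.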

SLIDE 16

Local Memory in MPC

N machines, T memory per machine, N · T = Õ(m + n)

[Diagram: local-memory regimes Õ(n^ε) / Θ̃(n) / Ω̃(n^{1+ε})]

Superlinear Memory: T = Õ(n^{1+ε}), 0 < ε ≤ 1. Machines see all nodes.
▪ often trivial: for many problems, admits O(1)-round algorithms based on a very simple sampling approach, Lattanzi et al. [SPAA'11]

Linear Memory: T = Õ(n). Machines see all nodes. The usual assumption.
▪ often unrealistic: Õ(n) is prohibitively large, and sparse graphs become trivial

Strongly Sublinear Memory: T = Õ(n^ε), 0 ≤ ε < 1. No machine sees all nodes.
▪ for most problems, only direct simulation of LOCAL/PRAM algorithms is known

Algorithms have been stuck at this linear-memory barrier! Fundamentally?

SLIDE 32

Breaking the Linear-Memory Barrier: Efficient MPC Graph Algorithms with Strongly Sublinear Memory

T = Õ(n^ε) local memory, N = Õ((m + n)/n^ε) machines, poly log log n rounds

Ghaffari, Kuhn, Uitto [FOCS'19]: conditional lower bound of Ω(log log n) rounds, so this is the best we can hope for.

Imposed Locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.

Our Approach to Cope with Locality: enhance LOCAL algorithms with global communication
▪ exponentially faster than LOCAL algorithms due to shortcuts
▪ polynomially less memory than most MPC algorithms

SLIDE 49

Maximal Independent Set (MIS)

Problem:

Independent Set: set of non-adjacent nodes
Maximal: no node can be added without violating independence
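The definition can be made concrete with a short sketch (not part of the deck): the greedy routine below is the textbook sequential construction, shown only to illustrate what "independent" and "maximal" mean, not the talk's algorithm.

```python
# Greedily build an independent set by scanning nodes and blocking
# neighbours of chosen nodes; the result is automatically maximal.

def greedy_mis(adj):
    """adj: dict node -> set of neighbours. Returns a maximal independent set."""
    mis, blocked = set(), set()
    for v in sorted(adj):        # any scan order yields some maximal set
        if v not in blocked:
            mis.add(v)           # independent: v has no neighbour in mis
            blocked |= adj[v]    # its neighbours can no longer join
            blocked.add(v)
    return mis

# Path on 4 nodes: 0-1-2-3
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
mis = greedy_mis(adj)
# Independence: no edge inside the set.
assert all(u not in adj[v] for v in mis for u in mis)
# Maximality: every non-member has a neighbour in the set.
assert all(adj[v] & mis for v in adj if v not in mis)
```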

SLIDE 55

MIS: State of the Art

N machines, T memory per machine, N · T = Õ(m + n)

Superlinear Memory (T = Õ(n^{1+ε}), 0 < ε ≤ 1; machines see all nodes):
▪ Lattanzi et al. [SPAA'11]: O(1) rounds

Linear Memory (T = Õ(n); machines see all nodes):
▪ Luby's Algorithm: O(log n) rounds
▪ Ghaffari et al. [PODC'18]: O(log log n) rounds

Strongly Sublinear Memory (T = Õ(n^ε), 0 ≤ ε < 1; no machine sees all nodes):
▪ Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds

SLIDE 60

MIS: State of the Art on Trees

Superlinear Memory: trivial solution, O(1) rounds
Linear Memory: trivial solution, O(1) rounds
Strongly Sublinear Memory: Our Result, O(log³ log n) rounds

SLIDE 63

Our Result

O(log³ log n)-round MPC algorithm with T = Õ(n^ε) memory that w.h.p. computes an MIS on trees.

Compare:
▪ Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds with T = Õ(n^ε) memory
▪ Ghaffari et al. [PODC'18]: O(log log n) rounds with T = Õ(n) memory
▪ Ghaffari, Kuhn, and Uitto [FOCS'19]: conditional Ω(log log n)-round lower bound for T = Õ(n^ε)

SLIDE 67

Algorithm Outline

1) Shattering: break graph into small components (main LOCAL technique: Beck [RSA'91])
   i) Degree Reduction
   ii) LOCAL Shattering, Ghaffari [SODA'16]

2) Post-Shattering: solve problem on remaining components
   i) Gathering of Components
   ii) Local Computation
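The two-phase outline can be sketched as follows. This is a toy single-machine version under loud assumptions: it substitutes Luby-style rounds for Ghaffari's [SODA'16] LOCAL shattering and a greedy finish for the MPC gathering step, keeping only the shape "a few randomized rounds, then solve each small leftover component locally".

```python
import random

def components(adj, nodes):
    """Connected components of the subgraph induced by `nodes`."""
    seen, comps = set(), []
    for s in nodes:
        if s in seen:
            continue
        comp, stack = set(), [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(u for u in adj[v] if u in nodes)
        seen |= comp
        comps.append(comp)
    return comps

def shatter_then_solve(adj, shatter_rounds=3, seed=0):
    rng = random.Random(seed)
    live, mis = set(adj), set()
    # 1) Shattering: a few randomized rounds; w.h.p. only small components remain.
    for _ in range(shatter_rounds):
        prio = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(prio[v] < prio[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for v in winners for u in adj[v]}
    # 2) Post-Shattering: gather each remaining component, solve it locally.
    for comp in components(adj, live):
        for v in sorted(comp):
            if all(u not in mis for u in adj[v]):
                mis.add(v)
    return mis
```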

SLIDE 84

Subsample-and-Conquer

Polynomial Degree Reduction:

Subsample: subsample nodes independently
Conquer: compute a random MIS in the subsampled graph
▪ gather connected components
▪ locally compute a random 2-coloring
▪ add a color class to the MIS

slide-103
SLIDE 103

Subsample-and-Conquer

Polynomial Degree Reduction:

Subsample: subsample nodes independently
Conquer: compute a random MIS in the subsampled graph
▪ gather connected components
▪ locally compute a random 2-coloring
▪ add a color class to the MIS

Non-subsampled high-degree node:
▪ w.h.p. has many subsampled neighbors
▪ thus w.h.p. has at least one MIS neighbor
▪ hence will be removed from the graph
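The claim that a non-subsampled high-degree node w.h.p. has a subsampled neighbor is a standard independent-coins calculation; a tiny numeric check (the degree and probability values are illustrative, not from the talk):

```python
import math

def prob_no_subsampled_neighbor(degree, p):
    """Chance that a node with `degree` neighbors, each kept independently
    with probability p, ends up with no subsampled neighbor at all."""
    return (1.0 - p) ** degree

# Since 1 - p <= e^{-p}, we get (1 - p)^d <= e^{-pd}: once d is on the
# order of (log n)/p, the failure probability is polynomially small in n.
for d in (10, 100, 1000):
    assert prob_no_subsampled_neighbor(d, 0.1) <= math.exp(-0.1 * d)
```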

slide-106
SLIDE 106

Subsample-and-Conquer

Independence due to the restriction to trees!

slide-111
SLIDE 111

Algorithm Outline

1) Shattering: break the graph into small components
   i) Degree Reduction (Iterated Subsample-and-Conquer)
   ii) LOCAL Shattering, Ghaffari [SODA’16]

2) Post-Shattering: solve the problem on the remaining components
   i) Gathering of Components (Distributed Union-Find)
   ii) Local Computation
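The "Gathering of Components" step relies on a union-find structure; a minimal sequential version can serve as a mental model (this sketch is not the distributed algorithm from the talk):

```python
class UnionFind:
    """Minimal sequential union-find with path compression and union by
    size; a stand-in for the distributed variant used to gather the
    shattered components onto single machines."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        # Walk up to the root, then compress the path behind us.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # Attach the smaller tree under the larger one's root.
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
```

In the MPC setting, the parent pointers would be sharded across machines and chased by pointer jumping over communication rounds; the sequential version only illustrates the component-merging invariant.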

slide-112
SLIDE 112

Conclusion and Open Questions

slide-113
SLIDE 113

MODEL:

Sublinear-Memory MPC: T = Õ(n^ε) local memory, poly(log log n) rounds

APPROACH:

LOCAL algorithms & global communication

TECHNIQUE:

Shattering

PROBLEM:

MIS on trees
slide-122
SLIDE 122

MODEL:

Sublinear-Memory MPC: T = Õ(n^ε) local memory, poly(log log n) rounds

APPROACH:

LOCAL algorithms & global communication

TECHNIQUE:

Shattering

PROBLEM:

MIS on trees

Open questions:
  • other graph problems? more general graph families?
    (MIS & Matching for locally sparse graphs in follow-up work [PODC’19])
  • other LOCAL techniques?
  • other approaches?

Thank you!