Breaking the Linear-Memory Barrier in Massively Parallel Computing
MIS on Trees with Strongly Sublinear Memory
Sebastian Brandt, Manuela Fischer, Jara Uitto (ETH Zurich)
Model: Massively Parallel Computing (MPC)
A parallel computing framework inspired by MapReduce.
Karloff, Suri, Vassilvitskii [SODA'10]
M machines, S memory per machine, M · S = Õ(n + m)
Synchronous rounds: local computation at every machine, then communication between machines.
Complexity: number of rounds
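The round structure above can be made concrete with a toy, single-process simulation (our illustration only; names like `mpc_round` are not from the talk): each machine computes locally on its own data, then messages are routed, and no machine may receive more than S words.

```python
S = 4                               # memory words per machine (tiny, for illustration)
M = 3                               # number of machines
machines = [[1, 5], [2, 8], [3]]    # input spread over machines, M * S >= input size

def mpc_round(machines, local_compute, address):
    """One synchronous round: local computation phase, then communication phase."""
    outboxes = [local_compute(data) for data in machines]
    inboxes = [[] for _ in machines]
    for msgs in outboxes:
        for item in msgs:
            inboxes[address(item)].append(item)
    assert all(len(box) <= S for box in inboxes), "memory budget exceeded"
    return inboxes

# Example round: route every item to machine (item mod M), a hash partition.
machines = mpc_round(machines, lambda data: data, lambda item: item % M)
```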
Superlinear Memory: S = Θ̃(n^(1+ε)), 0 < ε ≤ 1. Machines see all nodes.
For many problems, admits O(1)-round algorithms based on a very simple sampling approach. Lattanzi et al. [SPAA'11]

Linear Memory: S = Θ̃(n). Machines see all nodes.
The usual assumption, but: Õ(n) memory per machine is prohibitively large, and sparse graphs become trivial.

Strongly Sublinear Memory: S = Θ̃(n^α), 0 ≤ α < 1. No machine sees all nodes.
For most problems, only LOCAL/PRAM algorithms are known.

Algorithms have been stuck at this linear-memory barrier! Fundamentally?
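The "very simple sampling approach" in the superlinear regime can be sketched in a generic sample-and-filter form (our illustration of the flavor only, not the exact SPAA'11 procedure): repeatedly pull a sample that fits in one machine's memory, solve greedily there, and filter out everything the partial solution already dominates.

```python
import random

def greedy_mis(vertices, adj):
    # sequential greedy MIS on an induced subgraph
    mis, blocked = set(), set()
    for v in vertices:
        if v not in blocked:
            mis.add(v)
            blocked |= adj[v] | {v}
    return mis

def sample_and_filter_mis(adj, sample_size, seed=0):
    rng = random.Random(seed)
    alive, mis = set(adj), set()
    while alive:
        sample = set(rng.sample(sorted(alive), min(sample_size, len(alive))))
        local = greedy_mis(sorted(sample), {v: adj[v] & sample for v in sample})
        mis |= local
        dominated = set().union(*(adj[v] for v in local)) if local else set()
        alive -= sample | dominated     # sampled nodes are decided, dominated nodes die
    return mis

adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}  # 5-cycle
mis = sample_and_filter_mis(adj, sample_size=3)
```

Independence and maximality hold for any random choices: vertices adjacent to the growing MIS are filtered out before they can be sampled again.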
Breaking the Linear-Memory Barrier:
Efficient MPC Graph Algorithms with Strongly Sublinear Memory

S = O(n^α) local memory, M = O(n / n^α) machines, poly(log log n) rounds.
Best we can hope for: conditional lower bound of Ω(log log n) rounds. Ghaffari, Kuhn, Uitto [FOCS'19]

Imposed Locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.

Our Approach to Cope with Locality: enhance LOCAL algorithms with global communication.
▪ exponentially faster than LOCAL algorithms due to shortcuts
▪ polynomially less memory than most MPC algorithms
Independent Set: a set of pairwise non-adjacent nodes.
Maximal: no node can be added without violating independence.
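The two conditions above, checked directly on a small example (a path on four nodes, as an adjacency dict):

```python
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}

def is_independent(s, adj):
    # no two nodes of s share an edge
    return all(u not in adj[v] for v in s for u in s if u != v)

def is_maximal_independent(s, adj):
    # maximal: every node outside s already has a neighbor in s
    return is_independent(s, adj) and all(v in s or adj[v] & s for v in adj)
```

For instance, `{0}` is independent but not maximal (node 2 could still be added), while `{1, 3}` is a maximal independent set of the path.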
Known round complexities for MIS, by memory regime:

Strongly Sublinear Memory, S = Θ̃(n^α): Luby's Algorithm, O(log n); Ghaffari and Uitto [SODA'19], Õ(√log n); Our Result (trees), O(log³ log n)
Linear Memory, S = Θ̃(n): Ghaffari et al. [PODC'18], O(log log n); trivial solution, O(1)
Superlinear Memory, S = Θ̃(n^(1+ε)): Lattanzi et al. [SPAA'11], O(1); trivial solution, O(1)
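Luby's algorithm from the table can be sketched on a single machine as follows: in each round every remaining node draws a random priority and joins the MIS iff it beats all remaining neighbors; winners and their neighbors then leave the graph. W.h.p. this takes O(log n) rounds.

```python
import random

def luby_mis(adj, seed=0):
    rng = random.Random(seed)
    alive, mis, rounds = set(adj), set(), 0
    while alive:
        rounds += 1
        prio = {v: rng.random() for v in alive}
        winners = {v for v in alive
                   if all(prio[v] < prio[u] for u in adj[v] & alive)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v]       # neighbors of new MIS nodes are dominated
        alive -= removed
    return mis, rounds

adj = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {0, 3}}  # 5-cycle
mis, rounds = luby_mis(adj)
```

Termination is guaranteed because the globally smallest priority always wins its neighborhood, so each round removes at least one node.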
Main Result: an O(log³ log n)-round MPC algorithm with S = Õ(n^α) memory that w.h.p. computes an MIS on trees.

Compare:
Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds with S = Õ(n^α) memory
Ghaffari et al. [PODC'18]: O(log log n) rounds with S = Õ(n) memory
Ghaffari, Kuhn, and Uitto [FOCS'19]: conditional Ω(log log n)-round lower bound for S = Õ(n^α)
The shattering framework (main LOCAL technique, Beck [RSA'91]):

1) Shattering: break the graph into small components
i) Degree Reduction
ii) LOCAL Shattering, Ghaffari [SODA'16]

2) Post-Shattering: solve the problem on the remaining components
i) Gathering of Components
ii) Local Computation
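The two phases can be sketched as a single-machine skeleton (our illustration): phase 1 is abstracted as any partial MIS whose removal shatters the graph; phase 2 gathers each remaining component and finishes it by local greedy computation.

```python
def components(vertices, adj):
    # connected components of the subgraph induced by `vertices`
    seen, comps = set(), []
    for s in vertices:
        if s not in seen:
            comp, stack = set(), [s]
            while stack:
                v = stack.pop()
                if v not in comp:
                    comp.add(v)
                    stack.extend(adj[v] & (vertices - comp))
            seen |= comp
            comps.append(comp)
    return comps

def post_shattering(adj, partial_mis):
    removed = set(partial_mis)
    for v in partial_mis:
        removed |= adj[v]                    # dominated nodes are done
    remaining = set(adj) - removed
    mis = set(partial_mis)
    for comp in components(remaining, adj):  # "gather" each component, solve locally
        for v in sorted(comp):
            if not (adj[v] & mis):
                mis.add(v)
    return mis

# Path 0-1-2-3-4-5; the partial MIS {1} leaves the small component {3, 4, 5}.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 5}, 5: {4}}
mis = post_shattering(adj, {1})
```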
Polynomial Degree Reduction: Subsample-and-Conquer

Subsample: subsample nodes independently.
Conquer: compute a random MIS in the subsampled graph:
▪ gather connected components
▪ locally compute a random 2-coloring
▪ add a color class to the MIS

Independence is due to the restriction to trees: the subsampled neighbors of a non-subsampled node lie in distinct components of the subsampled forest, so their MIS membership is decided independently.

Non-subsampled high-degree node:
▪ w.h.p. has many subsampled neighbors
▪ thus w.h.p. has at least one MIS neighbor
▪ hence will be removed from the graph
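One Subsample-and-Conquer step can be sketched on a single machine as follows (the sampling probability p is a free illustrative knob here, not the paper's exact choice). The tree fact used: a node of degree d has no subsampled neighbor with probability (1-p)^d ≤ exp(-p·d), and its subsampled neighbors join the MIS independently, so high-degree nodes get an MIS neighbor, and are removed, w.h.p.

```python
import random

def subsample_and_conquer(adj, p, seed=0):
    rng = random.Random(seed)
    sampled = {v for v in adj if rng.random() < p}     # Subsample
    mis, seen = set(), set()
    for s in sampled:                                  # Conquer, component by component
        if s in seen:
            continue
        comp, stack = {s: 0}, [s]
        while stack:                                   # gather the component, 2-color it
            v = stack.pop()
            for u in adj[v] & sampled:
                if u not in comp:
                    comp[u] = 1 - comp[v]
                    stack.append(u)
        seen.update(comp)
        if len(comp) == 1:
            mis |= set(comp)                           # a lone sampled node always joins
        else:
            keep = rng.randrange(2)                    # a random color class joins the MIS
            mis |= {v for v, c in comp.items() if c == keep}
    return sampled, mis

# Star: center 0 with leaves 1..9 (a tree with one high-degree node).
adj = {0: set(range(1, 10)), **{i: {0} for i in range(1, 10)}}
sampled, mis = subsample_and_conquer(adj, p=0.5)
```

In a connected bipartite component, each color class dominates the other, so the kept class is maximal within its component; edges between sampled nodes never cross components, so the union over components stays independent.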
Post-Shattering, Gathering of Components, via: Distributed Union-Find; Iterated Subsample-and-Conquer.
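Component gathering can be built on a union-find (disjoint-set) structure; the following is the textbook sequential version, not the paper's distributed variant.

```python
class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

# Merging the endpoints of every edge makes roots identify components.
uf = UnionFind(range(6))
for a, b in [(0, 1), (1, 2), (4, 5)]:
    uf.union(a, b)
```

After the merges, two nodes share a root exactly when they lie in the same component.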
MODEL: Sublinear-Memory MPC, S = Θ̃(n^α) local memory, poly(log log n) rounds
APPROACH: LOCAL algorithms & global communication
TECHNIQUE: Shattering
PROBLEM: MIS

Open question: more general graph families? MIS & Matching for locally sparse graphs in follow-up work [PODC'19]

Thank you!