Local Memory in MPC

M machines, S memory per machine, M · S = Õ(n + m)

Strongly Sublinear Memory: S = Õ(n^φ), 0 ≤ φ < 1
No machine sees all nodes. Only direct simulation of LOCAL/PRAM algorithms known.

Linear Memory: S = Θ̃(n)
Machines see all nodes. Usual assumption for most problems; for sparse graphs, Õ(n) memory per machine is prohibitively large.

Superlinear Memory: S = Õ(n^(1+φ)), 0 < φ ≤ 1
Machines see all nodes. Often unrealistic; many problems become trivial, admitting O(1)-round algorithms based on a very simple sampling approach.

Algorithms have been stuck at this linear-memory barrier! Lattanzi et al. [SPAA'11]
Fundamentally?
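To make the regimes concrete, here is a small sketch (not from the talk; the function name and the choice to ignore the Õ log factors are mine) computing the local memory S = n^φ and the machine count M needed so that M · S covers an input of n nodes and m edges:

```python
import math

def mpc_regime(n, m, phi):
    """Local memory S = n**phi per machine and the machine count M
    needed so that M * S >= n + m (log factors ignored)."""
    S = int(n ** phi)
    M = math.ceil((n + m) / S)
    return S, M

# A sparse graph with n = 10**6 nodes and m = 2 * 10**6 edges.
print(mpc_regime(10**6, 2 * 10**6, 0.5))  # strongly sublinear: (1000, 3000)
print(mpc_regime(10**6, 2 * 10**6, 1.0))  # linear: S = n, only 3 machines
```

In the strongly sublinear regime each machine holds only n^φ = 1000 of the 10^6 nodes, so no single machine can even see the whole node set.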
Breaking the Linear-Memory Barrier:
Efficient MPC Graph Algorithms with Strongly Sublinear Memory

S = Õ(n^φ) local memory, M = Õ(n^(1-φ)) machines, poly log log n rounds

Conditional Lower Bound: Ω(log log n) rounds, Ghaffari, Kuhn, Uitto [FOCS'19]; poly log log n rounds is thus close to the best we can hope for.

Imposed Locality: machines see only a subset of the nodes, regardless of the sparsity of the graph.

Our Approach to Cope with Locality: enhance LOCAL algorithms with global communication
- exponentially faster than LOCAL algorithms due to shortcuts
- polynomially less memory than most MPC algorithms
Problem: Maximal Independent Set (MIS)

Independent Set: set of pairwise non-adjacent nodes
Maximal: no node can be added without violating independence
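For intuition about the object itself, a minimal sequential greedy construction (illustrative only; this is not the MPC algorithm of the talk) that always yields a maximal independent set:

```python
def greedy_mis(adj):
    """Scan nodes in a fixed order; add a node whenever none of its
    neighbors has been added. The result is independent and maximal."""
    mis = set()
    for v in sorted(adj):
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    return mis

# Path 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(greedy_mis(adj))  # {0, 2}
```

Note that {0, 2} is maximal but not maximum on this path ({0, 3} has the same size; on longer paths the greedy set need not be largest): maximality only means no further node fits.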
MIS: State of the Art

M machines, S memory per machine, M · S = Õ(n + m)

General graphs:
Strongly Sublinear Memory (S = Õ(n^φ), 0 ≤ φ < 1):
- Luby's Algorithm: O(log n)
- Ghaffari and Uitto [SODA'19]: Õ(√log n)
Linear Memory (S = Θ̃(n)):
- Ghaffari et al. [PODC'18]: O(log log n)
Superlinear Memory (S = Õ(n^(1+φ)), 0 < φ ≤ 1):
- Lattanzi et al. [SPAA'11]: O(1)

On trees:
Strongly Sublinear Memory:
- Luby's Algorithm: O(log n)
- Ghaffari and Uitto [SODA'19]: Õ(√log n)
- Our Result: O(log³ log n)
Linear Memory: trivial solution, O(1)
Superlinear Memory: trivial solution, O(1)
Our Result

O(log³ log n)-round MPC algorithm with S = Õ(n^φ) memory that w.h.p. computes an MIS on trees.

For comparison:
- Ghaffari and Uitto [SODA'19]: Õ(√log n) rounds with S = Õ(n^φ) memory
- Ghaffari et al. [PODC'18]: O(log log n) rounds with S = Õ(n) memory
- Conditional Ω(log log n)-round lower bound for S = Õ(n^φ): Ghaffari, Kuhn, and Uitto [FOCS'19]
Algorithm Outline

1) Shattering: break graph into small components
   i) Degree Reduction (main LOCAL technique)
   ii) LOCAL Shattering, Ghaffari [SODA'16], Beck [RSA'91]

2) Post-Shattering: solve problem on remaining components
   i) Gathering of Components
   ii) Local Computation
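The post-shattering phase can be pictured with the following centralized sketch (function and parameter names are my own, not from the talk); it assumes shattering has left only small components, each of which can be gathered onto one machine and solved there by any local method, here a simple greedy scan:

```python
from collections import deque

def post_shattering(adj, removed):
    """Gather each connected component of the not-yet-decided nodes
    (those outside `removed`) and compute a greedy MIS inside it."""
    mis, seen = set(), set(removed)
    for s in adj:
        if s in seen:
            continue
        seen.add(s)
        comp, queue = [], deque([s])
        while queue:                      # i) gathering of the component
            v = queue.popleft()
            comp.append(v)
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    queue.append(u)
        for v in comp:                    # ii) local computation
            if all(u not in mis for u in adj[v]):
                mis.add(v)
    return mis

# Path on 5 nodes, nothing removed by shattering:
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(post_shattering(adj, removed=set()))  # {0, 2, 4}
```

In the actual MPC algorithm the gathering is done in parallel across machines, which is why the shattering phase must first guarantee that every surviving component fits within the Õ(n^φ) local memory.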
Polynomial Degree Reduction: Subsample-and-Conquer

Subsample: subsample nodes independently.

Conquer: compute a random MIS in the subsampled graph
- gather connected components
- locally compute a random 2-coloring
- add a color class to the MIS
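One Subsample-and-Conquer step can be simulated centrally as follows (a sketch under my own naming; the real algorithm performs the gathering and coloring across machines): each node is kept independently with probability p, each sampled component of the induced subgraph is properly 2-colored by BFS, and one random color class is added to the independent set. On a tree the induced subgraph is a forest, so the BFS 2-coloring is proper and each color class is independent.

```python
import random
from collections import deque

def subsample_and_conquer(adj, p, rng):
    """One round on a forest: return an independent set found inside
    the randomly subsampled induced subgraph."""
    sampled = {v for v in adj if rng.random() < p}   # subsample
    color, chosen = {}, set()
    for s in sampled:
        if s in color:
            continue
        color[s] = 0                                  # gather + 2-color by BFS
        comp, queue = [], deque([s])
        while queue:
            v = queue.popleft()
            comp.append(v)
            for u in adj[v]:
                if u in sampled and u not in color:
                    color[u] = 1 - color[v]
                    queue.append(u)
        keep = rng.randrange(2)                       # pick a random color class
        chosen.update(v for v in comp if color[v] == keep)
    return chosen

# Path on 10 nodes; the returned set is independent whatever the coin flips,
# since adjacent sampled nodes always land in the same 2-colored component.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
I = subsample_and_conquer(adj, 0.5, random.Random(0))
assert all(u not in I for v in I for u in adj[v])
```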