Pregelix: Big(ger) Graph Analytics on a Dataflow Engine


  1. Pregelix: Big(ger) Graph Analytics on A Dataflow Engine Yingyi Bu (UC Irvine) Joint work with: Vinayak Borkar (UC Irvine) , Michael J. Carey (UC Irvine), Tyson Condie (UCLA), Jianfeng Jia (UC Irvine)

  2. Outline ● Introduction ● Pregel Semantics ● The Pregel Logical Plan ● The Pregelix System ● Experimental Results ● Related Work ● Conclusions

  3. Introduction Big Graphs are becoming common ○ web graph ○ social network ○ ......

  4. Introduction ● How Big are Big Graphs? ○ Web: 8.53 Billion pages in 2012 ○ Facebook active users: 1.01 Billion ○ de Bruijn graph: 3 Billion nodes ○ ...... ● Weapons for mining Big Graphs ○ Pregel (Google) ○ Giraph (Facebook, LinkedIn, Twitter, etc.) ○ Distributed GraphLab (CMU) ○ GraphX (Berkeley)

  5. Programming Model ● Think like a vertex ○ receive messages ○ update states ○ send messages

  6. Programming Model ● Vertex

    public abstract class Vertex<I extends WritableComparable, V extends Writable,
                                 E extends Writable, M extends Writable>
            implements Writable {
        public abstract void compute(Iterator<M> incomingMessages);
        .......
    }

● Helper methods ○ sendMsg(I vertexId, M msg) ○ voteToHalt() ○ getSuperstep()
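
To make the slide's API concrete, here is a minimal PageRank-style compute() sketch. It uses the Vertex class and helper methods shown above, plus hypothetical accessors (getVertexValue, setVertexValue, getNumOutEdges, sendMsgToAllEdges) that are illustrative names, not necessarily the exact Pregelix API.

    import java.util.Iterator;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.FloatWritable;
    import org.apache.hadoop.io.LongWritable;

    // Sketch only: getVertexValue/setVertexValue, getNumOutEdges, and
    // sendMsgToAllEdges are assumed helper names for illustration.
    public class PageRankVertex extends Vertex<LongWritable, DoubleWritable,
            FloatWritable, DoubleWritable> {
        private static final int MAX_SUPERSTEP = 30;

        @Override
        public void compute(Iterator<DoubleWritable> incomingMessages) {
            if (getSuperstep() >= 1) {
                double sum = 0.0;
                while (incomingMessages.hasNext()) {            // receive messages
                    sum += incomingMessages.next().get();
                }
                setVertexValue(new DoubleWritable(0.15 + 0.85 * sum)); // update state
            }
            if (getSuperstep() < MAX_SUPERSTEP) {
                double share = getVertexValue().get() / getNumOutEdges();
                sendMsgToAllEdges(new DoubleWritable(share));   // send messages
            } else {
                voteToHalt(); // sleep until a new message re-activates this vertex
            }
        }
    }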

  7. More APIs ● Message Combiner ○ Combine messages ○ Reduce network traffic ● Global Aggregator ○ Aggregate statistics over all live vertices ○ Done for each iteration ● Graph Mutations ○ Add vertex ○ Delete vertex ○ A conflict resolution function
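
As an example of the combiner hook, messages carrying partial PageRank contributions to the same destination vertex can be summed before crossing the network. The sketch below is illustrative only; the class name and the init/step/finish protocol are assumptions, not the exact Pregelix combiner API.

    import org.apache.hadoop.io.DoubleWritable;

    // Hypothetical sum combiner: folds all messages addressed to one
    // destination vertex into a single partial sum, cutting network traffic.
    public class DoubleSumCombiner {
        private double partialSum;

        public void init() {                    // once per destination vid
            partialSum = 0.0;
        }

        public void step(DoubleWritable msg) {  // fold in one message
            partialSum += msg.get();
        }

        public DoubleWritable finish() {        // emit the single combined message
            return new DoubleWritable(partialSum);
        }
    }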

  8. Pregel Semantics ● Bulk-synchronous ○ A global barrier between iterations ● Compute invocation ○ Once per active vertex in each superstep ○ A halted vertex is activated when receiving messages ● Global halting ○ Each vertex is halted ○ No messages are in flight ● Graph mutations ○ Partial ordering of operations ○ User-defined resolve function
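
These rules can be restated as a driver loop. The following is Java-flavored pseudocode of one worker's view, not actual Pregelix code; isHalted, hasIncomingMessages, getIncomingMessages, globalBarrier, and noMessagesInFlight are assumed helper names.

    // Illustrative BSP driver per the semantics above (not Pregelix code).
    void runToGlobalHalt(Iterable<Vertex> localPartition) {
        long superstep = 0;
        while (true) {
            boolean anyActive = false;
            for (Vertex v : localPartition) {
                // a halted vertex is re-activated by an incoming message
                if (!v.isHalted() || v.hasIncomingMessages()) {
                    v.compute(v.getIncomingMessages());
                    anyActive |= !v.isHalted();
                }
            }
            globalBarrier(); // bulk-synchronous: all workers end the superstep together
            if (!anyActive && noMessagesInFlight()) {
                return;      // global halting: every vertex halted, no messages in flight
            }
            superstep++;
        }
    }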

  9. Process-centric runtime [Diagram: a master (superstep: 3, halt: false) coordinates worker-1 and worker-2 via control signals. Each worker holds Vertex records { id, halt, value, edges }, and workers exchange messages of the form <id, payload>, e.g., <2, 3.0>, <3, 1.0>, <4, 3.0>, <5, 1.0>.]

  10. Issues and Opportunities ● Out-of-core support “I’m trying to run the sample connected components algorithm on a large data set on a cluster, but I get a “java.lang.OutOfMemoryError: Java heap space” error.” 26 similar threads on the Giraph-users mailing list during the past year!

  11. Issues and Opportunities ● Physical flexibility ○ PageRank, SSSP, CC, Triangle Counting ○ Web graph, social network, RDF graph ○ 8-machine school cluster, 200-machine Facebook data center ● One size fits all?

  12. Issues and Opportunities ● Software simplicity [Diagram: Pregel, Giraph, GraphLab, and Hama each rebuild the same stack: vertex/map/msg data structures, task scheduling, memory management, message delivery, network management, ......]

  13. The Pregelix Approach ● Relational schema: Vertex(vid, halt, value, edges), Msg(vid, payload), GS(halt, aggregate, superstep) [Diagram: example Msg and Vertex tuples joined on Msg.vid = Vertex.vid, pairing each message payload with its destination vertex's halt, value, and edges; a vertex with no incoming message joins with a NULL payload, and a message sent to a non-existent vertex (vid 5) joins with NULL vertex fields.]

  14. Pregel UDFs ● compute ○ Executed at each active vertex in each superstep ● combine ○ Aggregation function for messages ● aggregate ○ Aggregate function for the global states ● resolve ○ Used to resolve graph mutations
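
Seen together, the four UDFs form the whole per-job contract. The interface below sketches their shapes; the names and generics are illustrative, not the exact Pregelix signatures.

    import java.util.Iterator;
    import java.util.List;

    // Illustrative shapes of the four Pregel UDFs (not the exact Pregelix API).
    public interface PregelJob<I, V, E, M, A> {
        void compute(Vertex<I, V, E, M> vertex,
                     Iterator<M> messages);          // once per active vertex per superstep
        M combine(M m1, M m2);                       // associative message aggregation
        A aggregate(A partial,
                    Vertex<I, V, E, M> vertex);      // global statistics, each iteration
        Vertex<I, V, E, M> resolve(
                List<Vertex<I, V, E, M>> mutations); // settle conflicting graph mutations
    }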

  15. Logical Plan [Dataflow diagram, bottom-up: Msg_i (M) is full-outer-joined with Vertex_i (V) on M.vid = V.vid; the filter (V.halt = false || M.payload != NULL) keeps live vertices and vertices with incoming messages; a UDF call (compute) runs on each surviving tuple. Flow data: D2 = Vertex tuples, feeding Vertex_i+1; D3 = Msg tuples; D4, D5, D6 = other compute outputs (next slide); D7 = Msg tuples after combination, produced by a group-by on vid with combine and feeding Msg_i+1.]
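
In relational terms, one superstep of this plan can be written roughly as the following sketch, where \gamma is a group-by with an aggregate, \bowtie^{full} is a full outer join, and In_i and RawMsg_i are names introduced here for the intermediate flows:

\[
\mathit{In}_i = \sigma_{V.halt=\mathrm{false} \,\lor\, M.payload \neq \mathrm{NULL}}
\bigl(\, \mathit{Msg}_i \;\bowtie^{\mathrm{full}}_{M.vid = V.vid}\; \mathit{Vertex}_i \,\bigr)
\]
\[
(\mathit{Vertex}_{i+1},\, \mathit{RawMsg}_i) = \mathbf{compute}(\mathit{In}_i),
\qquad
\mathit{Msg}_{i+1} = {}_{vid}\gamma_{\mathbf{combine}}(\mathit{RawMsg}_i)
\]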

  16. Logical Plan [Two more dataflow diagrams. Global state: the UDF call (compute) also emits D4 (each vertex's contribution to the global halting state), which Agg(bool-and) folds into D8 (the global halt state), and D5 (values for aggregate), which Agg(aggregate) folds into D9 (the global aggregate value); superstep = G.superstep + 1 over GS_i (G) yields D10 (the increased superstep); D8, D9, and D10 form GS_i+1. Graph mutations: D6 (Vertex tuples for deletions and insertions) is grouped by vid with resolve and folded into Vertex_i+1.]

  17. The Pregelix System [Diagram: Pregel's hand-built stack (vertex/map/msg data structures, task scheduling, memory management, message delivery, network management) maps onto Pregelix physical plans running on a general-purpose parallel dataflow engine, which supplies operators, access methods, record/index management, task scheduling, buffer management, data exchanging, and connection management.]

  18. The Runtime ● Runtime choice? Hyracks vs. Hadoop ● The Hyracks data-parallel execution engine ○ Out-of-core operators ○ Connectors ○ Access methods ○ User-configurable task scheduling ○ Extensibility

  19. Parallelism [Diagram: the Vertex and Msg relations are hash-partitioned by vid across workers. On worker-1 and worker-2, each local Msg-i partition is joined with the co-located Vertex-i partition on vid = vid, and the output messages (output-Msg-1, output-Msg-2) are repartitioned to the workers that own their destination vids.]

  20. Physical Choices ● Vertex storage ○ B-Tree ○ LSM B-Tree ● Group-by ○ Pre-clustered group-by ○ Sort-based group-by ○ HashSort group-by ● Data redistribution ○ m-to-n merging partitioning connector ○ m-to-n partitioning connector ● Join ○ Index full outer join ○ Index left outer join

  21. Data Storage ● Vertex ○ Partitioned B-tree or LSM B-tree ● Msg ○ Partitioned local files, sorted ● GS ○ Stored on HDFS ○ Cached in each worker

  22. Physical Plan: Message Combination [Diagram: four physical strategies, each applying a vid group-by with combine both before and after message redistribution. Sort-Groupby-M-to-N-Partitioning and HashSort-Groupby-M-to-N-Partitioning use the m-to-n partitioning connector with sort-based or HashSort group-bys on both sides; Sort-Groupby-M-to-N-Merge-Partitioning and HashSort-Groupby-M-to-N-Merge-Partitioning use the m-to-n partitioning merging connector, which lets the receiving side run a cheap pre-clustered group-by.]

  23. Physical Plan: Message Delivery [Diagram: two join plans. Left: an index full outer join of Msg_i (M) with Vertex_i (V) on M.vid = V.vid, followed by the filter (V.halt = false || M.payload != NULL) and the UDF call (compute), emitting D1 -- D6. Right: an index left outer join in which Msg_i is first merged (choose()) with a list of still-active vertex ids Vid_i (I) on M.vid = I.vid, then joined with Vertex_i on M.vid = V.vid and fed to the UDF call (compute); a function call (NullMsg) emits Vid_i+1 entries for vertices with halt = false.]

  24. Caching ● Pregel, Giraph, and GraphLab all have caches for this kind of iterative job. What does Pregelix do for caching? ● Iteration-aware (sticky) scheduling ○ Location constraints ● Caching of invariant data ○ B-tree buffer pool -- customized flushing policy: never flush dirty pages ○ File system cache -- free

  25. Experimental Results ● Setup ○ Machines: a UCI cluster of 32 machines, each with 4 cores, 8GB memory, and 2 disk drives ○ Datasets ■ Yahoo! webmap (1,413,511,393 vertices, adjacency list, ~70GB) and its samples ■ The Billions of Tuples Challenge dataset (172,655,479 vertices, adjacency list, ~17GB), its samples, and its scale-ups ○ Giraph ■ Latest trunk (revision 770) ■ 4 vertex computation threads, 8GB JVM heap

  26. Execution Time [Charts: in-memory and out-of-core cases.]

  27. Execution Time [Charts: in-memory and out-of-core cases.]

  28. Execution Time [Charts: in-memory and out-of-core cases.]

  29. Parallel Speedup

  30. Parallel Scale-up

  31. Throughput

  32. Plan Flexibility [Charts: in-memory and out-of-core cases; the best physical plan beats the worst by up to 15x.]

  33. Software Simplicity ● Lines-of-Code ○ Giraph: 32,197 ○ Pregelix: 8,514

  34. More Systems

  35. More Systems

  36. Related Work ● Parallel Data Management ○ Gamma, GRACE, Teradata ○ Stratosphere (TU Berlin) ○ REX (UPenn) ○ AsterixDB (UCI) ● Big Graph Processing Systems ○ Pregel (Google) ○ Giraph (Facebook, LinkedIn, Twitter, etc.) ○ Distributed GraphLab (CMU) ○ GraphX (Berkeley) ○ Hama (Sogou, etc.) --- Too slow!

  37. Conclusions ● Pregelix offers: ○ Transparent out-of-core support ○ Physical flexibility ○ Software simplicity ● We are building Pregelix as an open-source production system, rather than just a research prototype: ○ http://pregelix.ics.uci.edu

  38. Q & A
