Accelerating Data Management and Processing on Modern Clusters with RDMA-Enabled Interconnects. Keynote Talk at ADMS 2014 by Dhabaleswar K. (DK) Panda, The Ohio State University. E-mail: panda@cse.ohio-state.edu


  1. Accelerating Data Management and Processing on Modern Clusters with RDMA-Enabled Interconnects. Keynote Talk at ADMS 2014 by Dhabaleswar K. (DK) Panda, The Ohio State University. E-mail: panda@cse.ohio-state.edu, http://www.cse.ohio-state.edu/~panda

  2. Introduction to Big Data Applications and Analytics
     • Big Data has become one of the most important elements of business analytics
     • Provides groundbreaking opportunities for enterprise information management and decision making
     • The amount of data is exploding; companies are capturing and digitizing more information than ever
     • The rate of information growth appears to be exceeding Moore's Law

  3. 4V Characteristics of Big Data
     • Commonly accepted 3Vs of Big Data: Volume, Velocity, Variety
       (Michael Stonebraker: Big Data Means at Least Three Different Things, http://www.nist.gov/itl/ssd/is/upload/NIST-stonebraker.pdf)
     • 4/5Vs of Big Data: the 3Vs plus Veracity and Value

  4. Velocity of Big Data – How Much Data Is Generated Every Minute on the Internet?
     • The global Internet population grew 6.59% from 2010 to 2011 and now represents 2.1 billion people
     • Source: http://www.domo.com/blog/2012/06/how-much-data-is-created-every-minute

  5. Data Management and Processing in Modern Datacenters
     • Substantial impact on designing and utilizing modern data management and processing systems in multiple tiers
       – Front-end data accessing and serving (Online): Memcached + DB (e.g., MySQL), HBase
       – Back-end data analytics (Offline): HDFS, MapReduce, Spark

  6. Overview of Web 2.0 Architecture and Memcached
     • Three-layer architecture of Web 2.0: Web Servers, Memcached Servers, Database Servers
     • Memcached is a core component of the Web 2.0 architecture

  7. Memcached Architecture
     • Distributed caching layer
       – Aggregates spare memory from multiple nodes
       – General purpose
     • Typically used to cache database queries and results of API calls
     • Scalable model, but typical usage is very network intensive (a short usage sketch follows)
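To make the caching pattern concrete, here is a minimal cache-aside sketch in C using libmemcached, the standard sockets-based client. The server address, key, value, and TTL are illustrative placeholders, and the RDMA-based Memcached designs discussed later in the talk are not shown here, only the generic set/get usage they accelerate.

    /* Minimal cache-aside sketch with libmemcached (sockets-based client).
       Build: gcc memcached_demo.c -lmemcached */
    #include <libmemcached/memcached.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Connect to a Memcached server (address/port are placeholders). */
        memcached_st *memc = memcached_create(NULL);
        memcached_server_add(memc, "127.0.0.1", 11211);

        /* Cache the (assumed) result of an expensive database query. */
        const char *key = "user:42:profile";
        const char *val = "{\"name\": \"alice\"}";
        memcached_return_t rc = memcached_set(memc, key, strlen(key),
                                              val, strlen(val),
                                              (time_t)300 /* TTL in seconds */, 0);
        if (rc != MEMCACHED_SUCCESS)
            fprintf(stderr, "set failed: %s\n", memcached_strerror(memc, rc));

        /* Subsequent reads are served from the cache, not the database. */
        size_t len = 0;
        uint32_t flags = 0;
        char *cached = memcached_get(memc, key, strlen(key), &len, &flags, &rc);
        if (cached != NULL) {
            printf("cache hit: %.*s\n", (int)len, cached);
            free(cached);
        }

        memcached_free(memc);
        return 0;
    }

On a read, the application consults Memcached first and falls back to the database only on a miss, which is why a busy deployment generates a large volume of small-message network traffic.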

  8. HBase Overview
     • Apache Hadoop Database (http://hbase.apache.org/)
     • Semi-structured database, which is highly scalable
     • Integral part of many datacenter applications
       – e.g., Facebook Social Inbox
     • Developed in Java for platform independence and portability
     • Uses sockets for communication!
     (HBase Architecture diagram)

  9. Hadoop Distributed File System (HDFS)
     • Primary storage of Hadoop; highly reliable and fault-tolerant
     • Adopted by many reputed organizations
       – e.g., Facebook, Yahoo!
     • NameNode: stores the file system namespace
     • DataNode: stores data blocks
     • Developed in Java for platform independence and portability
     • Uses sockets for communication! (a small client sketch follows)
     (HDFS Architecture diagram: the client talks to the NameNode and DataNodes over RPC)
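To make the client path concrete (NameNode for namespace metadata, DataNodes for block data), the sketch below writes and re-reads a small file through libhdfs, the C binding shipped with Hadoop. The connection string and path are placeholders and error handling is kept minimal.

    /* Sketch: write and read a file through libhdfs (Hadoop's C binding).
       Requires a running HDFS and a JVM; link against -lhdfs. */
    #include <hdfs.h>      /* ships with the Hadoop distribution */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* "default" picks up fs.defaultFS (the NameNode) from the configuration. */
        hdfsFS fs = hdfsConnect("default", 0);
        if (fs == NULL) { fprintf(stderr, "cannot connect to HDFS\n"); return 1; }

        /* Writes stream block data to DataNodes; the NameNode tracks metadata only.
           O_WRONLY creates or overwrites the file. */
        const char *path = "/tmp/adms_demo.txt";
        hdfsFile out = hdfsOpenFile(fs, path, O_WRONLY, 0, 0, 0);
        const char *msg = "hello hdfs\n";
        hdfsWrite(fs, out, msg, (tSize)strlen(msg));
        hdfsCloseFile(fs, out);

        /* Read the data back from the DataNodes. */
        char buf[64] = {0};
        hdfsFile in = hdfsOpenFile(fs, path, O_RDONLY, 0, 0, 0);
        tSize n = hdfsRead(fs, in, buf, sizeof(buf) - 1);
        printf("read %d bytes: %s", (int)n, buf);
        hdfsCloseFile(fs, in);

        hdfsDisconnect(fs);
        return 0;
    }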

  10. Data Movement in Hadoop MapReduce (bulk data transfer and disk operations)
     • Map and Reduce tasks carry out the total job execution
       – Map tasks read from HDFS, operate on the data, and write the intermediate data to local disk
       – Reduce tasks fetch this data through the shuffle from TaskTrackers, operate on it, and write the results to HDFS
     • Communication in the shuffle phase uses HTTP over Java sockets

  11. Spark Architecture Overview (http://spark.apache.org)
     • An in-memory data-processing framework
       – Iterative machine learning jobs
       – Interactive data analytics
       – Scala-based implementation
       – Standalone, YARN, Mesos
     • Scalable and communication intensive
       – Wide dependencies between Resilient Distributed Datasets (RDDs)
       – MapReduce-like shuffle operations to repartition RDDs
       – Sockets-based communication

  12. Presentation Outline
     • Overview of Modern Clusters, Interconnects and Protocols
     • Challenges for Accelerating Data Management and Processing
     • The High-Performance Big Data (HiBD) Project
     • RDMA-based design for Memcached and HBase
       – RDMA-based Memcached
       – Case study with OLTP
       – SSD-assisted hybrid Memcached
       – RDMA-based HBase
     • RDMA-based designs for Apache Hadoop and Spark
       – Case studies with HDFS, MapReduce, and Spark
       – RDMA-based MapReduce on HPC Clusters with Lustre
     • Ongoing and Future Activities
     • Conclusion and Q&A

  13. High-End Computing (HEC): PetaFlop to ExaFlop
     • 100 PFlops in 2015
     • 1 EFlops in 2018?
     • Expected to have an ExaFlop system in 2020-2022!

  14. Trends for Commodity Computing Clusters in the Top500 List (http://www.top500.org)
     (Chart: number of clusters and percentage of clusters in the Top500 over time)

  15. High-End Computing (HEC)
     • High-End Computing (HEC) is growing dramatically
       – High Performance Computing
       – Big Data Computing
     • Technology advancements
       – Multi-core/many-core technologies and accelerators
       – Remote Direct Memory Access (RDMA)-enabled networking (InfiniBand and RoCE)
       – Solid State Drives (SSDs) and Non-Volatile Random-Access Memory (NVRAM)
       – Accelerators (NVIDIA GPGPUs and Intel Xeon Phi)
     (Pictured: Tianhe-2 (#1), Stampede (#6), Titan (#2), Tianhe-1A (#10))

  16. Overview of High Performance Interconnects
     • High-Performance Computing (HPC) has adopted advanced interconnects and protocols
       – InfiniBand
       – 10 Gigabit Ethernet/iWARP
       – RDMA over Converged Enhanced Ethernet (RoCE)
     • Very good performance
       – Low latency (a few microseconds)
       – High bandwidth (100 Gb/s with dual FDR InfiniBand)
       – Low CPU overhead (5-10%)
     • The OpenFabrics software stack (www.openfabrics.org) with IB, iWARP and RoCE interfaces is driving HPC systems
     • Many such systems in the Top500 list

  17. All Interconnects and Protocols in the OpenFabrics Stack
     (Diagram: applications and middleware reach the network either through the Sockets interface, over Ethernet with TCP/IP, 10/40 GigE-TOE, IPoIB, SDP, or RSockets, or through the Verbs interface, over iWARP, RoCE, or native InfiniBand, with the corresponding adapters and switches underneath; a device-enumeration sketch follows)
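One practical point the stack makes is that user-space applications see the same Verbs interface whether the adapter underneath speaks native InfiniBand, RoCE, or iWARP. The following is a minimal sketch (not from the talk) that enumerates the RDMA-capable devices visible through libibverbs and prints a few device limits; the output depends entirely on the installed hardware.

    /* Sketch: list RDMA-capable devices through the common Verbs interface.
       The same libibverbs calls work over InfiniBand, RoCE, and iWARP adapters.
       Build: gcc list_devices.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>

    int main(void) {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (devs == NULL || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (ctx == NULL)
                continue;

            struct ibv_device_attr attr;
            if (ibv_query_device(ctx, &attr) == 0)
                printf("%s: max_qp=%d max_cq=%d max_mr_size=%llu\n",
                       ibv_get_device_name(devs[i]),
                       attr.max_qp, attr.max_cq,
                       (unsigned long long)attr.max_mr_size);

            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }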

  18. Trends of Networking Technologies in TOP500 Systems
     • Percentage share of InfiniBand is steadily increasing
     (Chart: Interconnect Family – Systems Share)

  19. Large-scale InfiniBand Installations
     • 223 IB clusters (44.3%) in the June 2014 Top500 list (http://www.top500.org)
     • Installations in the Top 50 (25 systems):
       – 519,640 cores (Stampede) at TACC (7th)
       – 62,640 cores (HPC2) in Italy (11th)
       – 147,456 cores (SuperMUC) in Germany (12th)
       – 76,032 cores (Tsubame 2.5) at Japan/GSIC (13th)
       – 194,616 cores (Cascade) at PNNL (15th)
       – 110,400 cores (Pangea) at France/Total (16th)
       – 96,192 cores (Pleiades) at NASA/Ames (21st)
       – 73,584 cores (Spirit) at USA/Air Force (24th)
       – 77,184 cores (Curie thin nodes) at France/CEA (26th)
       – 65,320 cores (iDataPlex DX360M4) at Germany/Max-Planck (27th)
       – 120,640 cores (Nebulae) at China/NSCS (28th)
       – 72,288 cores (Yellowstone) at NCAR (29th)
       – 70,560 cores (Helios) at Japan/IFERC (30th)
       – 138,368 cores (Tera-100) at France/CEA (35th)
       – 222,072 cores (QUARTETTO) in Japan (37th)
       – 53,504 cores (PRIMERGY) in Australia (38th)
       – 77,520 cores (Conte) at Purdue University (39th)
       – 44,520 cores (Spruce A) at AWE in UK (40th)
       – 48,896 cores (MareNostrum) at Spain/BSC (41st)
       – and many more!

  20. Open Standard InfiniBand Networking Technology
     • Introduced in Oct 2000
     • High performance data transfer
       – Interprocessor communication and I/O
       – Low latency (<1.0 microsec), high bandwidth (up to 12.5 GigaBytes/sec -> 100 Gbps), and low CPU utilization (5-10%)
     • Flexibility for LAN and WAN communication
     • Multiple transport services
       – Reliable Connection (RC), Unreliable Connection (UC), Reliable Datagram (RD), Unreliable Datagram (UD), and Raw Datagram
       – Provides flexibility to develop upper layers
     • Multiple operations
       – Send/Recv
       – RDMA Read/Write (a one-sided RDMA Write sketch follows this slide)
       – Atomic operations (very unique): enable high-performance and scalable implementations of distributed locks, semaphores, and collective communication operations
     • Leading to big changes in designing HPC clusters, file systems, cloud computing systems, grid computing systems, ...
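To make the RDMA Read/Write semantics concrete, here is a hedged libibverbs sketch of posting a one-sided RDMA Write. It assumes the hard part has already happened: qp is a connected RC queue pair, mr covers the registered local buffer, and the peer's buffer address and rkey were exchanged out of band (e.g., over a TCP bootstrap connection). The function name and arguments are illustrative only.

    /* Sketch: post a one-sided RDMA Write on an already-connected RC QP. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int rdma_write_example(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                           uint32_t len, uint64_t remote_addr, uint32_t rkey)
    {
        /* Local source buffer described as a single scatter/gather element. */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = len,
            .lkey   = mr->lkey,
        };

        struct ibv_send_wr wr;
        struct ibv_send_wr *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id      = 1;
        wr.sg_list    = &sge;
        wr.num_sge    = 1;
        wr.opcode     = IBV_WR_RDMA_WRITE;     /* one-sided: no receive WQE needed */
        wr.send_flags = IBV_SEND_SIGNALED;     /* ask for a local completion (CQE) */
        wr.wr.rdma.remote_addr = remote_addr;  /* destination address on the peer  */
        wr.wr.rdma.rkey        = rkey;         /* peer's memory region key         */

        /* The adapter places the data directly into remote memory; the remote
           CPU is not involved in the transfer. */
        return ibv_post_send(qp, &wr, &bad_wr);
    }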

  21. Communication in the Channel Semantics (Send/Receive Model)
     • The processor is involved only to (sketched in Verbs below):
       1. Post a receive WQE
       2. Post a send WQE
       3. Pull completed CQEs from the CQ
     • The receive WQE contains information on the receive buffer (multiple non-contiguous segments); incoming messages have to be matched to a receive WQE to know where to place the data
     • The send WQE contains information about the send buffer (multiple non-contiguous segments)
     • The transfer itself, including the hardware ACK, is handled by the InfiniBand devices
     (Diagram: processors and memory segments on both sides, QPs with send/receive queues and CQs, connected through InfiniBand devices)
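The three processor steps above map directly onto Verbs calls. Below is a minimal sketch, assuming an already-connected QP (qp), its completion queue (cq), and registered send/receive buffers (send_mr, recv_mr); the function name is illustrative. The adapters move the data, match the incoming message to the posted receive WQE, and generate the hardware-level ACK; the CPU only posts work requests and polls the CQ.

    /* Sketch of channel (Send/Recv) semantics with libibverbs: post a receive
       WQE, post a send WQE, then poll the CQ for completions. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <string.h>

    int send_recv_example(struct ibv_qp *qp, struct ibv_cq *cq,
                          struct ibv_mr *recv_mr, struct ibv_mr *send_mr)
    {
        /* 1. Post a receive WQE describing where an incoming message may land. */
        struct ibv_sge rsge = {
            .addr   = (uintptr_t)recv_mr->addr,
            .length = (uint32_t)recv_mr->length,
            .lkey   = recv_mr->lkey,
        };
        struct ibv_recv_wr rwr, *bad_rwr = NULL;
        memset(&rwr, 0, sizeof(rwr));
        rwr.wr_id   = 1;
        rwr.sg_list = &rsge;
        rwr.num_sge = 1;
        if (ibv_post_recv(qp, &rwr, &bad_rwr))
            return -1;

        /* 2. Post a send WQE describing the outgoing buffer. */
        struct ibv_sge ssge = {
            .addr   = (uintptr_t)send_mr->addr,
            .length = (uint32_t)send_mr->length,
            .lkey   = send_mr->lkey,
        };
        struct ibv_send_wr swr, *bad_swr = NULL;
        memset(&swr, 0, sizeof(swr));
        swr.wr_id      = 2;
        swr.sg_list    = &ssge;
        swr.num_sge    = 1;
        swr.opcode     = IBV_WR_SEND;          /* consumed by a receive WQE on the peer */
        swr.send_flags = IBV_SEND_SIGNALED;
        if (ibv_post_send(qp, &swr, &bad_swr))
            return -1;

        /* 3. Pull completed CQEs from the CQ; the adapters did the transfer,
              the matching, and the hardware ACK. */
        struct ibv_wc wc;
        while (ibv_poll_cq(cq, 1, &wc) == 0)
            ;  /* busy-poll until one completion arrives */
        return (wc.status == IBV_WC_SUCCESS) ? 0 : -1;
    }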
