  1. HDFS Under the Hood
     Sanjay Radia, Sradia@yahoo-inc.com
     Grid Computing, Hadoop
     Yahoo Inc.

  2. Outline
     • Overview of Hadoop, an open source project
     • Design of HDFS
     • Ongoing work

  3. Hadoop
     • Hadoop provides a framework for storing and processing petabytes of data using commodity hardware and storage
     • Storage: HDFS, HBase
     • Processing: MapReduce
     [stack diagram: Pig on top of MapReduce; MapReduce and HBase on top of HDFS]

  4. Hadoop, an Open Source Project
     • Implemented in Java
     • Apache Top Level Project
       – http://hadoop.apache.org/core/
       – Core (15 Committers)
         • HDFS
         • MapReduce
       – HBase (3 Committers)
     • Community of contributors is growing
       – Though mostly Yahoo for HDFS and MapReduce
       – Powerset is leading the effort for HBase
       – Facebook is in the process of open-sourcing Hive
       – Opportunities for contributing to a major open source project

  5. Hadoop Users
     • Clusters from 1 to 2k nodes
       – Yahoo, Last.fm, Joost, Facebook, A9, …
       – In production use at Yahoo in multiple 2k-node clusters
       – Initial tests show that 0.18 will scale to 4k nodes - being validated
     • Broad academic interest
       – IBM/Google cloud computing initiative
         • A 40-node cluster + Xen-based VMs …
       – CMU/Yahoo supercomputer cluster
         • M45 - a 500-node cluster
         • Looking into making this more widely available to other universities
     • Hadoop Summit hosted by Yahoo! in March 2008
       – Attracted over 400 attendees

  6. Hadoop Characteristics
     • Commodity HW + horizontal scaling
       – Add inexpensive servers with JBODs
       – Storage servers and their disks are not assumed to be highly reliable and available
         • Use replication across servers to deal with unreliable storage/servers
     • Metadata-data separation - simple design
       – Storage scales horizontally
       – Metadata scales vertically (today)
     • Slightly restricted file semantics
       – Focus is mostly sequential access
       – Single writers
       – No file locking features
     • Support for moving computation close to data
       – i.e. servers have 2 purposes: data storage and computation
     Simplicity of design is why a small team could build such a large system in the first place

  7. File Systems Background (1): Scaling
     [diagram: a file server scaled vertically scales both namespace ops and IO on one node; a distributed FS scales IO and storage horizontally across nodes while namespace ops still scale vertically]

  8. File Systems Background (2): Federating
     • Andrew
       – Federated mount of file systems on /afs
       – (Plus follow-on work on disconnected operations)
     • Newcastle Connection
       – /.. mounts
         • (plus remote Unix semantics)
     • Many others …

  9. File Systems Background (3)
     • Separation of metadata from data - 1978, 1980
       – "Separating Data from Function in a Distributed File System" (1978) by J. E. Israel, J. G. Mitchell, H. E. Sturgis
       – "A Universal File Server" (1980) by A. D. Birrell, R. M. Needham
     • + Horizontal scaling of storage nodes and IO bandwidth
       – Several startups building scalable NFS
       – Lustre
       – GFS
       – pNFS
     • + Commodity HW with JBODs, replication, non-POSIX semantics
       – GFS
     • + Computation close to the data
       – GFS/MapReduce

  10. Hadoop: Multiple FS Implementations
     FileSystem is the interface for accessing the file system. It has multiple implementations:
     • HDFS: "hdfs://"
     • Local file system: "file://"
     • Amazon S3: "s3://"
       – See Tom White's writeup on using Hadoop on EC2/S3
     • Kosmos: "kfs://"
     • Archive: "har://" - 0.18
     • You can set your default file system
       – so that your file names are simply /foo/bar/… (see the sketch below)
     • MapReduce uses the FileSystem interface - hence it can run on multiple file systems
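
     A minimal client-side sketch of the above (the namenode host and paths are made-up examples; "fs.default.name" is the configuration key used by the 0.x releases):

        import java.net.URI;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class FsSchemes {
          public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Set the default file system so that bare paths like /foo/bar resolve against HDFS.
            conf.set("fs.default.name", "hdfs://namenode.example.com:8020/");  // hypothetical host

            FileSystem hdfs  = FileSystem.get(conf);                          // default FS (hdfs://)
            FileSystem local = FileSystem.get(URI.create("file:///"), conf);  // local FS (file://)

            Path p = new Path("/foo/bar");                      // resolved against the default FS
            System.out.println("on HDFS:  " + hdfs.exists(p));
            System.out.println("on local: " + local.exists(p));
          }
        }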

  11. HDFS: Directories, Files & Blocks
     • Data is organized into files and directories
     • Files are divided into uniform-sized blocks and distributed across cluster nodes
     • Blocks are replicated to handle hardware failure
     • HDFS keeps checksums of data for corruption detection and recovery
     • HDFS (& FileSystem) exposes block placement so that computation can be migrated to data (see the sketch below)
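
     A rough sketch of how a client (or the MapReduce framework) asks where a file's blocks live (the path is a made-up example):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.BlockLocation;
        import org.apache.hadoop.fs.FileStatus;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class ShowBlockPlacement {
          public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/alice/input.txt");      // hypothetical file
            FileStatus stat = fs.getFileStatus(file);

            // One BlockLocation per block: its offset, length, and the datanodes holding replicas.
            BlockLocation[] blocks = fs.getFileBlockLocations(stat, 0, stat.getLen());
            for (BlockLocation b : blocks) {
              System.out.println("offset=" + b.getOffset()
                  + " len=" + b.getLength()
                  + " hosts=" + java.util.Arrays.toString(b.getHosts()));
            }
          }
        }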

  12. HDFS Architecture (1)
     • Files broken into blocks of 128MB (per-file configurable)
     • Single Namenode
       – Manages the file namespace
       – File name to list of blocks + location mapping
       – File metadata (i.e. "inode")
       – Authorization and authentication
       – Collects block reports from Datanodes on block locations
       – Replicates missing blocks
       – Implementation detail:
         • Keeps ALL namespace in memory, plus checkpoints & journal
           – 60M objects on a 16G machine (e.g. 20M files with 2 blocks each)
     • Datanodes (thousands) handle block storage
       – Clients access the blocks directly from datanodes
       – Datanodes periodically send block reports to the Namenode
       – Implementation detail:
         • Datanodes store the blocks using the underlying OS's files
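
     A back-of-envelope check of that memory figure (assuming essentially the whole heap is devoted to the namespace):

        20M files × (1 file object + 2 block objects) = 60M objects
        16 GB / 60M objects ≈ 270 bytes per object (inode or block), including JVM overhead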

  13. HDFS Architecture (2)
     [architecture diagram: clients send namespace RPCs such as create, getLocations, addBlock and getFileInfo to the Namenode, which maintains the metadata and a metadata log; clients read and write block data (b1…b6) directly on the Datanodes; Datanodes report blockReceived to the Namenode, pipeline writes to one another, and copy/replicate blocks among themselves]
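
     A sketch of the client-side calls that drive this flow - create/addBlock go to the Namenode while the bytes are pipelined to Datanode replicas, and reads fetch block locations first and then stream from a Datanode (the path is a made-up example):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class WriteThenRead {
          public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/tmp/demo.txt");                 // hypothetical path

            // create() asks the Namenode to create the file; blocks are allocated as data
            // arrives, and the bytes themselves are pipelined to the Datanode replicas.
            FSDataOutputStream out = fs.create(p, true /* overwrite */);
            out.writeBytes("hello hdfs\n");
            out.close();

            // open() fetches block locations from the Namenode; the data is then read
            // directly from the nearest Datanode replica.
            FSDataInputStream in = fs.open(p);
            byte[] buf = new byte[16];
            int n = in.read(buf);
            in.close();
            System.out.println(new String(buf, 0, n));
          }
        }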

  14. HDFS Architecture (3): Computation Close to the Data
     [diagram: a Hadoop cluster in which DFS blocks 1, 2 and 3 of a data file are stored on different nodes; a Map task runs on each node against its local block, and a Reduce task combines the map outputs into the results]
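
     One way to see this locality at the API level is to ask an input format for its splits and their preferred hosts - a rough sketch using the 0.18-era org.apache.hadoop.mapred API (the input path is a made-up example):

        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapred.FileInputFormat;
        import org.apache.hadoop.mapred.InputSplit;
        import org.apache.hadoop.mapred.JobConf;

        public class ShowSplitLocations {
          public static void main(String[] args) throws Exception {
            JobConf job = new JobConf(ShowSplitLocations.class);
            FileInputFormat.setInputPaths(job, new Path("/user/alice/input"));  // hypothetical input

            // Each split corresponds roughly to one DFS block; getLocations() returns the
            // datanodes holding that block, where the scheduler prefers to run the map task.
            for (InputSplit split : job.getInputFormat().getSplits(job, 1)) {
              System.out.println(split + " -> "
                  + java.util.Arrays.toString(split.getLocations()));
            }
          }
        }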

  15. Reads, Writes, Block Placement and Replication
     Reads
     • From the nearest replica
     Writes
     • Writes are pipelined to block replicas
     • Append is mostly in 0.18, will be completed in 0.19
       – Hardest part is dealing with failures of DNs holding the replicas during an append
         • Generation number to deal with failures of DNs during a write
       – Concurrent appends are likely to happen in the future
     • No plans to add random writes so far
     Replication and block placement
     • A file's replication factor can be changed dynamically (default 3) - see the sketch below
     • Block placement is rack aware
     • Block under-replication & over-replication is detected by the Namenode
       – triggers a copy or delete operation
     • Balancer application rebalances blocks to balance DN utilization
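
     Changing a file's replication factor is a single call on the FileSystem interface - a sketch (the path and the new factor are made-up examples):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class ChangeReplication {
          public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path p = new Path("/user/alice/important.dat");     // hypothetical file

            // Raise the replication factor from the default (3) to 5; the Namenode detects the
            // under-replication and schedules extra copies. Lowering it triggers deletes instead.
            boolean accepted = fs.setReplication(p, (short) 5);
            System.out.println("replication change accepted: " + accepted);
          }
        }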

  16. Details (1): The Protocols
     • Client to Namenode
       – RPC
     • Client to Datanode
       – Streaming writes/reads
         • On reads, data is shipped directly from the OS
       – Considering RPC for pread(offset, bytes) - see the sketch below
     • Datanode to Namenode
       – RPC (heartbeat, blockReport …)
       – RPC reply is the command from Namenode to Datanode (copy block …)
     • RPC
       – Not cross-language, but that was the goal (hence not RMI …)
       – 0.18 RPC is quite good
         • solves timeouts and manages queues/buffers and spikes well
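
     The pread in question is the positioned read that the client API already exposes on FSDataInputStream; the open question on the slide is whether to carry it over RPC rather than the streaming protocol. A client-side sketch (the path and offsets are made-up examples):

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataInputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class PositionedRead {
          public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FSDataInputStream in = fs.open(new Path("/user/alice/big.dat"));  // hypothetical file

            // Positioned read: fetch 4 KB starting at byte offset 1 MB without moving the
            // stream's own position; the client reads from a datanode holding that block.
            byte[] buf = new byte[4096];
            int n = in.read(1024L * 1024L, buf, 0, buf.length);
            System.out.println("read " + n + " bytes at offset 1 MB");
            in.close();
          }
        }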

  17. Current & Near-Term Work at Yahoo! (1)
     • HDFS
       – Improved RPC - 0.18 - made a significant improvement in scaling
       – Clean up the interfaces - 0.18, 0.19
       – Improved reliability & availability - 0.17, 0.18, 0.19
         • Current uptime: 98.5% (includes planned downtime)
         • Data loss: a few data blocks lost due to bugs and corruption; never had any fsimage corruption
       – Append - 0.18, 0.19
       – Authorization (POSIX-like permissions) - 0.16
       – Authentication - in progress, 0.20?
       – Performance - 0.17, 0.18, 0.19, …
         • 0.16 performance:
           – Reads within a rack: 4 threads: 110MB/s, 1 thread: 40MB/s (buffer copies fixed in 0.17)
           – Writes within a rack: 4 threads: 45MB/s, 1 thread: 21MB/s
         • Goal: read/write at the speed of the network/disk with low CPU utilization
       – NN scaling
       – NN replication and HA
       – Protocol/interface versioning
       – Language-agnostic RPC

  18. Current & Near-Term Work at Yahoo! (2)
     • MapReduce
       – New API using context objects - 0.19?
       – New resource manager/scheduler framework
         • Main goal - release resources not being used (e.g. during the reduce phase)
         • Pluggable schedulers
           – Yahoo - queues with guaranteed capacity + priorities + user quotas
           – Facebook - queue per user?

  19. Scaling the Name Service: Options
     [chart, not to scale: options plotted against number of clients (1x, 4x, 20x, 50x, 100x) and number of names (20M, 60M, 400M, 1000M, 2000M+). Roughly in order of increasing scale: all NS in memory; partial NS in memory (cache) plus archives; NS in memory in the malloc heap plus RO replicas; RO replicas plus finer-grain locks and separating block maps from the NN; mountable NS volumes with a mountable volume catalog and automounter (good isolation properties); dynamic partitioning with partial NS in memory]

  20. Scaling the Name Service: Mountable Namespace Volumes with "Automounter"
