Hadoop Distributed File System (HDFS) - HDFS Overview (PowerPoint PPT presentation, 10/05/2018)

  1. Hadoop Distributed File System (HDFS) (10/05/2018)

  2. HDFS Overview
     - A distributed file system
     - Built on the architecture of the Google File System (GFS)
     - Shares a similar architecture with many other common distributed storage engines, such as Amazon S3 and Microsoft Azure
     - HDFS is a stand-alone storage engine and can be used in isolation from the query processing engine

  3. HDFS Architecture [diagram: one name node and several data nodes, each data node storing multiple blocks (B)]

  4. What is where?
     - Name node: file and directory names; block ordering and locations; capacity and architecture of the data nodes
     - Data nodes: block data; location of the name node
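The split above can be sketched with toy data structures (illustrative only; these are not Hadoop classes): the name node maps files to ordered block lists and blocks to locations, while a data node stores only raw block bytes.

```java
import java.util.*;

// Toy sketch of the name node / data node metadata split.
class NameNodeMeta {
    // file path -> ordered list of block IDs (block ordering lives here)
    Map<String, List<Integer>> fileBlocks = new HashMap<>();
    // block ID -> data nodes holding a replica (block locations live here)
    Map<Integer, List<String>> blockLocations = new HashMap<>();

    // Answer "which data nodes hold block i of this file?"
    List<String> locate(String path, int blockIndex) {
        int blockId = fileBlocks.get(path).get(blockIndex);
        return blockLocations.get(blockId);
    }
}

class DataNodeStore {
    // block ID -> raw bytes; a data node knows nothing about file names
    Map<Integer, byte[]> blocks = new HashMap<>();
}
```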

  5. Analogy to Unix FS: the logical view is similar [diagram: a directory tree rooted at /, containing etc, hadoop, and user, which contains mary and chu]

  6. Analogy to Unix FS: the physical model is also comparable. In Unix, a file's metadata points to a list of inodes; in HDFS, the file metadata points to a list of block locations (Block 1, Block 2, Block 3, …) spread across the data nodes.

  7. HDFS Create [diagram: the file creator, the name node, and the data nodes]

  8. HDFS Create: the creator process calls the create(…) function, which translates to an RPC call at the name node.

  9. HDFS Create: the name node creates three initial replicas for the first block: 1. the first replica is assigned to a random machine; 2. the second replica is assigned to another random machine in the same rack as the first; 3. the third replica is assigned to a random machine in another rack.
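The three placement steps can be sketched as a toy rack-aware policy (a simplification for illustration; Hadoop's real placement policy handles many more cases, such as racks with a single node):

```java
import java.util.*;
import java.util.function.Predicate;

// Toy sketch of the rack-aware replica placement described above.
class PlacementSketch {
    // nodeToRack maps each data node to the rack it sits in.
    static List<String> placeReplicas(Map<String, String> nodeToRack, Random rnd) {
        List<String> nodes = new ArrayList<>(nodeToRack.keySet());
        // 1. First replica on a random machine
        String first = nodes.get(rnd.nextInt(nodes.size()));
        String rack1 = nodeToRack.get(first);
        // 2. Second replica on another machine in the same rack
        String second = pick(nodes, rnd,
                n -> !n.equals(first) && nodeToRack.get(n).equals(rack1));
        // 3. Third replica on a machine in another rack
        String third = pick(nodes, rnd, n -> !nodeToRack.get(n).equals(rack1));
        return List.of(first, second, third);
    }

    static String pick(List<String> nodes, Random rnd, Predicate<String> ok) {
        List<String> candidates = new ArrayList<>();
        for (String n : nodes) if (ok.test(n)) candidates.add(n);
        return candidates.get(rnd.nextInt(candidates.size()));
    }
}
```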

  10. HDFS Create: the name node returns an OutputStream to the file creator [diagram: the three replicas (1, 2, 3) on the data nodes]

  11. HDFS Create: the creator writes data through OutputStream#write [diagram: data flowing to the data nodes holding replicas 1, 2, 3]

  12. HDFS Create: OutputStream#write, continued [diagram]

  13. HDFS Create: OutputStream#write, continued [diagram]

  14. HDFS Create: when a block is filled up, the creator contacts the name node to create the next block.

  15. Notes about writing to HDFS
     - Data transfers of replicas are pipelined
     - The data does not go through the name node
     - Random writes are not supported
     - Appending to a file is supported, but it creates a new block

  16. Self-writing: if the file creator is running on one of the data nodes, the first replica is always assigned to that node.

  17. Reading from HDFS
     - Reading is relatively easier
     - No replication is needed
     - Replication can be exploited
     - Random reads are allowed

  18. HDFS Read: the reader process calls the open(…) function, which translates to an RPC call at the name node.

  19. HDFS Read: the name node locates the first block of that file and returns the address of one of the nodes that store that block; the open call returns an InputStream for the file.

  20. HDFS Read: the reader reads data through InputStream#read(…) [diagram]

  21. HDFS Read: when an end-of-block is reached, the name node locates the next block.

  22. HDFS Read: the InputStream#seek(pos) operation locates a block and positions the stream accordingly.

  23. Self-reading: when a reader opens and seeks in a file: 1. if the block is locally stored on the reader, this replica is chosen for reading; 2. if not, a replica on another machine in the same rack is chosen; 3. otherwise, any other replica is chosen at random. When self-reading occurs, HDFS can make it much faster through a feature called short-circuit reads.
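The three-step replica choice above can be sketched as a toy model (names are illustrative, not Hadoop classes):

```java
import java.util.*;

// Toy sketch of the read-replica preference: local, then same-rack, then random.
class ReplicaChoice {
    // replicaToRack maps each data node holding a replica to its rack.
    static String choose(String reader, String readerRack,
                         Map<String, String> replicaToRack, Random rnd) {
        // 1. Prefer a replica stored locally on the reader itself
        if (replicaToRack.containsKey(reader)) return reader;
        // 2. Otherwise prefer a replica in the reader's rack
        for (Map.Entry<String, String> e : replicaToRack.entrySet())
            if (e.getValue().equals(readerRack)) return e.getKey();
        // 3. Otherwise pick any replica at random
        List<String> all = new ArrayList<>(replicaToRack.keySet());
        return all.get(rnd.nextInt(all.size()));
    }
}
```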

  24. Notes About Reading
     - The API is much richer than the simple open/seek/close API
     - You can retrieve block locations
     - You can choose a specific replica to read
     - The same API is generalized to other file systems, including the local FS and S3
     Review question: compare random-access reads in local file systems to HDFS.

  25. HDFS Special Features
     - Node decommission
     - Load balancer
     - Cheap concatenation

  26. Node Decommission [diagram: blocks from a decommissioned data node are re-replicated on the remaining data nodes]

  27. Load Balancing [diagram: blocks unevenly distributed across the data nodes]

  28. Load Balancing: start the load balancer [diagram: blocks are moved to even out the distribution across the data nodes]

  29. Cheap Concatenation: concatenate File 1 + File 2 + File 3 → File 4. Rather than creating new blocks, HDFS can just change the metadata in the name node to delete File 1, File 2, and File 3, and assign their blocks to a new File 4 in the right order.
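This metadata-only operation can be sketched with a toy block table (illustrative; in Hadoop the operation is exposed as FileSystem#concat):

```java
import java.util.*;

// Toy sketch: concatenation only edits name-node metadata, moving the
// source files' block lists, in order, under the destination file.
// No block data is copied or moved on the data nodes.
class CheapConcat {
    static void concat(Map<String, List<Integer>> fileBlocks,
                       String dest, String... sources) {
        List<Integer> merged = new ArrayList<>();
        for (String src : sources) merged.addAll(fileBlocks.remove(src));
        fileBlocks.put(dest, merged);
    }
}
```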

  30. HDFS API: the main classes are FileSystem (with subclasses LocalFileSystem, DistributedFileSystem, and S3FileSystem), Path, and Configuration.

  31. HDFS API: create the file system
     Configuration conf = new Configuration();
     Path path = new Path("…");
     FileSystem fs = path.getFileSystem(conf);
     // To get the local FS
     fs = FileSystem.getLocal(conf);
     // To get the default FS
     fs = FileSystem.get(conf);

  32. HDFS API
     // Create a new file
     FSDataOutputStream out = fs.create(path, …);
     // Delete a file
     fs.delete(path, recursive);
     fs.deleteOnExit(path);
     // Rename a file
     fs.rename(oldPath, newPath);

  33. HDFS API
     // Open a file
     FSDataInputStream in = fs.open(path, …);
     // Seek to a different location
     in.seek(pos);
     in.seekToNewSource(pos);

  34. HDFS API
     // Concatenate
     fs.concat(destination, src[]);
     // Get file metadata
     fs.getFileStatus(path);
     // Get block locations
     fs.getFileBlockLocations(path, from, to);
