Latency Trumps All
Chris Saari


  1. Latency Trumps All - Chris Saari - twitter.com/chrissaari - blog.chrissaari.com - saari@yahoo-inc.com

  2. Packet Latency - Time for a packet to get between points A and B: physical distance plus time queued in devices along the way (~60ms).

  3. ...

  4. Anytime...
     - ...the system is waiting for data
     - The system is end to end:
       - Human response time
       - Network card buffering
       - System bus/interconnect speed
       - Interrupt handling
       - Network stacks
       - Process scheduling delays
       - Application process waiting for data to get from memory to CPU, or from disk to memory to CPU
       - Routers, modems, last-mile speeds
       - Backbone speed and operating condition
       - Inter-cluster/colo performance

  5. Big Picture (diagram: User, CPU, Memory, Disk, Network)

  6. Tubes?

  7. Latency vs. Bandwidth - Bandwidth is bits per second; latency is time.

  8. Bandwidth of a Truck Full of Tape

  9. Latency Lags Bandwidth - David Patterson (chart)

  10. The Problem - Relative Data Access Latencies, Fastest to Slowest
     - CPU registers (1)
     - L1 cache (1-2)
     - L2 cache (6-10)
     - Main memory (25-100)
     --- don't cross this line, don't go off the motherboard! ---
     - Hard drive (1e7)
     - LAN (1e7-1e8)
     - WAN (1e9-2e9)

  11. Relative Data Access Latency (chart, fast to slow): CPU register, L1, L2, RAM

  12. Relative Data Access Latency (chart, fast to slow): CPU register, L1, L2, RAM, hard disk

  13. Relative Data Access Latency (chart, lower to higher): register, L1, L2, RAM, hard disk, LAN, floppy/CD-ROM, WAN

  14. CPU Register - One CPU register access = average human height (the scale for the physical analogy on the next slides).

  15. L1 Cache

  16. L2 Cache - x6 to x10

  17. RAM - x25 to x100

  18. Hard Drive - x10M, or 0.4x the equatorial circumference of the Earth

  19. WAN - x100M, or 0.42x the Earth-to-Moon distance

  20. To experience pain... - Mobile phone network latency is 2-10x that of wired; an iPhone 3G 500ms ping is x500M, or 2x the Earth-to-Moon distance (arithmetic check below).
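A quick arithmetic check on the height analogy, as a minimal Python sketch (the 1.7 m reference height is my assumption; the Earth and Moon figures are standard values):

```python
# Scale: 1 CPU-register access = average human height (~1.7 m, assumed).
EARTH_CIRCUMFERENCE_M = 40_075_000   # equatorial circumference, meters
EARTH_MOON_M = 384_400_000           # mean Earth-Moon distance, meters
HEIGHT_M = 1.7

for name, relative_latency in [
    ("Hard drive", 10_000_000),             # x10M
    ("WAN", 100_000_000),                   # x100M
    ("iPhone 3G 500ms ping", 500_000_000),  # x500M
]:
    distance = relative_latency * HEIGHT_M
    print(f"{name}: {distance / EARTH_CIRCUMFERENCE_M:.2f}x Earth circumference, "
          f"{distance / EARTH_MOON_M:.2f}x Earth-Moon distance")
# Hard drive -> ~0.42x Earth circumference; WAN -> ~0.44x Earth-Moon;
# 500ms ping -> ~2.2x Earth-Moon, matching the slides' rough figures.
```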

  21. 500ms isn't that long...

  22. Google SPDY - “It is designed specifically for minimizing latency through features such as multiplexed streams, request prioritization and HTTP header compression.”

  23. Strategy Pattern: Move Data Up - Relative Data Access Latencies
     - CPU registers (1)
     - L1 cache (1-2)
     - L2 cache (6-10)
     - Main memory (25-100)
     - Hard drive (1e7)
     - LAN (1e7-1e8)
     - WAN (1e9-2e9)

  24. Batching: Do it Once

  25. Batching: Maximize Data Locality

  26. Let's Dig In - Relative Data Access Latencies, Fastest to Slowest
     - CPU registers (1)
     - L1 cache (1-2)
     - L2 cache (6-10)
     - Main memory (25-100)
     - Hard drive (1e7)
     - LAN (1e7-1e8)
     - WAN (1e9-2e9)

  27-30. Network
     - If you can't Move Data Up, minimize accesses.
     - Souders Performance Rules:
       1) Make fewer HTTP requests - avoid going halfway to the moon whenever possible.
       2) Use a content delivery network - edge caching gets data physically closer to the user.
       3) Add an expires header - instead of going halfway to the moon (network), climb Godzilla (RAM) or go 40% of the way around the Earth (disk) instead; a header sketch follows this list.
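To make rule 3 concrete, here is a minimal sketch of emitting far-future caching headers with only the Python standard library; the one-year lifetime and the helper name cache_headers are illustrative choices, not from the deck:

```python
from datetime import datetime, timedelta, timezone
from email.utils import format_datetime

def cache_headers(days=365):
    """Far-future Expires plus Cache-Control, so repeat views hit RAM/disk instead of the WAN."""
    expires = datetime.now(timezone.utc) + timedelta(days=days)
    return {
        "Expires": format_datetime(expires, usegmt=True),  # RFC-formatted HTTP date
        "Cache-Control": f"public, max-age={days * 86400}",
    }

print(cache_headers())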

  31. Network: Packets and Latency - Less data = fewer packets = less packet loss = less latency

  32. Network - 1) Make fewer HTTP requests. 2) Use a content delivery network. 3) Add an expires header. 4) Gzip components (sketch below).
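Rule 4 in miniature, a hedged sketch using Python's standard gzip module; the sample payload is invented, but a large shrink on repetitive markup is typical:

```python
import gzip

html = b"<html><body>" + b"<div class='row'>hello latency</div>" * 500 + b"</body></html>"
compressed = gzip.compress(html)
print(f"{len(html)} -> {len(compressed)} bytes")  # fewer bytes -> fewer packets -> less latency
```

Fewer bytes on the wire means fewer packets, which feeds directly into slide 31's chain.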

  33. Disk: Falling off the Latency Cliff

  34. Jim Gray, Microsoft 2006 - Tape is dead. Disk is tape. Flash is disk. RAM locality is king.

  35. Strategy: Move Up: Disk to RAM - RAM gets you above the exponential latency line, from hard drive (1e7) to main memory (25-100); but cost and power consumption scale linearly = $$$.

  36. Strategy: Avoidance: Bloom Filters (sketch below)
     - Probabilistic answer to whether a member is in a set
     - Constant time via multiple hashes
     - Constant space: a bit string
     - Used in BigTable, Cassandra, Squid
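A minimal Bloom filter sketch under assumptions the deck doesn't specify (bit-array size, and five hash positions derived from one SHA-256 digest via double hashing):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 20, k_hashes=5):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)  # constant-space bit string

    def _positions(self, item: bytes):
        # Derive k positions from one digest (Kirsch-Mitzenmacher double hashing).
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, item: bytes) -> bool:
        # False means definitely absent; True means probably present.
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

bf = BloomFilter()
bf.add(b"row:12345")
print(bf.might_contain(b"row:12345"), bf.might_contain(b"row:99999"))  # True, (almost surely) False
```

A "no" answer is definitive, so the disk or network lookup can be skipped entirely, which is how BigTable, Cassandra, and Squid use it.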

  37. In-Memory Indexes
     - Haystack keeps file system indexes in RAM - cut disk accesses per image from 3 to 1
     - Search index compression
     - GFS master node: prefix compression of names (sketch below)
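One plausible reading of "prefix compression of names" is front coding: for each name in sorted order, store only the length of the prefix shared with its predecessor plus the new suffix. A sketch under that assumption (the paths are made up; this is not GFS's actual format):

```python
import os

def front_code(sorted_names):
    out, prev = [], ""
    for name in sorted_names:
        shared = len(os.path.commonprefix([prev, name]))
        out.append((shared, name[shared:]))  # (shared-prefix length, unique suffix)
        prev = name
    return out

names = ["/logs/2009/11/18.log", "/logs/2009/11/19.log", "/logs/2009/12/01.log"]
print(front_code(names))
# [(0, '/logs/2009/11/18.log'), (15, '9.log'), (12, '2/01.log')]
```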

  38. Managing Gigabytes - Witten, Moffat, and Bell

  39. SSDs
                      Disk                                SSD
     I/O ops/sec      ~70-200 (~180-200 at 15K RPM)       ~10K-100K
     Seek time        ~7-3.2 ms                           ~0.085-0.05 ms
     SSDs use less than 1/5th the power of spinning disk.

  40. Sequential vs. Random Disk Access - James Hamilton

  41. 1TB Sequential Read

  42. 1TB Random Read (calendar graphic: "Done!" lands on day 15, about two weeks; arithmetic below)
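The back-of-envelope behind that calendar, with assumed drive parameters (~100 MB/s sequential, ~200 IOPS with 4KB random reads; the deck doesn't state its exact numbers):

```python
TB = 10**12
seq_bytes_per_sec = 100 * 10**6    # ~100 MB/s sustained sequential (assumed)
random_iops, read_size = 200, 4096  # ~200 seeks/sec, 4KB per random read (assumed)

seq_hours = TB / seq_bytes_per_sec / 3600
rand_days = TB / (random_iops * read_size) / 86400
print(f"sequential: ~{seq_hours:.1f} hours, random: ~{rand_days:.0f} days")
# sequential: ~2.8 hours, random: ~14 days
```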

  43-44. Strategy: Batching and Streaming (sketch below)
     - Fewer reads/writes of large contiguous chunks of data - GFS 64MB chunks
     - Requires data locality - BigTable app-specified data layout and compression
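A minimal illustration of the batching idea: pay the seek once, then stream a large contiguous chunk. The 64MB constant echoes GFS; stream_chunks and the example path are hypothetical:

```python
CHUNK = 64 * 1024 * 1024  # 64MB, the GFS chunk size

def stream_chunks(path):
    # One large sequential read per chunk amortizes seek latency across 64MB.
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            yield chunk

# Example (hypothetical path):
# total = sum(len(c) for c in stream_chunks("/data/bigtable.sstable"))
```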

  45. The CPU

  46. “CPU Bound” (diagram: data in RAM, and CPU access to that data)

  47. The Memory Wall

  48. Latency Lags Bandwidth - David Patterson

  49. Multicore Makes It Worse! - More cores accelerate the rate of divergence: CPU performance doubled 3x over the past 5 years; memory performance doubled once.

  50. Evolving CPU Memory Access Designs - Intel Nehalem: integrated memory controller and new high-speed interconnect; 40 percent shorter latency and increased bandwidth, a 4-6x faster system.

  51. More CPU Evolution
     - Intel Nehalem-EX: 8 cores, 24MB of cache, 2 integrated memory controllers, and a ring interconnect (an on-die network designed to speed the movement of data among the caches used by each of the cores)
     - IBM Power7: 32MB Level 3 cache
     - AMD Magny-Cours: 12 cores, 12MB of Level 3 cache

  52. Cache Hit Ratio

  53-57. Cache Line Awareness
     - Linked list: each node as a separate allocation is bad
     - Hash table: reprobe on collision with a stride of 1
     - Stack allocation: the top of the stack is usually in cache; the top of the heap usually is not
     - Pipeline processing: do all stages of operations on a piece of data at once vs. each stage separately (sketch after this list)
     - Optimize for size: can execute faster than code optimized for speed
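A sketch of the pipeline-processing bullet. Python doesn't expose cache lines, so read this as a picture of the access pattern: the fused loop touches each item once while it is hot, instead of re-streaming the whole array per stage (the three stages are invented):

```python
data = list(range(1_000_000))

# Stage-at-a-time: three full passes; each item falls out of cache between stages.
scaled  = [x * 3 for x in data]
shifted = [x + 1 for x in scaled]
clipped = [min(x, 10**6) for x in shifted]

# Fused: one pass, all stages applied while the item is still in cache.
fused = [min(x * 3 + 1, 10**6) for x in data]

assert fused == clipped
```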

  58. Cycles to Burn - 1) Make fewer HTTP requests. 2) Use a content delivery network. 3) Add an expires header. 4) Gzip components - use excess compute for compression.

  59. Datacenter

  60. Datacenter Storage Hierarchy - Jeff Dean, Google

  61. Intra-Datacenter Round Trip - x500,000, or ~500 miles, roughly NYC to Columbus, OH
