Data Center Switch Architecture in the Age of Merchant Silicon


  1. Data Center Switch Architecture in the Age of Merchant Silicon
     Nathan Farrington, Erik Rubow, Amin Vahdat

  2. The Network is a Bottleneck
     • HTTP request amplification
       – Web search (e.g. Google)
       – Small object retrieval (e.g. Facebook)
       – Web services (e.g. Amazon.com)
     • MapReduce-style parallel computation
       – Inverted search index
       – Data analytics
     • Need high-performance interconnects

  3. The Network is Expensive
     [Figure: Rack 1 through Rack N, each with 40 1U servers and a 48xGbE TOR switch, uplinked via 10GbE / 8xGbE]

  4. What we really need: One Big Switch
     • Commodity
     • Plug-and-play
     • Potentially no oversubscription
     [Figure: one big switch connecting Rack 1 through Rack N]

  5. Why not just use a fat tree of commodity TOR switches?
     M. Al-Fares, A. Loukissas, A. Vahdat. A Scalable, Commodity Data Center Network Architecture. In SIGCOMM '08.
     [Figure: k=4, n=3 fat-tree topology]

  6. 10 Tons of Cable
     • 55,296 Cat-6 cables
     • 1,128 separate cable bundles
     [Photo: the "Yellow Wall" of cabling]

  7. Merchant Silicon gives us Commodity Switches
     • Broadcom BCM56820: 24 ports, cost NDA, power NDA, latency < 1 μs, area NDA, SRAM NDA, 65 nm process
     • Fulcrum FM4224: 24 ports, cost NDA, 20 W, 300 ns latency, 40 x 40 mm, 2 MB SRAM, 130 nm process
     • Fujitsu MB86C69RBC: 26 ports, $410, 22 W, 300 ns latency, 35 x 35 mm, 2.9 MB SRAM, 90 nm process

  8. Eliminate Redundancy
     • Networks of packet switches contain many redundant components – chassis, power conditioning circuits, cooling, CPUs, DRAM
     • Repackage these discrete switches to lower the cost and power consumption
     [Figure: an 8-port switch with PSU, CPU, ASIC, PHY, fans, and SFP+ cages]

  9. Our Architecture, in a Nutshell
     • Fat tree of merchant silicon switch ASICs
     • Hiding cabling complexity with PCB traces and optics
     • Partition into multiple pod switches + single core switch array
     • Custom EEP ASIC to further reduce cost and power
     • Scales to 65,536 ports when 64-port ASICs become available, late 2009

  10. 3 Different Designs
     • 24-ary 3-tree
     • 720 switch ASICs
     • 3,456 ports of 10GbE
     • No oversubscription
     [Figure: the three designs, labeled 1, 2, 3]
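As a sanity check on these counts (my own arithmetic, not from the slides): a 3-tier k-ary fat tree built from k-port switch ASICs supports k^3/4 end-facing ports using 5k^2/4 ASICs. The minimal Python sketch below reproduces both the 24-ary figures on this slide and the 65,536-port figure quoted for 64-port ASICs.

    def fat_tree_size(k):
        """Sizes of a 3-tier k-ary fat tree built from k-port switch ASICs."""
        ports = k ** 3 // 4            # usable end-facing ports (e.g. 10GbE)
        edge = agg = k * (k // 2)      # k pods, each with k/2 edge and k/2 aggregation ASICs
        core = (k // 2) ** 2           # core ASICs
        return ports, edge + agg + core

    print(fat_tree_size(24))  # (3456, 720)   -> matches the 24-ary 3-tree above
    print(fat_tree_size(64))  # (65536, 5120) -> the 65,536-port scale-up with 64-port ASICs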

  11. Network 1: No Engineering Required
     • 720 discrete packet switches, connected with optical fiber
     Cost of Parts: $4.88M
     Power: 52.7 kW
     Cabling Complexity: 3,456
     Footprint: 720 RU
     NRE: $0
     Cabling complexity (noun): the number of long cables in a data center network.

  12. Network 2: Custom Boards and Chassis
     • 24 "pod" switches, one core switch array, 96 cables
     Cost of Parts: $3.07M
     Power: 41.0 kW
     Cabling Complexity: 96
     Footprint: 192 RU
     NRE: $3M est.
     This design is shown in more detail later.

  13. Switch at 10G, but Transmit at 40G
                    SFP       SFP+      QSFP
     Rate           1 Gb/s    10 Gb/s   40 Gb/s
     Cost/Gb/s*     $35       $25       $15
     Power/Gb/s     500 mW    150 mW    60 mW
     * 2008-2009 prices
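A quick back-of-the-envelope comparison, derived here from the per-Gb/s figures above (the totals are not stated on the slide), shows why the long links should run at 40G: carrying 40 Gb/s over four SFP+ modules costs about $1,000 and 6 W, versus roughly $600 and 2.4 W over a single QSFP.

    # Per-Gb/s figures from the slide above (2008-2009 prices)
    transceivers = {
        "SFP":  {"rate_gbps": 1,  "usd_per_gbps": 35, "mw_per_gbps": 500},
        "SFP+": {"rate_gbps": 10, "usd_per_gbps": 25, "mw_per_gbps": 150},
        "QSFP": {"rate_gbps": 40, "usd_per_gbps": 15, "mw_per_gbps": 60},
    }

    def per_40gbps(name):
        """Transceiver count, cost, and power to carry 40 Gb/s with one module type."""
        t = transceivers[name]
        count = 40 // t["rate_gbps"]
        cost = 40 * t["usd_per_gbps"]
        watts = 40 * t["mw_per_gbps"] / 1000
        return count, cost, watts

    print(per_40gbps("SFP+"))  # (4, 1000, 6.0)  -> four SFP+: $1,000 and 6.0 W
    print(per_40gbps("QSFP"))  # (1, 600, 2.4)   -> one QSFP:  $600 and 2.4 W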

  14. Network 3: Network 2 + Custom ASIC
     • Uses 40GbE between pod switches and core switch array; everything else is the same as Network 2.
     Cost of Parts: $2.33M
     Power: 36.4 kW
     Cabling Complexity: 96
     Footprint: 114 RU
     NRE: $8M est.
     This simple EEP ASIC provides tremendous cost and power savings.

  15. Cost of Parts
     [Bar chart, in millions of dollars: Network 1: $4.88M; Network 2: $3.07M; Network 3: $2.33M]

  16. Power Consumption
     [Bar chart, in kW: Network 1: 52.7 kW; Network 2: 41.0 kW; Network 3: 36.4 kW]

  17. Cabling Complexity
     [Bar chart, long cables: Network 1: 3,456; Network 2: 96; Network 3: 96]

  18. Footprint
     [Bar chart, in rack units: Network 1: 720 RU; Network 2: 192 RU; Network 3: 114 RU]
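Pulling the four charts together, the short script below (my own tabulation and per-port arithmetic, not from the presentation) summarizes the three designs and derives cost and power per 10GbE port.

    # Figures copied from slides 11-18; the per-port values are derived here.
    networks = {
        "Network 1": {"cost_usd": 4.88e6, "power_kw": 52.7, "cables": 3456, "ru": 720},
        "Network 2": {"cost_usd": 3.07e6, "power_kw": 41.0, "cables": 96,   "ru": 192},
        "Network 3": {"cost_usd": 2.33e6, "power_kw": 36.4, "cables": 96,   "ru": 114},
    }
    PORTS = 3456  # all three designs provide 3,456 ports of 10GbE

    for name, n in networks.items():
        cost_per_port = n["cost_usd"] / PORTS
        watts_per_port = n["power_kw"] * 1000 / PORTS
        print(f"{name}: ${cost_per_port:,.0f}/port, {watts_per_port:.1f} W/port")
    # Network 1: $1,412/port, 15.2 W/port
    # Network 2: $888/port,  11.9 W/port
    # Network 3: $674/port,  10.5 W/port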

  19. Partially Deployed Switch

  20. Fully Deployed Switch

  21. Pod Switch

  22. Logical Topology

  23. Pod Switch Line Card

  24. Pod Switch Uplink Card

  25. Core Switch Array Card

  26. Why an Ethernet Extension Protocol?
     • Optical transceivers are 80% of the cost
     • EEP allows the use of fewer and faster optical transceivers
     [Figure: EEP devices multiplexing four 10GbE links onto a single 40GbE link at each end]

  27. How does EEP work?
     • Ethernet frames are split up into EEP frames
     • Most EEP frames are 65 bytes
       – Header is 1 byte; payload is 64 bytes
     • Header encodes ingress/egress port
     [Figure: Ethernet frames split into EEP frames between two EEP endpoints]

  28. How does EEP work?
     • Round-robin arbiter
     • EEP frames are transmitted as one large Ethernet frame
     • 40GbE overclocked by 1.6%
     [Figure: EEP frames multiplexed onto the 40GbE link between two EEP endpoints]
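The sketch below illustrates this framing in Python under stated assumptions; it is not the EEP implementation, and the 1-byte header here is a placeholder that only carries a port ID. It cuts each Ethernet frame into EEP frames of at most 65 bytes (1-byte header plus up to 64 bytes of payload) and round-robins the per-port queues onto the shared 40G link; the 65/64 ratio is where the roughly 1.6% overclock comes from.

    from collections import deque

    CHUNK = 64  # payload bytes per EEP frame; the header adds 1 byte (65 total)

    def eep_split(port, ethernet_frame):
        """Cut one Ethernet frame into EEP frames tagged with its port number.
        The 1-byte header here is a placeholder carrying only the port ID."""
        for i in range(0, len(ethernet_frame), CHUNK):
            payload = ethernet_frame[i:i + CHUNK]
            yield bytes([port & 0x0F]) + payload   # <= 65 bytes on the wire

    def round_robin(queues):
        """Round-robin arbiter over per-port queues of EEP frames."""
        while any(queues.values()):
            for port in sorted(queues):
                if queues[port]:
                    yield queues[port].popleft()

    # Four 10GbE ports sharing one 40G link; each sends a 1500-byte frame.
    queues = {p: deque(eep_split(p, b"\xAA" * 1500)) for p in range(4)}
    wire = list(round_robin(queues))
    print(len(wire), "EEP frames, framing overhead ~", round(100 * (65 / 64 - 1), 1), "%")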

  29. Ethernet Frames
     [Figure: Ethernet frames arriving at the EEP endpoints]

  30. EEP Frames
     [Figure: numbered EEP frames (1, 2, 3) from different ports interleaved between the EEP endpoints]

  31. [Figure: continued interleaving of numbered EEP frames between the EEP endpoints]

  32. EEP Frame Format
     SOF: Start of Ethernet Frame
     EOF: End of Ethernet Frame
     LEN: Set if the EEP frame contains less than 64 B of payload
     Virtual Link ID: corresponds to port number (0-15)
     Payload Length: 0-63 B
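The slide lists the header fields but not their bit positions, so the layout in the sketch below is an assumption made purely for illustration: SOF, EOF, and LEN flags plus a 4-bit Virtual Link ID in the first byte, with a separate payload-length byte appended only when LEN is set (i.e. for a short final frame).

    # Assumed layout (illustration only): bit 7 = SOF, bit 6 = EOF, bit 5 = LEN,
    # bits 3-0 = Virtual Link ID; a payload-length byte follows only when LEN is set.

    def pack_eep_header(sof, eof, vlid, payload_len):
        """Build the assumed EEP header for a payload of up to 64 bytes."""
        assert 0 <= vlid <= 15 and 0 <= payload_len <= 64
        short = payload_len < 64                           # LEN: short (final) frame
        first = (sof << 7) | (eof << 6) | (short << 5) | vlid
        return bytes([first, payload_len]) if short else bytes([first])

    def unpack_eep_header(data):
        """Parse the assumed header; returns (sof, eof, vlid, payload_len, header_bytes)."""
        first = data[0]
        sof, eof, short = bool(first & 0x80), bool(first & 0x40), bool(first & 0x20)
        vlid = first & 0x0F
        return (sof, eof, vlid, data[1], 2) if short else (sof, eof, vlid, 64, 1)

    print(unpack_eep_header(pack_eep_header(sof=True, eof=False, vlid=3, payload_len=64)))
    # (True, False, 3, 64, 1)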

  33. Why not use VLANs?
     • Because VLAN tagging adds latency and requires more SRAM
     • FPGA implementations of both approaches
       – VLAN tagging
       – EEP

  34. Latency Measurements

  35. Related Work
     • M. Al-Fares, A. Loukissas, A. Vahdat. A Scalable, Commodity Data Center Network Architecture. In SIGCOMM '08.
       – Fat trees of commodity switches, Layer 3 routing, flow scheduling
     • R. N. Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat. PortLand: A Scalable Fault-Tolerant Layer 2 Data Center Network Fabric. In SIGCOMM '09.
       – Layer 2 routing, plug-and-play configuration, fault tolerance, switch software modifications
     • A. Greenberg, J. R. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. A. Maltz, P. Patel, and S. Sengupta. VL2: A Scalable and Flexible Data Center Network. In SIGCOMM '09.
       – Layer 2 routing, end-host modifications

  36. Conclusion
     • General architecture
       – Fat tree of merchant silicon switch ASICs
       – Hiding cabling complexity
       – Pods + Core
       – Custom EEP ASIC
       – Scales to 65,536 ports with 64-port ASICs
     • Design of a 3,456-port 10GbE switch
     • Design of the EEP ASIC
