Facebook and the Open Compute Project



  1. Facebook and the Open Compute Project Charlie Manese Infrastructure NSF SDC - June 22, 2015

  2. January 2015

  3. Contribution timeline, 2011–2014 (graphic): Freedom Data Center, Triplet Rack, Power Supply, Battery Cabinet, Spitfire Server (AMD), Watermark (AMD), Windmill Servers (Intel), Open Rack v1 and v2, Mezzanine Card v1 and v2, Knox, Winterfell, Group Hug, Micro Server (Panther), Cold Storage, Honey Badger

  4. Open data center stack (graphic): Cold Storage, Cooling, Leopard, Knox, Open Rack, Wedge, Battery, Power, 6-Pack

  5. Stack (Software, Network, Servers & Storage, Data Center): HipHop Virtual Machine, 5x – 6x faster than Zend

  6. Stack diagram: Software, Network, Servers & Storage, Data Center

  7. Original OCP designs: 24% cost savings, 38% energy-efficiency gain (stack: Software, Network, Servers & Storage, Data Center)

  8. Stack diagram: Software, Network, Servers & Storage, Data Center

  9. Efficiency gains with OCP: $2 billion

  10. Efficiency gains with OCP: annual energy savings equivalent to 80,000 homes and annual carbon savings equivalent to 95,000 cars

  11. Design principles ▪ Efficiency ▪ Scale ▪ Simplicity ▪ Vanity Free ▪ Easy to Operate

  12. DATA CENTER

  13. Facebook greenfield datacenter ▪ Goal: design and build the most efficient datacenter ecosystem possible ▪ Control: application, server configuration, datacenter design

  14. Prineville, OR; Forest City, NC; Luleå, Sweden

  15. Electrical overview ▪ Eliminated 480V to 208V transformation ▪ Used 480/277VAC distribution to IT equipment ▪ Removed centralized UPS ▪ Implemented 48VDC UPS system ▪ Result: a highly efficient electrical system with a small failure domain

  16. Typical power vs. Prineville power (diagram)
 Typical: utility transformer (2% loss) → standby generator → 480/277 VAC → AC/DC UPS with DC/AC conversion at 480VAC (6% – 12% loss) → ASTS/PDU at 208/120VAC (3% loss), 99.9999% availability → server PS (10% loss, assuming a 90%-plus PS). Total loss up to server: 21% to 27%.
 Prineville: utility transformer (2% loss) → standby generator → 480/277VAC delivered directly to the FB server PS, backed by a stand-by 48VDC DC UPS (5.5% loss), 99.999% availability. Total loss up to server: 7.5%.
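A quick sanity check of those totals, in plain Python (not from the deck): the slide's end-to-end figures are the simple sums of the per-stage losses, while compounding the same stages multiplicatively (each stage only sees the power that survived the one before it) gives slightly smaller but similar numbers.

```python
# Hedged sketch: compound the per-stage losses quoted on the slide.
# Each stage passes on (1 - loss) of what reaches it, so the end-to-end
# loss is 1 minus the product of the stage efficiencies.

def total_loss(stage_losses):
    """End-to-end fractional loss for a chain of per-stage fractional losses."""
    efficiency = 1.0
    for loss in stage_losses:
        efficiency *= 1.0 - loss
    return 1.0 - efficiency

# Typical path: transformer, AC/DC/AC UPS (best/worst case), ASTS/PDU, server PS.
typical_best = total_loss([0.02, 0.06, 0.03, 0.10])    # ~19.6%
typical_worst = total_loss([0.02, 0.12, 0.03, 0.10])   # ~24.7%

# Prineville path: transformer, then 480/277VAC straight into the server PS.
prineville = total_loss([0.02, 0.055])                 # ~7.4%

print(f"typical {typical_best:.1%}-{typical_worst:.1%}, prineville {prineville:.1%}")
```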

  17. Reactor power panel ▪ Custom-fabricated RPP ▪ Delivers 165kW, 480/277V, 3-phase to the CAB level ▪ Contains Cam-Lock connector for maintenance wrap-around ▪ Line reactor ▪ Reduces short-circuit current to < 10kA ▪ Corrects leading power factor toward unity (3% improvement) ▪ Reduces THD for improved electrical system performance (iTHD 2% improvement) ▪ Power consumption = 360 W
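For scale, the reactor's own 360 W draw is a rounding error next to the power it conditions; a one-line check using only the figures on this slide:

```python
# Overhead of the line reactor relative to the power the RPP delivers,
# using only the 165 kW and 360 W figures quoted above.
delivered_w = 165_000   # 165 kW delivered to the cabinet level
reactor_w = 360         # reactor power consumption

print(f"reactor overhead: {reactor_w / delivered_w:.2%}")   # ~0.22%
```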

  18. Battery cabinet ▪ Custom DC UPS ▪ 56kW or 85kW ▪ 480VAC, 3-phase input ▪ 45-second back-up ▪ 20 sealed VRLA batteries ▪ Battery validation system ▪ Six 48VDC outputs ▪ Two 50A 48VDC aux outputs
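To get a feel for what a 45-second ride-through implies, here is a back-of-envelope sizing sketch using the 85 kW variant's numbers from this slide; it ignores conversion losses and the voltage droop near end of discharge, so real figures would be somewhat higher.

```python
# Back-of-envelope sizing for the 85 kW battery cabinet variant.
# Ignores converter losses and end-of-discharge voltage droop (assumption).
power_w = 85_000    # cabinet rating (85 kW variant)
backup_s = 45       # advertised ride-through
bus_v = 48          # nominal DC bus voltage
outputs = 6         # six 48VDC outputs

energy_kwh = power_w * backup_s / 3600 / 1000   # ~1.06 kWh delivered
bus_current_a = power_w / bus_v                 # ~1,770 A aggregate
per_output_a = bus_current_a / outputs          # ~295 A each, if spread evenly (assumption)

print(f"{energy_kwh:.2f} kWh, {bus_current_a:.0f} A total, {per_output_a:.0f} A per output")
```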

  19. Mechanical overview ▪ Removed: centralized chiller plant, HVAC ductwork ▪ System basis of design: ASHRAE weather data (N = 50 years), ASHRAE TC9.9 2008 recommended envelopes ▪ Built-up penthouse air handling system ▪ Server waste heat is used for office space heating

  20. Typical datacenter cooling (diagram): return-air ductwork, cooling tower (CT), AHU, chiller, and supply duct feeding the data center. Prineville datacenter cooling (diagram): ductless, with 100% outside air through an intake plenum, filter, evaporative cooling system, and fan wall, ductless supply into the data center, and return/relief air handled without ductwork.

  21. PRN datacenter cooling (diagram): 100% outside air enters through the intake corridor, mixes with return air, passes through the filter and evaporative cooling system, and is driven by the fan wall as supply air into the cold aisles; server exhaust collects in common hot aisles and the return air plenum, where it is either recirculated as return air or exhausted as relief air.

  22. Cold aisle pressurization – ductless supply

  23. Basis of design comparison
 PRN1A1B: 80ºF/27ºC inlet, 65% humidity, 20ºF/11ºC ΔT
 PRN1C1D: 85ºF/30ºC inlet, 80% humidity, 22ºF/11ºC ΔT
 FRC1A1B: 85ºF/30ºC inlet, 90% humidity, 22ºF/11ºC ΔT
 LLA1A1B: 85ºF/30ºC inlet, 80% humidity, 22ºF/11ºC ΔT
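The ΔT row matters because it sets how much air the fans must move; the common sensible-heat rule of thumb (Q[BTU/hr] ≈ 1.08 × CFM × ΔT°F at sea-level air density) makes the point. The 10 kW rack load below is an illustrative assumption, not a figure from the deck.

```python
# Airflow needed to remove a given IT load at a given air-side delta-T,
# using the common sensible-heat rule of thumb Q[BTU/hr] = 1.08 * CFM * dT[F].
# The 10 kW rack load is an illustrative assumption, not a slide figure.

def cfm_required(load_w, delta_t_f):
    """CFM of air needed to carry away load_w watts at a delta_t_f (F) air rise."""
    btu_per_hr = load_w * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

load_w = 10_000
print(f"20F dT: {cfm_required(load_w, 20):.0f} CFM")   # ~1,580 CFM (PRN1A1B basis)
print(f"22F dT: {cfm_required(load_w, 22):.0f} CFM")   # ~1,436 CFM (later sites)
```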

  24. RACK, SERVERS, AND STORAGE

  25. Open Compute Rack: Open Rack • Well-defined “Mechanical API” between the server and the rack • Accepts any size equipment from 1U to 10U • Wide 21” equipment bay for maximum space efficiency • Shared 12V DC power system
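One consequence of the shared 12V DC system is the size of the busbar current it implies; a rough illustration, assuming a 12 kW rack budget and a 12.5 V nominal bus (neither number is on this slide, and Open Rack actually splits the load across multiple power zones):

```python
# Rough busbar current implied by a shared low-voltage DC bus: I = P / V.
# The 12 kW rack budget and 12.5 V nominal bus are illustrative assumptions;
# in practice Open Rack divides the rack into several power zones.
rack_power_w = 12_000
bus_voltage_v = 12.5

total_current_a = rack_power_w / bus_voltage_v   # ~960 A across the rack
print(f"{total_current_a:.0f} A of 12V bus current for a {rack_power_w / 1000:.0f} kW rack")
```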

  26. Open Compute Server v2 • First step with shared components by reusing PSU and fans between two servers • Increased rack density without sacrificing efficiency or cost • All new Facebook deployments in 2012 were “v2” servers

  27. Open Compute Server v3 • Reuses the “v2” half-width motherboards • Self-contained sled for Open Rack • 3-across 2U form factor enables 80mm fans with 45 servers per rack

  28. Open Vault • Storage JBOD for Open Rack • Fills the volume of the rack without sacrificing hot-swap

  29. NETWORK

  30. Traffic growth

  31. Fabric

  32. Wedge

  33. FBOSS

  34. 6-Pack

  35. SERVICEABILITY

  36. Complex designs. Typical large datacenter: 1,000 servers per technician

  37. Simple designs. Typical large datacenter: 1,000 servers per technician; Facebook datacenter: 25,000 servers per technician

  38. Efficiency through serviceability (repair times in minutes, measured standing at the machine)
 OEM repairs (pre-repair / part swap / additional steps / post-repair / total):
 Hard drive (non-RAID): 2 / 3 / 0 / 2 / 7
 DIMM (offline): 2 / 3 / 0 / 2 / 7
 Motherboard: 2 / 20 / 20 / 2 / 44
 PSU (hot swap): 2 / 5 / 0 / 2 / 9
 OCP#1 repairs (pre-repair / part swap / additional steps / post-repair / total):
 Hard drive (non-RAID): 0 / 0.98 / 0 / 0 / 0.98
 DIMM (offline): 0 / 0.82 / 0 / 0 / 0.82
 Motherboard: 2.5 / 10.41 / 2.5 / 0 / 15.41
 PSU (hot swap): 0 / 0.65 / 0 / 0 / 0.65
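To turn the per-repair minutes above into something comparable, the sketch below applies them to an assumed mix of 1,000 repairs; the mix itself is an assumption, not something the slide specifies.

```python
# Total technician time for an assumed mix of repairs, using the per-repair
# minutes from the table above. The repair mix is an illustrative assumption.
oem_minutes = {"hdd": 7, "dimm": 7, "motherboard": 44, "psu": 9}
ocp_minutes = {"hdd": 0.98, "dimm": 0.82, "motherboard": 15.41, "psu": 0.65}

repair_mix = {"hdd": 600, "dimm": 250, "motherboard": 50, "psu": 100}  # assumed

oem_hours = sum(oem_minutes[k] * n for k, n in repair_mix.items()) / 60   # ~151 h
ocp_hours = sum(ocp_minutes[k] * n for k, n in repair_mix.items()) / 60   # ~27 h

print(f"OEM: {oem_hours:.0f} technician-hours, OCP#1: {ocp_hours:.0f} technician-hours")
```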

  39. First-time-fix repair rates (chart): monthly rates, Jul 2012 through Dec 2012, on an 85% to 100% scale, plotted against a target line

  40. Let’s engage

  41. KEYNOTE
