Facebook and the Open Compute Project
Charlie Manese, Infrastructure
NSF SDC - June 22, 2015

Open data center stack

[Timeline, 2011-2014, of Facebook's Open Compute contributions: Freedom Servers, Spitfire Server (AMD), Power Supply, Battery Cabinet, Data Center, Triplet Rack, Watermark (AMD), Mezzanine Card v1, Windmill (Intel), Open Rack v1, Winterfell, Knox, Group Hug, Micro Server (Panther), Honey Badger, Cold Storage, Open Rack v2, Mezzanine Card v2]
Data Center: Battery, Power, Cooling
Servers & Storage: Open Rack, Leopard, Knox, Cold Storage
Network: Wedge, 6-Pack
Software
HipHop Virtual Machine
5x–6x faster than Zend
Original OCP designs
Energy efficiency: 38% better
Cost: 24% lower
Efficiency gains with OCP

$2 billion saved
Annual energy savings: equivalent to 80,000 homes
Annual carbon savings: equivalent to 95,000 cars
Design principles
▪ Efficiency
▪ Scale
▪ Simplicity
▪ Vanity Free
▪ Easy to Operate
DATA CENTER
Facebook greenfield datacenter
Goal
▪ Design and build the most efficient datacenter eco-system possible

Control

▪ Application
▪ Server configuration
▪ Datacenter design

Prineville, OR · Forest City, NC · Luleå, Sweden
Electrical overview
▪ Eliminated 480V-to-208V transformation
▪ Used 480/277VAC distribution to IT equipment
▪ Removed centralized UPS
▪ Implemented 48VDC UPS system
▪ Result: a highly efficient electrical system with a small failure domain
Typical power

Utility transformer (480/277 VAC, 99.999% availability): 2% loss
Centralized UPS (AC/DC, DC/AC): 6% to 12% loss
ASTS/PDU (208/120 VAC): 3% loss
Server PS: 10% loss (assuming a 90%-plus PS)
Plus standby generator
Total loss up to server: 21% to 27%

Prineville power

Utility transformer (480/277 VAC, 99.9999% availability): 2% loss
480/277 VAC distributed directly to the FB server PS: 5.5% loss
48VDC DC UPS (stand-by, out of the series power path); standby generator
Total loss up to server: 7.5%
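A minimal sketch of the arithmetic behind the two totals above, summing per-stage losses the way the slide does (stage names and the additive model come from the slide; a multiplicative chain of efficiencies gives nearly the same numbers at these magnitudes):

```python
# Per-stage losses (fractions) from the slide's two power-delivery chains.
typical = {
    "utility transformer": 0.02,
    "ups (ac/dc, dc/ac)": (0.06, 0.12),   # best / worst case
    "asts/pdu": 0.03,
    "server ps": 0.10,
}
prineville = {
    "utility transformer": 0.02,
    "fb server ps": 0.055,                # DC UPS is stand-by, no series loss
}

def total_loss(chain, worst=False):
    """Sum stage losses, picking best or worst case for ranged stages."""
    pick = lambda v: (v[1] if worst else v[0]) if isinstance(v, tuple) else v
    return sum(pick(v) for v in chain.values())

print(f"typical:    {total_loss(typical):.1%} to {total_loss(typical, worst=True):.1%}")
print(f"prineville: {total_loss(prineville):.1%}")
# typical:    21.0% to 27.0%
# prineville: 7.5%
```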
Reactor power panel
▪ Custom fabricated RPP
▪ Delivers 165kW, 480/277V, 3-phase to CAB level
▪ Contains Cam-Lock connector for maintenance wrap-around
▪ Line reactor
▪ Reduces short circuit current < 10kA
▪ Corrects leading power factor towards unity (3% improvement)
▪ Reduces THD for improved electrical system performance (iTHD 2% improvement)
▪ Power consumption = 360 Watt

Battery cabinet
▪ Custom DC UPS
▪ 56kW or 85kW
▪ 480VAC, 3-phase input
▪ 45 second back-up
▪ 20 sealed VRLA batteries
▪ Battery validation system
▪ Six 48VDC outputs
▪ Two 50A 48VDC aux outputs
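As a rough sanity check on these ratings, pure arithmetic from the slide's numbers (conversion and round-trip losses are ignored):

```python
# Back-of-envelope check on the battery cabinet ratings from the slide.
power_w = 85_000          # larger cabinet rating, watts
backup_s = 45             # specified back-up time, seconds
bus_v = 48                # DC bus voltage

energy_kwh = power_w * backup_s / 3_600_000   # Ws -> kWh
bus_current_a = power_w / bus_v               # total DC bus current
per_output_a = bus_current_a / 6              # spread across six 48VDC outputs

print(f"stored energy delivered: ~{energy_kwh:.2f} kWh")   # ~1.06 kWh
print(f"total bus current:       ~{bus_current_a:.0f} A")  # ~1771 A
print(f"per 48VDC output:        ~{per_output_a:.0f} A")   # ~295 A
```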
Mechanical overview

▪ Removed
▪ Centralized chiller plant
▪ HVAC ductwork
▪ System basis of design
▪ ASHRAE weather data: N = 50 years
▪ TC9.9 2008: recommended envelopes
▪ Built-up penthouse air handling system
▪ Server waste heat is used for office space heating
Typical datacenter cooling
[Diagram: chiller, AHU, supply duct, and return ductwork serving the data center]

Prineville datacenter cooling

[Diagram: 100% outside air intake, return air filter wall, evap system, fan wall, ductless supply into the data center, return air plenum, and relief air exhaust]
PRN datacenter cooling
Cold aisle pressurization – ductless supply
PRN1 A/B: 80°F/27°C inlet · 65% humidity · 20°F/11°C ΔT
PRN1 C/D: 85°F/30°C inlet · 80% humidity · 22°F/11°C ΔT
FRC1 A/B: 85°F/30°C inlet · 90% humidity · 22°F/11°C ΔT
LLA1 A/B: 85°F/30°C inlet · 80% humidity · 22°F/11°C ΔT
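Not on the slide, but the standard sensible-heat relation Q = P / (ρ · c_p · ΔT) shows why these relatively wide ΔT setpoints matter: the hotter the exhaust is relative to the inlet, the less air the fan wall must move per kW of IT load. A quick sketch using textbook air properties (no slide data beyond the ΔT values):

```python
# Airflow needed per kW of IT load for a given server delta-T:
# volumetric flow Q = P / (rho * cp * dT), then converted to CFM.
RHO = 1.2       # air density, kg/m^3 (approx., near sea level)
CP = 1005       # specific heat of air, J/(kg*K)
M3S_TO_CFM = 2118.88

for dt_c in (11, 15, 20):
    q_m3s = 1000 / (RHO * CP * dt_c)          # m^3/s per kW of IT load
    print(f"dT = {dt_c:2d}C -> {q_m3s * M3S_TO_CFM:4.0f} CFM per kW")
# dT = 11C ->  160 CFM per kW
# dT = 15C ->  117 CFM per kW
# dT = 20C ->   88 CFM per kW
```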
Basis of design comparison
RACK, SERVERS, AND STORAGE
Open Compute Rack: Open Rack
- Well-defined “Mechanical API” between the server and the rack
- Accepts any size equipment 1U – 10U
- Wide 21” equipment bay for maximum space efficiency
- Shared 12V DC power system
Open Compute Server v2
- First step with shared components by reusing PSU and fans between two servers
- Increased rack density without sacrificing efficiency or cost
- All new Facebook deployments in 2012 were “v2” servers
Open Compute Server v3
- Reuses the “v2” half-width motherboards
- Self-contained sled for Open Rack
- 3-across 2U form factor enables 80mm fans with 45 servers per rack (see the arithmetic below)
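The density figure follows from simple packing arithmetic; the 2U sled height and 3-across layout are on the slide, while the implied 30U of IT space per rack is an inference, not stated here:

\[
\frac{45\ \text{servers}}{3\ \text{servers/sled}} = 15\ \text{sleds},
\qquad
15\ \text{sleds} \times 2\,\text{U} = 30\,\text{U of IT space}
\]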
Open Vault
- Storage JBOD for Open Rack
- Fills the volume of the rack without sacrificing hot-swap
NETWORK
Traffic growth
Fabric
Wedge
FBOSS
6-Pack
SERVICEABILITY
Complex vs. simple designs

Typical large datacenter: 1,000 servers per technician
Facebook datacenter: 25,000 servers per technician

Efficiency through serviceability
Standing-at-machine repair times (minutes):

OEM repairs             Pre-repair  Part swap  Additional steps  Post-repair  Total
Hard drive (non-RAID)        2          3             -               2          7
DIMM (offline)               2          3             -               2          7
Motherboard                  2         20            20               2         44
PSU (hot swap)               2          5             -               2          9

OCP#1 repairs           Pre-repair  Part swap  Additional steps  Post-repair  Total
Hard drive (non-RAID)        -       0.98            -               -       0.98
DIMM (offline)               -       0.82            -               -       0.82
Motherboard                2.5      10.41            -             2.5      15.41
PSU (hot swap)               -       0.65            -               -       0.65

First-time-fix repair rates
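The table implies large per-repair speedups; a small script to make the ratios explicit (all numbers copied from the table above):

```python
# Total repair time in minutes: OEM design vs. OCP v1, from the table above.
repairs = {
    "hard drive (non-raid)": (7, 0.98),
    "dimm (offline)":        (7, 0.82),
    "motherboard":           (44, 15.41),
    "psu (hot swap)":        (9, 0.65),
}

for part, (oem, ocp) in repairs.items():
    print(f"{part:22s} {oem:5.1f} -> {ocp:5.2f} min  ({oem / ocp:4.1f}x faster)")
# hard drive (non-raid)    7.0 ->  0.98 min  ( 7.1x faster)
# dimm (offline)           7.0 ->  0.82 min  ( 8.5x faster)
# motherboard             44.0 -> 15.41 min  ( 2.9x faster)
# psu (hot swap)           9.0 ->  0.65 min  (13.8x faster)
```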
[Chart: monthly first-time-fix repair rates, Jul 12 - Dec 12, plotted on an 85%-100% scale against target]

Let's engage