SLIDE 1

Symbiosis in Scale Out Networking and Data Management

Amin Vahdat, Google / UC San Diego, vahdat@google.com

SLIDE 2

Overview

§ Large-scale data processing needs scale out networking

  • Unlocking the potential of modern server hardware for at-scale problems requires orders-of-magnitude improvement in network performance

§ Scale out networking requires large-scale data management

  • Experience with Google's SDN WAN suggests that logically centralized state management is critical for cost-effective deployment and management
  • Still in the stone ages in dynamically managing state and getting updates to the right places in the network

SLIDE 3

Overview

§ Large-scale data processing needs scale out networking

  • Unlocking the potential of modern server hardware for at-scale problems requires orders-of-magnitude improvement in network performance

§ Scale out networking requires large-scale data management

  • Experience with Google's SDN WAN suggests that logically centralized state management is critical for cost-effective deployment and management
  • Still in the stone ages in dynamically managing state and getting updates to the right places in the network

WARNING: Networking is about to reinvent many aspects of centrally managed, replicated state with a variety of consistency requirements in a distributed environment

SLIDE 4

Vignette 1: Large-Scale Data Processing Needs Scale Out Networking

SLIDE 5

Motivation

Blueprints for 200k sq. ft. Data Center in OR

SLIDE 6

San Antonio Data Center

SLIDE 7

Chicago Data Center

SLIDE 8

Dublin Data Center

SLIDE 9

All Filled with Commodity Computation and Storage

SLIDE 10

Network Design Goals

§ Scalable interconnection bandwidth

  • Full bisection bandwidth between all pairs of hosts
  • Aggregate bandwidth = # hosts × host NIC capacity

§ Economies of scale

  • Price/port constant with number of hosts
  • Must leverage commodity merchant silicon

§ Anything anywhere

  • Don’t let the network limit benefits of virtualization

§ Management

  • Modular design
  • Avoid actively managing 100s-1,000s of network elements
SLIDE 11

Scale Out Networking

§ Advances toward scale out computing and storage

  • Aggregate computing and storage grows linearly with the number of commodity processors and disks
  • Small matter of software to enable functionality
  • Alternative is scale up, where weaker processors and smaller disks are replaced with more powerful parts

§ Today, no technology for scale out networking

  • Modules to expand number of ports or aggregate bandwidth
  • No management of individual switches, VLANs, subnets
SLIDE 12

The Future Internet

§ Applications and data will be partitioned and replicated across multiple data centers

  • 99% of compute, storage, and communication will be inside the data center
  • Data center bandwidth exceeds that of the access network

§ Data sizes will continue to explode

  • From click streams, to scientific data, to user audio, photo, and video collections

§ Individual user requests and queries will run in parallel

  • On thousands of machines

§ Back end analytics and data processing will dominate

SLIDE 13

Emerging Rack Architecture

§ Can we leverage emerging merchant switch silicon and newly proposed optical transceivers and switches to treat the entire data center as a single logical computer?

(Figure: per-server building blocks: multiple cores with per-core caches, shared L3 cache, 10s of GB of DRAM over DDR3-1600 (~100 Gb/s, ~5 ns), PCIe 3.0 x16 (~128 Gb/s, ~250 ns), TBs of local storage, 2 × 10 GigE NICs, and a 24-port 40 GigE switch with ~150 ns latency.)

SLIDE 14

Amdahl’s (Lesser Known) Law

§ Balanced systems for parallel computing
§ For every 1 MHz of processing power, a balanced system needs (per Amdahl, in the late 1960s):

  • 1 MB of memory
  • 1 Mbit/sec of I/O

§ Fast forward to 2012

  • 4 × 2.5 GHz processors, 8 cores
  • 30-60 GHz of processing power (not that simple!)
  • 24-64 GB memory
  • But 1 Gb/sec of network bandwidth??

§ Deliver 40 Gb/s bandwidth to 100k servers?

  • 4 Pb/sec of bandwidth required today (see the arithmetic sketch below)
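
A back-of-the-envelope check of the balance argument, using only the numbers on this slide (a sketch, not a measurement):

```python
# Back-of-the-envelope check of Amdahl's balance rule using the slide's numbers.
# Rule of thumb: ~1 Mbit/s of I/O (and ~1 MB of memory) per 1 MHz of compute.

compute_ghz_low, compute_ghz_high = 30, 60   # slide's estimate for a 2012 server
nic_gbps = 1                                 # typically provisioned network bandwidth

print(f"balanced network I/O: {compute_ghz_low}-{compute_ghz_high} Gb/s, "
      f"provisioned: {nic_gbps} Gb/s -> {compute_ghz_low}x-{compute_ghz_high}x shortfall")

# Aggregate bandwidth needed to deliver 40 Gb/s to each of 100k servers:
servers, per_server_gbps = 100_000, 40
print(f"aggregate: {servers * per_server_gbps / 1e6:.0f} Pb/s")   # ~4 Pb/s
```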
SLIDE 15

Sort as Instance of Balanced Systems

§ Hypothesis: significant efficiency is lost in systems that bottleneck on one resource
§ Sort as example
§ GraySort 2009 record

  • 100 TB in 173 minutes on 3,452 servers
  • ~22.3 Mb/s/server

§ Out-of-core sort: 2 reads and 2 writes required
§ What would it take to sort at 3.2 Gb/s/server? (see the sketch below)

  • 4 × 100 MB/sec/node with 16 × 500 GB disks per server
  • 100 TB in 83 minutes on 50 servers?
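
A quick sketch of where the 3.2 Gb/s/server and 83-minute figures come from; the ~100 MB/s per-disk streaming rate is an assumption, the rest comes from the slide:

```python
# Why 3.2 Gb/s/server: an out-of-core sort touches each byte four times
# (2 reads + 2 writes), so sort throughput is disk bandwidth divided by 4.

disks_per_server = 16
disk_mb_per_s = 100                   # assumed sequential rate per disk
passes = 4                            # 2 reads + 2 writes

sort_mb_per_s = disks_per_server * disk_mb_per_s / passes    # 400 MB/s
print(f"sort rate: {sort_mb_per_s:.0f} MB/s = {sort_mb_per_s * 8 / 1000:.1f} Gb/s per server")

# Time to sort 100 TB on 50 such servers:
total_tb, servers = 100, 50
seconds = total_tb * 1e6 / (servers * sort_mb_per_s)          # TB -> MB
print(f"100 TB on {servers} servers: ~{seconds / 60:.0f} minutes")
```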
SLIDE 16

TritonSort Phase 1

(Figure: Phase 1 pipeline: Reader → NodeDistributor → Sender → (network) → Receiver → LogicalDiskDistributor → Writer, reading from the input disks and writing to the intermediate disks, with a dedicated buffer pool feeding each stage.)

Map and Shuffle

SLIDE 17

TritonSort Phase 2

(Figure: Phase 2 pipeline: Reader → Sorter → Writer, reading each logical disk from the intermediate disks, sorting it in memory, and writing sorted runs to the output disks, again with per-stage buffer pools.)

Reduce

SLIDE 18

Reverse Engineering the Pipeline

§ Goal: minimize number of logical disks

  • Phase 2: read, sort, write (repeat)
  • One sorter per core
  • Need 24 buffers (3 per core)
  • ~20 GB/server: ~830 MB per logical disk
  • 2 TB / 830 MB per logical disk → ~2,400 logical disks

§ Long pole in phase 1: the LogicalDiskDistributor must buffer sufficient data per logical disk for a streaming write

  • ~18 GB / 2,400 logical disks = 7.5 MB buffer
  • ~15% seek penalty

(These numbers are recomputed in the sketch below.)
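
The logical-disk sizing above can be recomputed directly; a minimal sketch using the slide's numbers:

```python
# Recomputing the slide's logical-disk numbers (a sketch; sizes are approximate).

buffer_ram_gb = 20          # phase-2 buffer memory per server (per slide)
sorters = 8                 # one sorter per core
buffers = 3 * sorters       # 3 buffers per core -> 24 in-flight buffers

logical_disk_mb = buffer_ram_gb * 1000 / buffers              # ~830 MB per logical disk
data_per_server_tb = 2
logical_disks = data_per_server_tb * 1e6 / logical_disk_mb    # ~2,400

phase1_ram_gb = 18
ldd_buffer_mb = phase1_ram_gb * 1000 / logical_disks          # ~7.5 MB per write buffer
print(f"{logical_disks:.0f} logical disks, {logical_disk_mb:.0f} MB each, "
      f"{ldd_buffer_mb:.1f} MB LogicalDiskDistributor buffer")
```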
SLIDE 19

Balanced Systems Really Do Matter

§ Balancing network and I/O results in huge efficiency improvements

  • How much is a factor of 100 improvement worth in terms of cost?
  • "TritonSort: A Balanced Large-scale Sorting System," Rasmussen, et al., NSDI 2011.

System               Duration   Aggr. rate   Servers   Rate/server
Yahoo (100 TB)       173 min    9.6 GB/s     3,452     2.8 MB/s
TritonSort (100 TB)  107 min    15.6 GB/s    52        300 MB/s
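
The per-server rates in the table make the factor-of-100 claim concrete; a quick recomputation from the table's own numbers:

```python
# Per-server rates implied by the table above (a sanity check, not new data).

def rate_per_server(tb, minutes, servers):
    mb = tb * 1e6
    return mb / (minutes * 60) / servers          # MB/s per server

yahoo = rate_per_server(100, 173, 3452)           # ~2.8 MB/s/server
tritonsort = rate_per_server(100, 107, 52)        # ~300 MB/s/server
print(f"Yahoo: {yahoo:.1f} MB/s/server, TritonSort: {tritonsort:.0f} MB/s/server, "
      f"ratio ~{tritonsort / yahoo:.0f}x")
```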

SLIDE 20

TritonSort Results

§ http://www.sortbenchmark.org
§ Hardware

  • HP DL-380 2U servers, 8 × 2.5 GHz cores, 24 GB RAM, 16 × 500 GB disks, 2 × 10 Gb/s Myricom NICs
  • 52-port Cisco Nexus 5020 switch

§ Results 2010

  • GraySort: 100 TB in 123 mins / 48 nodes, 2.3 Gb/s/server
  • MinuteSort: 1,014 GB in 59 secs / 52 nodes, 2.6 Gb/s/server

§ Results 2011

  • GraySort: 100 TB in 107 mins / 52 nodes, 2.4 Gb/s/server
  • MinuteSort: 1,353 GB in 1 min / 52 nodes, 3.5 Gb/s/server
  • JouleSort: 9,700 records/Joule
SLIDE 21

Generalizing TritonSort – Themis-MR

§ TritonSort is very constrained

  • 100-byte records, even key distribution

§ Can we generalize with the same performance?

  • MapReduce is the natural choice: map → sort → reduce

§ Skew:

  • Partition, compute, record size, …
  • Memory management now hard

§ Task-level to job-level fault tolerance for performance

  • Long tail of small- to medium-sized jobs
  • ≤ 1 PB of data
SLIDE 22

Current Status

§ Themis-MR outperforms Hadoop 1.0 by ~8x on a 28-node, 14 TB GraySort

  • 30 minutes vs. 4 hours

§ Implementations of CloudBurst, PageRank, and Word Count being evaluated
§ Alpha version won the 2011 Daytona GraySort

  • Beat the previous record holder by 26% with ~1/70th of the nodes
SLIDE 23

Driver: Nonblocking Multistage Datacenter Topologies

  • M. Al-Fares, A. Loukissas, A. Vahdat. A Scalable, Commodity Data Center Network Architecture. In SIGCOMM '08.

(Figure: example fat-tree topology, k = 4, n = 3.)

SLIDE 24

Scalability Using Identical Network Elements

(Figure: fat-tree diagram: core switches above Pods 0-3.)

§ Fat tree built from 4-port switches

SLIDE 25

Scalability Using Identical Network Elements

(Figure: fat-tree diagram: core switches above Pods 0-3.)

§ Supports 16 hosts organized into 4 pods

  • Each pod is a 2-ary 2-tree
  • Full bandwidth among pod-connected hosts
SLIDE 26

Scalability Using Identical Network Elements

(Figure: fat-tree diagram: core switches above Pods 0-3.)

§ Full bisection bandwidth at each level of the fat tree

  • Rearrangeably nonblocking
  • The entire fat tree is a 2-ary 3-tree
SLIDE 27

Scalability Using Identical Network Elements

(Figure: fat-tree diagram: core switches above Pods 0-3.)

§ (5k²/4) k-port switches support k³/4 hosts (see the sketch below)

  • 48-port switches: 27,648 hosts using 2,880 switches

§ Critically, the approach scales to 10 GigE at the edge
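
A minimal sketch of the fat-tree sizing formulas above (the k = 48 case reproduces the slide's 27,648 hosts and 2,880 switches):

```python
# Fat-tree sizing from k-port switches (formulas from the slide).

def fat_tree(k):
    hosts = k ** 3 // 4
    switches = 5 * k ** 2 // 4        # k^2/4 core switches + k pods of k switches
    return hosts, switches

for k in (4, 24, 48):
    hosts, switches = fat_tree(k)
    print(f"k={k:2d}: {hosts:,} hosts, {switches:,} {k}-port switches")
# k=48: 27,648 hosts, 2,880 switches, matching the slide
```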

SLIDE 28

Scalability Using Identical Network Elements

(Figure: fat-tree diagram: core switches above Pods 0-3.)

§ Regular structure simplifies design of network protocols
§ Opportunities: performance, cost, energy, fault tolerance, incremental scalability, etc.

SLIDE 29

Problem - 10 Tons of Cabling

§ 55,296 Cat-6 cables
§ 1,128 separate cable bundles
§ If optics used for transport, transceivers are ~80% of cost of interconnect

The “Yellow Wall”

SLIDE 30

Our Work

§ Switch Architecture [SIGCOMM 08]
§ Cabling, Merchant Silicon [Hot Interconnects 09]
§ Virtualization, Layer 2, Management [SIGCOMM 09, SOCC 11a]
§ Routing/Forwarding [NSDI 10]
§ Hybrid Optical/Electrical Switch [SIGCOMM 10, SOCC 11b]
§ Applications [NSDI 11, FAST 12]
§ Low latency communication [NSDI 12, ongoing]
§ Transport Layer [EuroSys 12, ongoing]
§ Wireless augmentation [SIGCOMM 12]

SLIDE 31

Vignette 2: Software Defined Networking Needs Data Management

SLIDE 32

Network Protocols Past and Future

§ Historically, the goal of network protocols has been to eliminate centralization

  • Every network element should act autonomously, using local information to effect global targets for fault tolerance, performance, policy, and security
  • The Internet probably would not have happened without such decentralized control

§ Recent trends toward Software Defined Networking

  • Deeper understanding of how to build scalable, fault tolerant, logically centralized services
  • Majority of network elements and bandwidth in data centers under the control of a single entity
  • Requirements for virtualization and global policy
SLIDE 33

Software Defined Networking (SDN)

§ Separate the control plane from the data plane
§ The Open Networking Foundation and the OpenFlow protocol are leading the charge to enable SDN

  • OFC → OFA API?

(Figure: an OpenFlow Controller (OFC) hosting a routing service, a VM manager, and external protocols, controlling an OpenFlow Agent (OFA) on each switch.)
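
To make the control/data split concrete, here is a minimal sketch of the OFC → OFA relationship implied by the figure: a logically centralized controller computes forwarding state and pushes it to a thin agent on each switch. All names (Controller, SwitchAgent, FlowEntry) are illustrative, not the actual OpenFlow API:

```python
# Minimal sketch of the OFC -> OFA split: the controller owns route computation,
# the per-switch agent only installs the flow entries it is told to.
# Class and method names here are illustrative, not OpenFlow's real API.

from dataclasses import dataclass

@dataclass(frozen=True)
class FlowEntry:
    match_prefix: str      # e.g. "10.1.2.0/24"
    out_port: int

class SwitchAgent:                      # the "OFA": no routing logic of its own
    def __init__(self, name):
        self.name, self.table = name, []
    def install(self, entry: FlowEntry):
        self.table.append(entry)

class Controller:                       # the "OFC": logically centralized control
    def __init__(self, agents):
        self.agents = agents
    def push_routes(self, routes):      # routes computed by e.g. a routing service
        for agent, entries in routes.items():
            for e in entries:
                self.agents[agent].install(e)

agents = {"s1": SwitchAgent("s1"), "s2": SwitchAgent("s2")}
ofc = Controller(agents)
ofc.push_routes({"s1": [FlowEntry("10.1.2.0/24", out_port=3)],
                 "s2": [FlowEntry("10.1.2.0/24", out_port=1)]})
print(agents["s1"].table)
```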

SLIDE 34

SDN Challenges

§ Control plane replication, fault tolerance, scale
§ No fate sharing between control and data plane
§ Configuration management

  • When a new router comes online or the topology changes, how do we push information to the right places?

§ Network management: adaptively drill down to retrieve appropriate network state
§ Virtualization and multiple control planes
§ All the challenges of large-scale distributed databases
§ State of the art: CSV files and Perl scripts

SLIDE 35

Google’s Software Defined WAN Architecture

SLIDE 36

ATLAS 2010 Traffic report

Posted on Monday, October 25th, 2010

"Google Sets New Internet Traffic Record" by Craig Labovitz

This month, Google broke an equally impressive Internet traffic record, gaining more than 1% of all Internet traffic share since January. If Google were an ISP, as of this month it would rank as the second largest carrier on the planet. Only one global tier-1 provider still carries more traffic than Google (and this ISP also provides a large portion of Google's transit).

SLIDE 37

Cloud Computing Requires Massive Wide-Area Bandwidth

  • Low latency access from a global audience and the highest levels of availability
  • Vast majority of data migrating to the cloud
  • Data must be replicated at multiple sites
  • WAN unit costs are decreasing rapidly
  • But not quickly enough to keep up with the even faster increase in WAN bandwidth demand

SLIDE 38

WAN Cost Components

§ Hardware

  • Routers
  • Transport gear
  • Fiber

§ Overprovisioning

  • Shortest path routing
  • Slow convergence time
  • Maintain SLAs despite failures
  • No traffic differentiation

§ Operational expenses / human costs

  • Box-centric versus fabric-centric views
SLIDE 39

Why Software Defined WAN

§ Separate hardware from software

  • Choose hardware based on necessary features
  • Choose software based on protocol requirements

§ Logically centralized network control

  • Automation: separate monitoring, management, and operation from individual boxes

§ Flexibility and innovation

Result: A WAN that is more efficient, higher performance, more fault tolerant, and cheaper

SLIDE 40

A Warehouse-Scale-Computer (WSC) Network

(Figure: Google data centers connect through Google edge sites to carrier/ISP edges.)

SLIDE 41

Google's WAN

  • Two backbones
  • I-Scale: Internet facing (user traffic)
  • G-Scale: datacenter traffic (internal)
  • Widely varying requirements: loss sensitivity, topology, availability, etc.
  • Widely varying traffic characteristics: smooth/diurnal vs. bursty/bulk

SLIDE 42

Google's Software Defined WAN

SLIDE 43

G-Scale Network Hardware

  • Built from merchant silicon
  • 100s of ports of nonblocking 10GE
  • OpenFlow support
  • Open source routing stacks for BGP, ISIS
  • Does not have all features (no support for AppleTalk...)
  • Multiple chassis per site
  • Fault tolerance
  • Scale to multiple Tbps
SLIDE 44

G-Scale WAN Deployment

  • Multiple switch chassis in each domain
  • Custom hardware running Linux
  • Quagga BGP stack, ISIS/IBGP for internal connectivity

(Figure: two data center networks connected across the WAN.)

SLIDE 45

Mixed SDN Deployment

(Figure: a cluster border router connects the data center network to the WAN, with EBGP at the edge and IBGP/ISIS to remote sites. Not representative of actual topology.)

SLIDE 46

Mixed SDN Deployment

(Figure: as above, now with a Quagga routing stack and OFC glue running as a Paxos-replicated controller alongside the cluster border router.)

SLIDE 47

Mixed SDN Deployment

(Figure: as above, with OpenFlow Agents (OFAs) on the switches under the Paxos-replicated Quagga/OFC controller; EBGP and IBGP/ISIS still run to remote sites.)

SLIDE 48

Mixed SDN Deployment

  • The SDN site delivers full interoperability with legacy sites

(Figure: as above, with OFAs deployed on all switches in the site.)

SLIDE 49

Mixed SDN Deployment

  • Ready to introduce new functionality, e.g., traffic engineering (TE)

(Figure: as above, with Quagga, the OFC, and an RCS replicated via Paxos, and a central TE Server connected to the site.)

SLIDE 50


Bandwidth Broker and Traffic Engineering

SLIDE 51

High Level Architecture

(Figure: high-level architecture: the control plane comprises a B/W Broker and TE Server handling TE and bandwidth allocation plus collection/enforcement, fed by the traffic sources for the WAN, and connected via an SDN Gateway and SDN API to the SDN WAN of N sites in the data plane.)

SLIDE 52

High Level Architecture

(Figure: high-level architecture, as above.)

SLIDE 53

Bandwidth Broker Architecture

(Figure: per-data-center Site Brokers (optional) roll demands up to a Global Broker, which combines usage limits, a network model, and admin policies to produce the global demand sent to the TE Server.)

SLIDE 54

High Level Architecture

(Figure: high-level architecture, as above.)

SLIDE 55

TE Server Architecture

(Figure: TE Server architecture: the Global Broker supplies a demand matrix {src, dst → utility curve}; per-site OFCs (S1 … Sn) report site-level edges with RTT and capacity, plus interface up/down status, through the Gateway; inside the TE Server, a Flow Manager, Topology Manager, and Path Allocation Algorithm perform path selection and emit an abstract path assignment {src, dst → paths and weights}, which the Gateway turns into per-site path manipulation commands for the devices.)
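
A sketch of the data structures named in the figure: the demand matrix keyed by (src, dst) site pairs mapping to a utility curve, and the TE Server's output, a weighted set of site-level paths per pair. The types and the toy allocation below are illustrative assumptions, not Google's implementation:

```python
# Illustrative types for the TE abstractions in the figure.

from typing import Dict, List, Tuple

SitePair = Tuple[str, str]
UtilityCurve = List[Tuple[float, float]]        # (bandwidth in Gb/s, utility) points
PathAssignment = List[Tuple[List[str], float]]  # ([site hops], weight); weights sum to 1

demand: Dict[SitePair, UtilityCurve] = {
    ("dc-a", "dc-c"): [(10, 1.0), (50, 1.5), (100, 1.6)],   # diminishing utility
}

def assign_paths(demand: Dict[SitePair, UtilityCurve]) -> Dict[SitePair, PathAssignment]:
    """Stand-in for the path allocation algorithm: split each pair's traffic
    across the direct path and one detour, 70/30."""
    assignment = {}
    for (src, dst) in demand:
        direct = [src, dst]
        detour = [src, "dc-b", dst]     # hypothetical intermediate site
        assignment[(src, dst)] = [(direct, 0.7), (detour, 0.3)]
    return assignment

print(assign_paths(demand))
```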

SLIDE 56

High Level Architecture

(Figure: high-level architecture, as above.)

SLIDE 57

Controller Architecture

(Figure: controller architecture: within a site, the OFC hosts a Routing (Quagga) app and a Tunneling app; it exchanges TE ops and topology/routes with the TE Server / SDN Gateway, and installs flows via the OFAs into the hardware tables of the switches in the data center.)

SLIDE 58

Controller Architecture

(Figure: multiple sites (Site 1, Site 2, Site 3) interconnected over the WAN; the SDN Gateway and TE Server install tunnels between sites alongside the non-TE (ISIS) path.)

SLIDE 59

Sample Utilization

SLIDE 60

Benefits of Aggregation

SLIDE 61

Convergence under Failures

(Figure: the TE Server steers traffic across sites A, B, and C; on a failure notification, new ops replace the old tunnel with a new tunnel.)

  • Without TE: traffic drop ~9 sec
  • With TE: traffic drop ~1 sec

Without TE, failure detection and convergence is slower:

  • Delay "inside" TE << the timers for detecting and communicating failures (in ISIS)
  • Fast failover may take milliseconds, but is not guaranteed to be either accurate or "good"

SLIDE 62

G-Scale WAN History

(Timeline: exit testing → "opt in" network → SDN rollout → SDN fully deployed → central TE deployed.)

SLIDE 63

Range of Failure Scenarios

(Figure: redundant control plane: TE servers TE1* and TE2, gateways GW1 and GW2, and per-site OFC pairs (OFC1*, OFC2) in front of the site routers; * indicates mastership, and each element or link is a potential failure condition.)

SLIDE 64

Trust but Verify: Consistency Checks

TE View   OFC View   Valid?   Comment
Clean     Clean      yes      Normal operation
Clean     Dirty      no       OFC remains dirty forever
Clean     Missing    no       OFC will forever miss the entry
Dirty     Dirty      yes      Both think the op failed
Dirty     Clean      yes      Op succeeded but response not yet received by TE
Dirty     Missing    yes      Op issued but not received by OFC
Missing   Clean      no       OFC has an extra entry, and will remain like that
Missing   Dirty      no       (same as above)
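
The consistency check in the table reduces to a small lookup over the (TE view, OFC view) pair; a sketch directly transcribing the table:

```python
# The "trust but verify" check from the table, as a lookup: given how TE and the
# OFC each see a tunnel op (clean / dirty / missing), is the combined state valid?

VALID = {
    ("clean",   "clean"):   True,    # normal operation
    ("clean",   "dirty"):   False,   # OFC remains dirty forever
    ("clean",   "missing"): False,   # OFC will forever miss the entry
    ("dirty",   "dirty"):   True,    # both think the op failed
    ("dirty",   "clean"):   True,    # op succeeded, response not yet seen by TE
    ("dirty",   "missing"): True,    # op issued but not yet received by OFC
    ("missing", "clean"):   False,   # OFC has an extra entry it will never drop
    ("missing", "dirty"):   False,   # same as above
}

def check(te_view: str, ofc_view: str) -> bool:
    return VALID[(te_view.lower(), ofc_view.lower())]

assert check("Dirty", "Clean")        # transient, will converge
assert not check("Missing", "Clean")  # inconsistent, needs repair
```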

SLIDE 65

Implications for ISPs

  • Dramatically reduces the cost of WAN deployment
  • Cheaper per bps in both CapEx and OpEx
  • Less overprovisioning for the same SLAs
  • Differentiator for end customers
  • Less cost for the same bandwidth, or more bandwidth for the same cost
  • Possible to deploy incrementally in a pre-existing network
  • Deployment experience with Google's global SDN production WAN suggests SDN is real and it works
  • But it's just the beginning
SLIDE 66

Conclusions

§ Large-scale data processing needs scale out networking

  • Unlocking the potential of modern server hardware for at-scale problems requires orders-of-magnitude improvement in network performance

§ Scale out networking requires large-scale data management

  • Experience with Google's SDN WAN suggests that logically centralized state management is critical for cost-effective deployment and management
  • Still in the stone ages in dynamically managing state and getting updates to the right places in the network

SLIDE 67

Thank you!