Edge Fabric: Delivering Oceans of Content to the World

Brandon Schlinker 1,2, Hyojeong Kim 1, Timothy Cui 1, Ethan Katz-Bassett 2,3, Harsha V. Madhyastha 4, Italo Cunha 5, James Quinn 1, Saif Hasan 1, Petr Lapukhov 1, James Hongyi Zeng 1


slide-1
SLIDE 1

IETF 104, March 2019

Edge Fabric:

Delivering Oceans of Content to the World

Brandon Schlinker 1,2

Hyojeong Kim 1, Timothy Cui 1, Ethan Katz-Bassett 2,3, Harsha V. Madhyastha 4, Italo Cunha 5

James Quinn 1, Saif Hasan 1, Petr Lapukhov 1, James Hongyi Zeng 1

1 Facebook, 2 University of Southern California, 3 Columbia University, 4 University of Michigan, 5 Universidade Federal de Minas Gerais

slide-2
SLIDE 2

Facebook's Global Network

points of presence around the world interconnect with thousands of networks

slide-3
SLIDE 3

Benefits of Rich Interconnection

short, direct path

Tier 1

short, direct paths bypass transit providers

slide-4
SLIDE 4

Benefits of Rich Interconnection

short, direct path

short, direct paths bypass transit providers

Tier 1

multiple, diverse paths

substantial path diversity

slide-5
SLIDE 5

Basics of Interconnection

slide-6
SLIDE 6

Basics of Interconnection

slide-7
SLIDE 7

Basics of Interconnection

Edge Router

slide-8
SLIDE 8

Basics of Interconnection

Edge Router

Network A Network B

1. Establish physical circuits

slide-9
SLIDE 9

Basics of Interconnection

1. Establish physical circuits

BGP

Network A Network B

slide-10
SLIDE 10

Basics of Interconnection

1. Establish physical circuits; 2. Exchange reachability information via BGP

BGP

Network A Network B

slide-11
SLIDE 11

Basics of Interconnection

1. Establish physical circuits; 2. Exchange reachability information via BGP

BGP

Network A Network B

slide-12
SLIDE 12

Basics of Interconnection

1. Establish physical circuits; 2. Exchange reachability information via BGP

BGP

Network A Network B

slide-13
SLIDE 13

Basics of Interconnection

1. Establish physical circuits; 2. Exchange reachability information via BGP

BGP

Network A Network B

slide-14
SLIDE 14

Basics of Interconnection

1. Establish physical circuits; 2. Exchange reachability information via BGP

BGP

Network A Network B

203.0.113.0/24

slide-15
SLIDE 15

Basics of Interconnection

1. Establish physical circuits
2. Exchange reachability information via BGP: Network A advertises Route 1 to 203.0.113.0/24

slide-16
SLIDE 16

Basics of Interconnection

1. Establish physical circuits
2. Exchange reachability information via BGP: Network A and Network B each advertise a route to 203.0.113.0/24 (Route 1 and Route 2)

slide-17
SLIDE 17

Basics of Interconnection

1. Establish physical circuits
2. Exchange reachability information via BGP: Network A and Network B each advertise a route to 203.0.113.0/24 (Route 1 and Route 2)

slide-18
SLIDE 18

Basics of Interconnection

1. Establish physical circuits
2. Exchange reachability information via BGP: Network A and Network B each advertise a route to 203.0.113.0/24 (Route 1 and Route 2)
3. BGP at the router selects which route to use (selected route: Route 1)
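The selection step above can be sketched as a simplified best-path comparison. This Python sketch is illustrative only: the `Route` fields and the two tie-breakers shown are a small subset of real BGP decision logic.

```python
# Simplified sketch of BGP best-path selection among candidate routes
# for one prefix. Real BGP has many more tie-breakers (origin, MED,
# IGP cost, router ID, ...); the attributes here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Route:
    prefix: str
    next_hop: str                    # advertising neighbor
    local_pref: int = 100            # higher is preferred
    as_path: List[int] = field(default_factory=list)  # shorter is preferred

def best_path(routes: List[Route]) -> Route:
    # Prefer highest local_pref, then shortest AS path.
    return max(routes, key=lambda r: (r.local_pref, -len(r.as_path)))

route1 = Route("203.0.113.0/24", "Network A", local_pref=200, as_path=[64500])
route2 = Route("203.0.113.0/24", "Network B", local_pref=100, as_path=[64501, 64500])
selected = best_path([route1, route2])   # Route 1 wins on local_pref
```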

slide-19
SLIDE 19

Challenges to Using Our Connectivity

Objective

deliver traffic with the best performance possible

slide-20
SLIDE 20

Challenges to Using Our Connectivity

Objective

deliver traffic with the best performance possible

challenge

BGP does not consider demand, capacity or performance

slide-21
SLIDE 21

BGP Does Not Consider Demand and Capacity

Route A: Tier 1, 100 Gbps capacity
Route B: ISP, 10 Gbps capacity
5 Gbps demand at the router

slide-22
SLIDE 22

BGP Does Not Consider Demand and Capacity

Route A: Tier 1, 100 Gbps capacity
Route B: ISP, 10 Gbps capacity, 5 Gbps load | selected by BGP
5 Gbps demand at the router

slide-23
SLIDE 23

BGP Does Not Consider Demand and Capacity

Route A: Tier 1, 100 Gbps capacity
Route B: ISP, 10 Gbps capacity, 12 Gbps load (overloaded) | selected by BGP
12 Gbps demand at the router

Cannot configure BGP to adapt to demand/capacity in real time; not possible to express with BGP policy terms.

slide-24
SLIDE 24

BGP Does Not Consider Performance

Route A: Tier 1, best performance
Route B: ISP, poor performance (+50 ms, 2% loss)
5 Gbps demand at the router

slide-25
SLIDE 25

BGP Does Not Consider Performance

Route A: Tier 1, best performance
Route B: ISP, poor performance (+50 ms, 2% loss) | selected by BGP
5 Gbps demand at the router

Cannot configure BGP to adapt to performance in real time; not possible to express with BGP policy terms.

slide-26
SLIDE 26

BGP is fundamental to interconnection and it's not going away

slide-27
SLIDE 27

Sidestepping BGP's Limitations

Objective

deliver traffic with the best performance possible

challenge

BGP does not consider demand, capacity or performance

approach

shift control from BGP at routers to a software controller

slide-28
SLIDE 28

Outline

1. Overview

slide-29
SLIDE 29

Outline

1. Overview
2. Facebook's Connectivity and Challenges

slide-30
SLIDE 30

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric

slide-31
SLIDE 31

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric
4. Results from Edge Fabric's Behavior in Production

slide-32
SLIDE 32

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric
4. Results from Edge Fabric's Behavior in Production
5. Evolution and Related Work

slide-33
SLIDE 33

Connectivity at a Point of Presence (PoP)

Transit Providers: deliver traffic to the entire Internet; two or more per PoP; interconnection via private circuit

slide-34
SLIDE 34

Peers

end-user ISPs, mobile providers

Connectivity at a Point of Presence (PoP)

Transit Providers

deliver traffic to entire Internet

Two or more # per PoP

Private Peers

Interconnection Private circuit Tens Private circuit

slide-35
SLIDE 35

Connectivity at a Point of Presence (PoP)

Transit Providers: deliver traffic to the entire Internet; two or more per PoP; private circuit
Private Peers (end-user ISPs, mobile providers): tens per PoP; private circuit
IXP Peers (end-user ISPs, mobile providers): hundreds per PoP; shared fabric via an Internet Exchange Point
slide-36
SLIDE 36

We prefer routes from private peers > IXP peers > transits

slide-37
SLIDE 37

We prefer routes from private peers > IXP peers > transits

peers > transits

peers provide short, direct paths to end users

slide-38
SLIDE 38

We prefer routes from private peers > IXP peers > transits

private > IXP peers

prefer circuits dedicated to Facebook and the peer

peers > transits

peers provide short, direct paths to end users
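This preference order is typically encoded as BGP local preference assigned at route import. A minimal sketch, assuming hypothetical tier names and values (only their relative order matters):

```python
# Sketch: encode "private peers > IXP peers > transits" as BGP
# local-preference values assigned by an import policy. The numeric
# values are illustrative; only their relative order matters.
LOCAL_PREF = {
    "private_peer": 300,  # most preferred: circuit dedicated to us and the peer
    "ixp_peer": 200,      # shared fabric at an Internet Exchange Point
    "transit": 100,       # least preferred: reaches the entire Internet
}

def import_policy(route: dict, neighbor_type: str) -> dict:
    # Return a copy of the route tagged with its local preference.
    tagged = dict(route)
    tagged["local_pref"] = LOCAL_PREF[neighbor_type]
    return tagged

candidates = [
    import_policy({"prefix": "203.0.113.0/24", "via": "transit-1"}, "transit"),
    import_policy({"prefix": "203.0.113.0/24", "via": "ixp-peer-1"}, "ixp_peer"),
    import_policy({"prefix": "203.0.113.0/24", "via": "private-peer-1"}, "private_peer"),
]
best = max(candidates, key=lambda r: r["local_pref"])  # the private peer wins
```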

slide-39
SLIDE 39

Connectivity at a Point of Presence (PoP)

Transit Providers: deliver traffic to the entire Internet; two or more per PoP; private circuit
Private Peers (end-user ISPs, mobile providers): tens per PoP; private circuit
IXP Peers (end-user ISPs, mobile providers): hundreds per PoP; shared fabric via an Internet Exchange Point

Peers carry the majority of traffic.

slide-40
SLIDE 40

We cannot acquire sufficient capacity with private peers to satisfy demand

slide-41
SLIDE 41

We cannot acquire sufficient capacity with private peers to satisfy demand

Private peer: 10 Gbps capacity, 12 Gbps load (overloaded) | selected by BGP
12 Gbps demand at the router

slide-42
SLIDE 42

Why not just acquire more peering capacity?

slide-43
SLIDE 43

Why not just acquire more peering capacity?

Peers often cannot provision capacity (technical constraints, business constraints)

slide-44
SLIDE 44

Why not just acquire more peering capacity?

Peers often cannot provision capacity (technical constraints, business constraints). Even when peers agree to add capacity, provisioning can be slow (months), leaving little headroom for traffic bursts or circuit failures.

slide-45
SLIDE 45

How bad is the problem?

Why not just acquire more peering capacity?

slide-46
SLIDE 46

Capacity Constraints in Production

In a two-day study of 20 PoPs (a subset of production), we identified circuits predicted to have demand > capacity.

slide-47
SLIDE 47

Capacity Constraints in Production

In a two-day study of 20 PoPs (a subset of production), we identified circuits predicted to have demand > capacity.

17 out of 20 PoPs had at least one such circuit

slide-48
SLIDE 48

Capacity Constraints in Production

In a two-day study of 20 PoPs (a subset of production), we identified circuits predicted to have demand > capacity.

17 out of 20 PoPs had at least one such circuit; 18% of all circuits

slide-49
SLIDE 49

Capacity Constraints in Production

[Figure: CDF of each circuit's peak demand relative to capacity (x-axis 0.5 to 5), for circuits predicted to have demand > capacity at least once]

slide-50
SLIDE 50

Capacity Constraints in Production

[Figure: CDF of each circuit's peak demand relative to capacity, for circuits predicted to have demand > capacity at least once]

50% of circuits had peak demand ≥ 1.19x capacity

slide-51
SLIDE 51

Capacity Constraints in Production

[Figure: CDF of each circuit's peak demand relative to capacity, for circuits predicted to have demand > capacity at least once]

50% of circuits had peak demand ≥ 1.19x capacity
10% of circuits had peak demand ≥ 2x capacity
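The per-circuit statistic behind this figure is peak demand divided by capacity over the measurement window. A small sketch with made-up sample data (the study itself used two days of production measurements):

```python
# Sketch: compute each circuit's peak-demand-to-capacity ratio over a
# measurement window, then collect the overloaded ones. The sample
# data below is made up for illustration.
def peak_ratio(demand_samples_gbps, capacity_gbps):
    return max(demand_samples_gbps) / capacity_gbps

circuits = {
    # circuit: (demand samples in Gbps, capacity in Gbps)
    "peer-1": ([4.0, 9.0, 12.0], 10.0),   # peaks at 1.2x capacity
    "peer-2": ([5.0, 21.0, 18.0], 10.0),  # peaks at 2.1x capacity
    "peer-3": ([1.0, 3.0, 6.0], 10.0),    # never exceeds capacity
}

ratios = {name: peak_ratio(d, c) for name, (d, c) in circuits.items()}
# Circuits where demand exceeded capacity at least once, sorted by ratio;
# quantiles of this list give the CDF points reported on the slide.
overloaded = sorted(r for r in ratios.values() if r > 1.0)
```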

slide-52
SLIDE 52

Recall: BGP Does Not Consider Demand and Capacity. 10% of overloaded circuits had peak demand ≥ 2x capacity.

slide-53
SLIDE 53

Recall: BGP Does Not Consider Demand and Capacity. 10% of overloaded circuits had peak demand ≥ 2x capacity.

We need a better solution than BGP

so we built Edge Fabric

slide-54
SLIDE 54

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric
4. Results from Edge Fabric's Behavior in Production
5. Evolution and Related Work

slide-55
SLIDE 55

Sidestepping BGP's Limitations

Objective

deliver traffic with the best performance possible

challenge

BGP does not consider demand, capacity or performance

approach

shift control from BGP at routers to a software controller

slide-56
SLIDE 56

Design Priorities

Operational simplicity

minimize change and system complexity

slide-57
SLIDE 57

Design Priorities

Operational simplicity: minimize change and system complexity
Ease of deployment: interoperate with existing infrastructure and tooling

slide-58
SLIDE 58

Responsibility for Routing

Traditional routers: route per destination, from BGP
Host-based routing: route per packet, dictated by hosts

design priorities: operational simplicity, ease of deployment

slide-59
SLIDE 59

Responsibility for Routing

Traditional routers: route per destination, from BGP
Host-based routing: route per packet, dictated by hosts

Edge Fabric's approach: controller overrides BGP's decisions at the router; hosts provide hints on packet priority

design priorities: operational simplicity, ease of deployment

slide-60
SLIDE 60

Edge Fabric's Approach to Control

1. The router (with BGP sessions to neighbors) selects routes using BGP: Route A

slide-61
SLIDE 61

Edge Fabric's Approach to Control

1. The router selects routes using BGP: Route A
2. Edge Fabric selects ideal routes using BGP routes + additional inputs

slide-62
SLIDE 62

Edge Fabric's Approach to Control

Inputs to Edge Fabric: prefix traffic rates (e.g., 40 Gbps, 1 Gbps), route performance measurements, BGP routes (from the router), circuit capacities, advanced policy

slide-63
SLIDE 63

Edge Fabric's Approach to Control

Inputs to Edge Fabric: prefix traffic rates, route performance measurements, BGP routes (from the router), circuit capacities, advanced policy

Using these inputs, Edge Fabric selects Route B

slide-64
SLIDE 64

Edge Fabric's Approach to Control

1. The router selects routes using BGP: Route A
2. Edge Fabric selects ideal routes using BGP routes + additional inputs: Route B

slide-65
SLIDE 65

Edge Fabric's Approach to Control

1. The router selects routes using BGP: Route A
2. Edge Fabric selects ideal routes using BGP routes + additional inputs: Route B
3. If the router and Edge Fabric choose different routes, override the router: use Route B

slide-66
SLIDE 66

Types of Edge Fabric Overrides

Edge Fabric can override BGP's decision in order to...

slide-67
SLIDE 67

Types of Edge Fabric Overrides

Edge Fabric can override BGP's decision in order to...

Move traffic for a set of end-users: override per <destination> (e.g., 203.0.113.0/24; before: peering, after: transit)

slide-68
SLIDE 68

Types of Edge Fabric Overrides

Edge Fabric can override BGP's decision in order to...

Move traffic for a set of end-users: override per <destination> (e.g., 203.0.113.0/24; before: peering, after: transit)

Move a class of end-user traffic (e.g., low-priority traffic): override per <destination, traffic class> (see paper for details)

slide-69
SLIDE 69

Example Override: Preventing Congestion

Route A: Tier 1, 100 Gbps capacity, 0 Gbps load
Route B: ISP, 10 Gbps capacity, 12 Gbps load (BGP's decision)
12 Gbps demand at the router

slide-70
SLIDE 70

Example Override: Preventing Congestion

Route A: Tier 1, 100 Gbps capacity, 0 Gbps load
Route B: ISP, 10 Gbps capacity, 12 Gbps load (BGP's decision)
12 Gbps demand, composed of two prefixes

slide-71
SLIDE 71

Example Override: Preventing Congestion

Route A: Tier 1, 100 Gbps capacity, 0 Gbps load
Route B: ISP, 10 Gbps capacity, 12 Gbps load (BGP's decision)
12 Gbps demand, composed of two prefixes: 198.51.100.0/24 | 9.5 Gbps, 203.0.113.0/24 | 2.5 Gbps

slide-72
SLIDE 72

Example Override: Preventing Congestion

Route A: Tier 1, 100 Gbps capacity
Route B: ISP, 10 Gbps capacity, 9.5 Gbps load
Edge Fabric shifts a prefix's traffic to an alternate link
12 Gbps demand, composed of two prefixes: 198.51.100.0/24 | 9.5 Gbps, 203.0.113.0/24 | 2.5 Gbps

slide-73
SLIDE 73

Example Override: Preventing Congestion

Route A: Tier 1, 100 Gbps capacity, +2.5 Gbps load
Route B: ISP, 10 Gbps capacity, 9.5 Gbps load
Edge Fabric shifts 203.0.113.0/24 to Route A (destination-based override)
12 Gbps demand, composed of two prefixes: 198.51.100.0/24 | 9.5 Gbps, 203.0.113.0/24 | 2.5 Gbps
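The override in this example can be sketched as a greedy decision: while the preferred circuit's projected load exceeds its capacity, shift whole prefixes (smallest first) to the alternate route. This is an illustrative simplification, not Edge Fabric's actual allocation algorithm:

```python
# Sketch: relieve an overloaded circuit by shifting whole prefixes to
# an alternate route, smallest-first, until the projected load fits
# under capacity. Illustrative simplification of the controller logic.
def pick_overrides(prefix_rates_gbps, capacity_gbps):
    load = sum(prefix_rates_gbps.values())
    shifted = []
    # Shift smallest prefixes first to minimize traffic moved off the
    # preferred path.
    for prefix, rate in sorted(prefix_rates_gbps.items(), key=lambda kv: kv[1]):
        if load <= capacity_gbps:
            break
        shifted.append(prefix)
        load -= rate
    return shifted, load

# The slide's example: 12 Gbps of demand on a 10 Gbps circuit.
demand = {"198.51.100.0/24": 9.5, "203.0.113.0/24": 2.5}
shifted, remaining = pick_overrides(demand, capacity_gbps=10.0)
# Shifting 203.0.113.0/24 (2.5 Gbps) leaves 9.5 Gbps on the circuit.
```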

slide-74
SLIDE 74

Enacting Overrides at Routers

1. Edge Fabric injects an override route (203.0.113.0/24 via the transit route) to the edge router via BGP

slide-75
SLIDE 75

Enacting Overrides at Routers

1. Edge Fabric injects an override route (203.0.113.0/24 via the transit route) to the edge router via BGP
2. BGP at routers prefers routes injected by Edge Fabric, so the injected route replaces BGP's selected route

slide-76
SLIDE 76

Enacting Overrides at Routers

Edge Fabric monitors BGP's decisions and overrides them as needed

We gain centralized control over the distributed BGP process without removing BGP from our routers

slide-77
SLIDE 77

Edge Fabric is Flexible

inputs: circuit capacity and traffic rates, route performance measurements, BGP routes, policy
override granularities: path per <destination>, path per <destination, traffic class>

Edge Fabric supports sophisticated traffic engineering policies

slide-78
SLIDE 78

Edge Fabric Meets Our Design Priorities

Operational simplicity

Can fall back to BGP at routers
Allows operators to continue using existing tools
Synchronization is only required between Edge Fabric and routers

slide-79
SLIDE 79

Edge Fabric Meets Our Design Priorities

Ease of deployment

BGP sessions with external peers remain at routers
Uses the BGP protocol for injections
Uses other industry standards for route and traffic info (BMP, IPFIX/sFlow)

Operational simplicity

Can fall back to BGP at routers
Allows operators to continue using existing tools
Synchronization is only required between Edge Fabric and routers

slide-80
SLIDE 80

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric
4. Results from Edge Fabric's Behavior in Production
5. Evolution and Related Work

slide-81
SLIDE 81

Edge Fabric entered production in 2013. Objective: prevent circuit congestion.

slide-82
SLIDE 82

Edge Fabric in Production

Edge routers send BGP routes (via BMP) and traffic rates (via IPFIX/sFlow) to Edge Fabric; Edge Fabric sends overrides back via BGP

Runs per PoP, executes every 30 seconds
Controls 100% of Facebook's egress traffic
(see paper for implementation details)
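Putting the pieces together, one 30-second control cycle might look like the sketch below. All collector and injector callables are hypothetical stand-ins for the BMP, IPFIX/sFlow, and BGP-injection plumbing, and the shift-one-smallest-prefix step is a simplification of the real allocation logic:

```python
# Sketch of a per-PoP controller cycle: gather state, decide overrides,
# inject them via BGP. The callables passed in are hypothetical
# stand-ins for BMP, IPFIX/sFlow, and BGP-injection plumbing.
import time

def controller_cycle(get_bgp_routes, get_traffic_rates, get_capacities, inject):
    routes = get_bgp_routes()        # per prefix, e.g. learned via BMP
    rates = get_traffic_rates()      # {(prefix, circuit): Gbps}, via IPFIX/sFlow
    capacities = get_capacities()    # {circuit: Gbps}
    overrides = {}
    for circuit, capacity in capacities.items():
        load = sum(rate for (prefix, c), rate in rates.items() if c == circuit)
        if load > 0.95 * capacity:   # target ~95% utilization
            # Simplification: shift this circuit's smallest prefix to
            # its alternate route.
            prefix = min((p for (p, c) in rates if c == circuit),
                         key=lambda p: rates[(p, circuit)])
            overrides[prefix] = routes[prefix]["alternate"]
    inject(overrides)                # enacted as BGP route injections
    return overrides

def run(cycle, period_s=30):
    while True:                      # one decision round every 30 seconds
        cycle()
        time.sleep(period_s)
```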

slide-83
SLIDE 83

Target Circuit Utilization To Avoid Congestion

How much traffic should Edge Fabric remove? Circuit utilization would be 110% if all traffic were placed onto its most preferred path.

slide-84
SLIDE 84

Target Circuit Utilization To Avoid Congestion

Circuit utilization would be 110% if all traffic were placed onto its most preferred path; at 100%, packet loss occurs during bursts.

slide-85
SLIDE 85

Target Circuit Utilization To Avoid Congestion

Circuit utilization would be 110% if all traffic were placed onto its most preferred path; at 100%, packet loss occurs during bursts; at 50%, utilization is poor.

slide-86
SLIDE 86

Target Circuit Utilization To Avoid Congestion

110%: if all traffic were placed onto its most preferred path
100%: packet loss during bursts
50%: poor utilization
~95% target: high utilization with tolerance for bursts in traffic

slide-87
SLIDE 87

Evaluating Congestion Avoidance

Key questions:

Does Edge Fabric prevent circuit congestion and packet drops?
Does Edge Fabric keep circuit utilization at the prescribed threshold?

slide-88
SLIDE 88

Evaluating Congestion Avoidance

Does Edge Fabric prevent circuit congestion and packet drops? During the measurement period, when Edge Fabric was shifting traffic away: no packet drops 99.9% of the time.

slide-89
SLIDE 89

Evaluating Congestion Avoidance

Does Edge Fabric prevent circuit congestion and packet drops? During the measurement period: when Edge Fabric was shifting traffic away, no packet drops 99.9% of the time; when Edge Fabric was not active, no packet drops.

slide-90
SLIDE 90

Evaluating Congestion Avoidance

Does Edge Fabric prevent circuit congestion and packet drops? During the measurement period:
When Edge Fabric was shifting traffic away: no packet drops 99.9% of the time
When Edge Fabric was not active: no packet drops

Edge Fabric intervened when needed and prevented circuit congestion

slide-91
SLIDE 91

Evaluating Congestion Avoidance

Can we keep utilization at the threshold? We examine [circuit utilization - threshold], sampled every 30 seconds for circuits where demand > capacity.

slide-92
SLIDE 92

Evaluating Congestion Avoidance

Can we keep utilization at the threshold? We examine [circuit utilization - threshold], sampled every 30 seconds for circuits where demand > capacity.

[Figure: histogram of (circuit utilization - threshold); x-axis from -4% to 4%, y-axis % of samples]

slide-93
SLIDE 93

Evaluating Congestion Avoidance

Can we keep utilization at the threshold? We examine [circuit utilization - threshold], sampled every 30 seconds for circuits where demand > capacity.

[Figure: histogram of (circuit utilization - threshold); x-axis from -4% to 4%, y-axis % of samples; 0% marked as the ideal value]

slide-94
SLIDE 94

Evaluating Congestion Avoidance

Can we keep utilization at the threshold? We examine [circuit utilization - threshold], sampled every 30 seconds for circuits where demand > capacity.

[Figure: histogram of (circuit utilization - threshold); x-axis from -4% to 4%, y-axis % of samples; 0% marked as the ideal value; samples above 0% have utilization higher than the threshold, samples below have utilization lower]

slide-95
SLIDE 95

[Figure: histogram of (circuit utilization - threshold), sampled every 30 seconds for circuits where demand > capacity]

Can we keep utilization at the threshold? Utilization stays within 2% of the threshold.

slide-96
SLIDE 96

Edge Fabric prevents packet loss while keeping circuit utilization high

Does Edge Fabric prevent circuit congestion and packet drops? Yes. Does Edge Fabric keep circuit utilization at the prescribed threshold? Yes.

slide-97
SLIDE 97

Outline

1. Overview
2. Facebook's Connectivity and Challenges
3. Sidestepping BGP's Limitations with Edge Fabric
4. Results from Edge Fabric's Behavior in Production
5. Evolution and Related Work

slide-98
SLIDE 98

Evolution: Enacting Decisions

Initially: host-based routing. Overrides enacted by hosts; hosts signal the egress path per packet ("send via circuit X") via MPLS/DSCP/GRE. Edge Fabric sends decisions to servers, which steer packets through the routers.

slide-99
SLIDE 99

Evolution: Enacting Decisions

Initially: host-based routing. Overrides enacted by hosts; hosts signal the egress path per packet ("send via circuit X") via MPLS/DSCP/GRE.

Today: edge-based routing. Overrides enacted by routers at the edge; hosts signal the priority per packet ("video traffic") via DSCP.

slide-100
SLIDE 100

Before: Host-based routing

Evolution: Enacting Decisions

Today: Edge-based routing

Both provide the capabilities we want today

Preventing congestion, incorporating advanced policy, application-specific and performance-aware routing

slide-101
SLIDE 101

Before: Host-based routing

Evolution: Enacting Decisions

Today: Edge-based routing

Both provide the capabilities we want today

Preventing congestion, incorporating advanced policy, application-specific and performance-aware routing

Edge-based routing is best aligned with our design priorities: operational simplicity, ease of deployment

slide-102
SLIDE 102

Edge Fabric and Google's Espresso

slide-103
SLIDE 103

Edge Fabric and Google's Espresso

Both systems use BGP to exchange routes with peers

slide-104
SLIDE 104

Edge Fabric and Google's Espresso

Both systems use BGP to exchange routes with peers, and focus on centralizing control and incorporating additional inputs

slide-105
SLIDE 105

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

Edge Fabric's design priorities: operational simplicity, ease of deployment

slide-106
SLIDE 106

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

Edge Fabric's design priorities: operational simplicity, ease of deployment
Espresso's design priorities: maximum flexibility, cost savings

slide-107
SLIDE 107

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

edge device: router (Edge Fabric) | MPLS switch (Espresso)

design priorities: operational simplicity, ease of deployment (Edge Fabric) | maximum flexibility, cost savings (Espresso)

slide-108
SLIDE 108

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

edge device: router (Edge Fabric) | MPLS switch (Espresso)
enacts decisions via: BGP injections to routers (Edge Fabric) | host-based overrides (Espresso)

design priorities: operational simplicity, ease of deployment (Edge Fabric) | maximum flexibility, cost savings (Espresso)

slide-109
SLIDE 109

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

edge device: router (Edge Fabric) | MPLS switch (Espresso)
enacts decisions via: BGP injections to routers (Edge Fabric) | host-based overrides (Espresso)
role of hosts: mark packet's priority (Edge Fabric) | select packet's route (Espresso)

design priorities: operational simplicity, ease of deployment (Edge Fabric) | maximum flexibility, cost savings (Espresso)

slide-110
SLIDE 110

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

edge device: router (Edge Fabric) | MPLS switch (Espresso)
enacts decisions via: BGP injections to routers (Edge Fabric) | host-based overrides (Espresso)
role of hosts: mark packet's priority (Edge Fabric) | select packet's route (Espresso)
decision granularity: <destination, priority/class> (Edge Fabric) | packet (Espresso)

design priorities: operational simplicity, ease of deployment (Edge Fabric) | maximum flexibility, cost savings (Espresso)

slide-111
SLIDE 111

Facebook's Edge Fabric and Google's Espresso both use BGP to exchange routes with peers, centralize control, and incorporate additional inputs.

edge device: router (Edge Fabric) | MPLS switch (Espresso)
enacts decisions via: BGP injections to routers (Edge Fabric) | host-based overrides (Espresso)
role of hosts: mark packet's priority (Edge Fabric) | select packet's route (Espresso)
decision granularity: <destination, priority/class> (Edge Fabric) | packet (Espresso)
routing options: per-PoP (Edge Fabric) | global (Espresso)

design priorities: operational simplicity, ease of deployment (Edge Fabric) | maximum flexibility, cost savings (Espresso)

slide-112
SLIDE 112

BGP does not consider demand, capacity or performance

The problem has been around for a decade.

slide-113
SLIDE 113

BGP does not consider demand, capacity or performance

The problem has been around for a decade.

Scale of connectivity, traffic, and QoS demands brings new challenges and opportunities

slide-114
SLIDE 114

Conclusion

Benefits of Rich Interconnection

slide-115
SLIDE 115

Conclusion

Objective

deliver traffic with the best performance possible

slide-116
SLIDE 116

Conclusion

challenge

BGP does not consider demand, capacity or performance

Objective

deliver traffic with the best performance possible

slide-117
SLIDE 117

Conclusion

Objective

deliver traffic with the best performance possible

challenge

BGP does not consider demand, capacity or performance

With Edge Fabric, we sidestep BGP's limitations

by shifting control from routers to software

result

more efficient network, better performance for our users

slide-118
SLIDE 118
