Jellyfish: Networking Data Centers Randomly (Brighten Godfrey, UIUC)



SLIDE 1

[Photo: Kevin Raskoff]

Jellyfish

networking

data centers

randomly

Brighten Godfrey • UIUC

Cisco Systems, September 12, 2013

SLIDE 2

Ask me about...

  • Low-latency networked systems
  • Data plane verification (Veriflow)

SLIDE 3

  • Ankit Singla (UIUC)
  • Chi-Yao Hong (UIUC)
  • Kyle Jao (UIUC)
  • Sangeetha Abdu Jyothi (UIUC)

SLIDE 4

  • Ankit Singla (UIUC)
  • Chi-Yao Hong (UIUC)
  • Kyle Jao (UIUC)
  • Lucian Popa (HP Labs)
  • Alexandra Kolla (UIUC)
  • Sangeetha Abdu Jyothi (UIUC)

SLIDE 5

The need for throughput

[Images: Facebook data center growth, March 2011 vs. May 2012; Facebook, via Wired]

SLIDE 6

Difficult goals

  • High throughput with minimal cost: support big data analytics, agile placement of VMs
  • Flexible incremental expandability: easily add/replace servers & switches

SLIDE 7

Incremental expansion

  • Facebook: “adding capacity on a daily basis”
  • Reduces up-front capital expenditure
  • Commercial products expand servers, but not the network:

  • SGI Ice Cube (“Expandable Modular Data Center”)
  • HP EcoPod (“Pay-as-you-grow”)


SLIDE 8

Today’s structured networks

[Figure 2: the conventional network architecture for data centers]

[Greenberg et al, CCR Jan. 2009]

SLIDE 9

Today’s structured networks

[Figure 2: the conventional network architecture for data centers]

[Greenberg et al, CCR Jan. 2009]

SLIDE 10

Today’s structured networks

Fat tree

[Al-Fares, Loukissas, Vahdat, SIGCOMM ’08]

SLIDE 11

Today’s structured networks

Fat tree

[Al-Fares, Loukissas, Vahdat, SIGCOMM ’08]

[Figure: a three-level fat tree with edge, aggregation, and core layers; four pods (Pod 0–3) with 10.x.y.z addressing]

SLIDE 12

Today’s structured networks

Fat tree

SLIDE 13

Structure constrains expansion

Coarse design points

  • Hypercube: 2^k switches
  • de Bruijn-like: 3^k switches
  • 3-level fat tree: 5k²/4 switches

Fat trees by the numbers:

  • (3-level, with commodity 24, 32, 48, ... port switches)
  • 3456 servers, 8192 servers, 27648 servers, ...

Unclear how to maintain structure incrementally

  • Overutilize switches? Uneven / constrained bandwidth
  • Leave ports free for later? Wasted investment
SLIDE 14

Our Solution

Forget about structure – let’s have no structure at all!

SLIDE 15

Jellyfish: The Topology

SLIDE 16

Jellyfish: The Topology

  • Servers connected to a top-of-rack switch
  • Switches form uniform-random interconnections (sketched below)
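In graph terms, the switch layer is close to a uniform-random regular graph. A minimal sketch of that abstraction (networkx and all parameter values are my own illustrative choices, not from the talk):

```python
# Jellyfish-style topology sketch: a random regular graph of switches,
# with a fixed number of servers attached to each top-of-rack switch.
import networkx as nx

switches, net_ports, servers_per_switch = 180, 12, 2  # illustrative only
g = nx.random_regular_graph(d=net_ports, n=switches, seed=0)
servers = {s: [f"server-{s}-{i}" for i in range(servers_per_switch)]
           for s in g.nodes()}
print(g.number_of_edges(), "switch-to-switch links")
```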

SLIDE 17

Capacity as a fluid

Jellyfish random graph

432 servers, 180 switches, degree 12

SLIDE 18

Capacity as a fluid

Jellyfish random graph

432 servers, 180 switches, degree 12

Jellyfish (Crossota norvegica) [Photo: Kevin Raskoff]

SLIDE 19

Construction & Expansion

SLIDE 20

Building Jellyfish

SLIDE 21

Building Jellyfish


SLIDE 22

Building Jellyfish

  • Same procedure for initial construction and incremental expansion (see the sketch below)
  • Can flexibly incorporate any type of equipment
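A minimal sketch of this procedure, assuming identical switches and considering only the switch-to-switch ports (my own illustration of the idea, not the authors' code):

```python
import random

def build_jellyfish(num_switches, net_ports, rng=random.Random(0)):
    """Link uniform-random pairs of switches with free ports; when stuck
    with >= 2 free ports on one switch, break a random existing link and
    rewire it through that switch. The same loop handles incremental
    expansion: new switches simply start with all ports free."""
    free = {s: net_ports for s in range(num_switches)}
    links = set()  # undirected links stored as sorted tuples

    def edge(a, b):
        return (min(a, b), max(a, b))

    while True:
        open_sw = [s for s in free if free[s] > 0]
        pairs = [(a, b) for i, a in enumerate(open_sw)
                 for b in open_sw[i + 1:] if edge(a, b) not in links]
        if pairs:  # common case: connect a random not-yet-connected pair
            a, b = rng.choice(pairs)
            links.add(edge(a, b))
            free[a] -= 1; free[b] -= 1
            continue
        s = next((s for s in open_sw if free[s] >= 2), None)
        if s is None:  # no usefully free ports remain: done
            return links
        # stuck: remove a random link (x, y) not touching s, rewire via s
        candidates = [l for l in links if s not in l
                      and edge(s, l[0]) not in links
                      and edge(s, l[1]) not in links]
        if not candidates:
            return links
        x, y = rng.choice(candidates)
        links.remove((x, y))
        links.update({edge(s, x), edge(s, y)})
        free[s] -= 2

print(len(build_jellyfish(20, 4)))  # 20 switches x 4 net ports -> 40 links
```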

SLIDE 23

Building Jellyfish

60% cheaper incremental expansion

compared with a prior technique for traditional networks

LEGUP: [Curtis, Keshav, Lopez-Ortiz, CoNEXT’10]

SLIDE 24

Throughput

By giving up on structure, do we take a hit on throughput?

SLIDE 25

Throughput: Jellyfish vs. fat tree

[Chart: at the same equipment cost, Jellyfish supports +25% more servers at full throughput than a fat tree]

SLIDE 26

The VL2 topology

[Figure: the VL2 Clos topology; D_A/2 intermediate switches, D_I aggregation switches, D_A·D_I/4 ToR switches, 20(D_A·D_I/4) servers, 10G links. A link-state network carries only LAs (e.g., 10/8); a fungible pool of servers owns AAs (e.g., 20/8). Caption: “An example Clos network between Aggregation and Intermediate switches ...”]

[Greenberg, Hamilton, Jain, Kandula, Kim, Lahiri, Maltz, Patel, Sengupta, SIGCOMM ’09]

SLIDE 27

Rewiring VL2

[Figure: the same VL2 topology with the Aggregation/Intermediate interconnect replaced by a uniform-random interconnection]

  • Connect ToRs proportional to Intermediate/Aggregation switch degree
  • Servers unchanged (only ToRs have 1 Gbps ports)

SLIDE 28

Rewiring VL2

[Chart: servers at full throughput (ratio over VL2) vs. aggregation switch degree]

40% more servers

with server-to-server random permutation traffic

SLIDE 29

Rewiring VL2

[Chart: servers at full throughput (ratio over VL2) vs. aggregation switch degree; rack-to-rack traffic also shown]

40% more servers

with server-to-server random permutation traffic

SLIDE 30

Rewiring VL2

[Chart: servers at full throughput (ratio over VL2) vs. aggregation switch degree; all-to-all and rack-to-rack traffic also shown]

40% more servers

with server-to-server random permutation traffic

SLIDE 31

Just the beginning

SLIDE 32

Just the beginning

Topology design

  • How close are random graphs to optimal?
  • What if switches are heterogeneous?

System design (or: “But what about...”)

  • Performance consistency?
  • Cabling spaghetti?
  • Routing and congestion control without structure?
SLIDE 33

Just the beginning

Topology design

  • How close are random graphs to optimal?
  • What if switches are heterogeneous?

System design (or: “But what about...”)

  • Performance consistency?
  • Cabling spaghetti?
  • Routing and congestion control without structure?
SLIDE 34

Topology Design in Context

SLIDE 35

“It is anticipated that the whole of the populous parts of the United States will, within two or three years, be covered with network like a spider's web.”

SLIDE 36

“It is anticipated that the whole of the populous parts of the United States will, within two or three years, be covered with network like a spider's web.”

–– The London Anecdotes, 1848

SLIDE 37

Western Electric crossbar switch

[Photo: Wikipedia user Yeatesh]

SLIDE 38
SLIDE 39
SLIDE 40

[Benes network: Wikipedia user Piggly]

SLIDE 41
SLIDE 42
SLIDE 43

What’s different about data centers

  • Flexible forwarding (compared with supercomputers)
  • Flexible routing & congestion control (especially with software-defined networking)

SLIDE 44

Understanding Throughput

SLIDE 45

Throughput: Jellyfish vs. fat tree

[Chart: at the same equipment cost, Jellyfish supports +25% more servers at full throughput than a fat tree]

SLIDE 46

Intuition

If we fully utilize all available capacity...

# 1 Gbps flows = (total capacity) / (used capacity per flow)

SLIDE 47

Intuition

If we fully utilize all available capacity...

# 1 Gbps flows = ∑links capacity(link) / (used capacity per flow)

SLIDE 48

Intuition

If we fully utilize all available capacity...

# 1 Gbps flows = ∑links capacity(link) / (1 Gbps · mean path length)

SLIDE 49

Intuition

If we fully utilize all available capacity...

# 1 Gbps flows = ∑links capacity(link) / (1 Gbps · mean path length)

Mission: minimize average path length
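To make the arithmetic concrete, here is a small worked sketch of the bound above (all numbers are illustrative, not from the talk):

```python
# Capacity bound: with L links of c Gbps each, flows of r Gbps, and mean
# path length H hops, at most (L * c) / (r * H) flows fit at full rate.
def max_full_rate_flows(num_links, link_gbps, flow_gbps, mean_path_length):
    return (num_links * link_gbps) / (flow_gbps * mean_path_length)

# Halving the mean path length doubles the flows the same gear supports:
print(max_full_rate_flows(1080, 1, 1.0, 6.0))  # 180.0
print(max_full_rate_flows(1080, 1, 1.0, 3.0))  # 360.0
```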

SLIDE 50

Example

Fat tree

432 servers, 180 switches, degree 12

Jellyfish random graph

432 servers, 180 switches, degree 12

SLIDE 51

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 52

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 53

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 54

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 55

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 56

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 57

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4

SLIDE 58

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4


4 of 16

reachable in ≤ 5 hops

12 of 16

reachable in ≤ 5 hops (good expander)
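The expander intuition is easy to check numerically. A small sketch (networkx assumed; it counts switches rather than servers, and the parameters only loosely mirror the 20-switch, degree-4 example):

```python
# Count how many nodes a breadth-first search reaches within h hops.
from collections import deque
import networkx as nx

def reachable_within(g, origin, hops):
    seen, queue = {origin}, deque([(origin, 0)])
    while queue:
        node, dist = queue.popleft()
        if dist == hops:
            continue
        for nbr in g[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, dist + 1))
    return len(seen)

g = nx.random_regular_graph(d=4, n=20, seed=0)
print(reachable_within(g, 0, 3))  # random graphs expand quickly
```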

SLIDE 59

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4


12 of 16

reachable in ≤ 5 hops (good expander)

SLIDE 60

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4


12 of 16

reachable in ≤ 5 hops (good expander)

SLIDE 61

Example

Fat tree

16 servers, 20 switches, degree 4

Jellyfish random graph

16 servers, 20 switches, degree 4


12 of 16

reachable in ≤ 5 hops (good expander)

SLIDE 62

Jellyfish has short paths

[Histogram: path length distribution in a fat tree with 686 servers]

SLIDE 63

Jellyfish has short paths

[Histogram: path length distribution in Jellyfish with the same equipment]

SLIDE 64

System Design: Performance Consistency

SLIDE 65

Is performance more variable?

Performance depends on choice of random graph

  • if you expand the network, would performance change dramatically?

Extreme case: graph could be disconnected!

  • never happens, with high probability
SLIDE 66

Little variation if size is moderate

[Plot: throughput across random graph instances; {min, avg, max} of 20 trials shown]

SLIDE 67

System Design: Routing

SLIDE 68

Routing

Intuition

Recall: if we fully utilize all available capacity,

# 1 Gbps flows = (total capacity) / (used capacity per flow)

How do we effectively utilize capacity without structure?

SLIDE 69

Routing without structure

In theory, this is just a multicommodity flow (MCF) problem (see the sketch after this list). Potential issues:

  • Solve MCF using a distributed protocol?
  • Optimal solution could have too many small subflows
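For intuition, the fluid version of the routing problem can be written as a small linear program: maximize the common rate t that every commodity can send simultaneously, subject to flow conservation and unit edge capacities. A sketch using scipy (my own formulation; the talk does not prescribe one):

```python
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def max_concurrent_flow(g, commodities):
    """Maximize t s.t. each (src, dst) pair routes t units at once,
    with unit capacity on every undirected edge."""
    edges = list(g.edges()) + [(v, u) for u, v in g.edges()]
    E, K = len(edges), len(commodities)
    nvar = K * E + 1                      # per-commodity edge flows + t
    idx = {e: i for i, e in enumerate(edges)}

    A_eq, b_eq = [], []                   # flow conservation at each node
    for k, (s, d) in enumerate(commodities):
        for node in g.nodes():
            row = np.zeros(nvar)
            for nbr in g[node]:
                row[k * E + idx[(node, nbr)]] += 1.0   # flow out
                row[k * E + idx[(nbr, node)]] -= 1.0   # flow in
            row[-1] = -1.0 if node == s else (1.0 if node == d else 0.0)
            A_eq.append(row); b_eq.append(0.0)

    A_ub, b_ub = [], []                   # capacity per undirected edge
    for u, v in g.edges():
        row = np.zeros(nvar)
        for k in range(K):
            row[k * E + idx[(u, v)]] = 1.0
            row[k * E + idx[(v, u)]] = 1.0
        A_ub.append(row); b_ub.append(1.0)

    c = np.zeros(nvar); c[-1] = -1.0      # maximize t via minimizing -t
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    return -res.fun

g = nx.random_regular_graph(4, 10, seed=0)
print(max_concurrent_flow(g, [(0, 5), (2, 7)]))
```

Even this tiny LP hints at the issue above: the variable count grows as commodities × edges, which is why a distributed, deployable approximation is wanted instead.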
SLIDE 70

Routing

Does ECMP work?

  • No
  • ECMP doesn’t use Jellyfish’s path diversity


SLIDE 71

Routing: a simple solution

  • Find k shortest paths (sketch below)
  • Let Multipath TCP do the rest [Wischik, Raiciu, Greenhalgh, Handley, NSDI ’11]

86-90% of optimal

(TCP is within 3 percentage points of MPTCP)

[Plot: normalized throughput vs. #servers (70-960); optimal vs. packet-level simulation]
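A minimal sketch of the k-shortest-paths part (networkx assumed; it uses shortest_simple_paths, which implements Yen's algorithm; MPTCP itself is not modeled):

```python
from itertools import islice
import networkx as nx

def k_shortest_paths(g, src, dst, k=8):
    # Paths arrive in non-decreasing hop count.
    return list(islice(nx.shortest_simple_paths(g, src, dst), k))

g = nx.random_regular_graph(d=12, n=180, seed=1)  # switch-level topology
for path in k_shortest_paths(g, 0, 97, k=8):
    print(len(path) - 1, "hops:", path)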

SLIDE 72

Throughput: Jellyfish vs. fat tree

8-shortest paths + MPTCP

[Chart: at the same equipment cost, Jellyfish supports +25% more servers at full throughput than a fat tree]

SLIDE 73

Deploying k-shortest paths

Multiple options:

  • SPAIN [Mudigonda, Yalagandula, Al-Fares, Mogul, NSDI ’10]
  • Equal-cost MPLS tunnels
  • IBM Research’s SPARTA [CoNEXT 2012]
  • SDN controller based methods
SLIDE 74

System Design: Cabling

SLIDE 75

Cabling

SLIDE 76

Cabling

[Photo: Javier Lastras / Wikimedia]
SLIDE 77

Cabling solutions

[Diagram: clusters of switches and racks of servers; aggregate cable bundles run between clusters (cluster A, cluster B, a new rack)]

  • Fewer cables for the same # of servers as a fat tree
  • Generic optimization: place all switches centrally

SLIDE 78

Interconnecting clusters

How many “long” cables do we need?

SLIDE 79

Interconnecting clusters

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

?

SLIDE 80

Interconnecting clusters

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

SLIDE 81

Intuition

SLIDE 82

Intuition

SLIDE 83

Intuition

Still need one crossing!

Throughput should drop when less than Θ(1/APL) of total capacity crosses the cut!

SLIDE 84

Explaining throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

SLIDE 85

Explaining throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

Upper bounds... and constant-factor matching lower bounds in a special case.

SLIDE 86

Two regimes of throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

sparsest cut “plateau”: (total cap) / APL

SLIDE 87

Two regimes of throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

Bisection bandwidth is a poor predictor of performance!

sparsest cut “plateau”: (total cap) / APL

  • Cables can be localized
  • High-capacity switches needn’t be clustered

SLIDE 88

What’s Next

SLIDE 89
SLIDE 90
SLIDE 91
SLIDE 92

Research agenda

Prototype in the lab

  • High throughput routing even in unstructured networks
  • New techniques for near-optimal TE applicable generally
  • SDN-based implementation

  • Topology-aware application & VM placement
  • Tech transfer

SLIDE 93

For more...

“Jellyfish: Networking Data Centers Randomly”

  • A. Singla, C.-Y. Hong, L. Popa, P. B. Godfrey. NSDI 2012

“High Throughput Data Center Topology Design”

  • A. Singla, P. B. Godfrey, A. Kolla. Manuscript (check arXiv soon!)

SLIDE 94

Conclusion

  • High throughput
  • Expandability

SLIDE 95

[Photo: Kevin Raskoff]

SLIDE 96

Backup Slides

SLIDE 97

Hypercube vs. Random Graph

SLIDE 98

Is Jellyfish’s advantage just that it’s a “direct” network?

[Chart: relative throughput of the random graph vs. hypercubes of 8, 64, 128, and 256 switches, one server per switch]

Answer: No

SLIDE 99

Are There Even Better Topologies?

SLIDE 100

A simple upper bound

Throughput per flow ≤ ∑links capacity(link) / (# flows · mean path length)

To bound throughput from above, lower-bound the mean path length!

SLIDE 101

Lower bound on mean path length

For degree 6: at most 6 nodes at distance 1, at most 6² − 6 at distance 2, ...

Ugliness omitted!

[Cerf et al., “A lower bound on the average shortest path length in regular graphs”, 1974]
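The bound is easy to compute even with the ugliness omitted: from any node of a d-regular graph, at most d nodes sit at distance 1, d(d−1) at distance 2, d(d−1)² at distance 3, and so on, so filling these shells greedily yields the smallest possible mean distance. A sketch of that calculation (my own rendering of the idea in Cerf et al. 1974):

```python
def mean_path_lower_bound(n, d):
    """Moore-style lower bound on mean shortest-path length in any
    d-regular graph on n nodes: fill distance shells greedily."""
    remaining = n - 1          # other nodes still to place
    shell, dist = d, 1         # capacity of current shell, its distance
    total = 0                  # sum of distances from the origin
    while remaining > 0:
        placed = min(shell, remaining)
        total += placed * dist
        remaining -= placed
        shell *= d - 1         # next shell: each node adds d-1 new edges
        dist += 1
    return total / (n - 1)

print(mean_path_lower_bound(180, 12))  # e.g., 180 switches, degree 12
```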

SLIDE 102

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network size]

SLIDE 103

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network size]

5 servers per switch, random permutation traffic

SLIDE 104

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network size]

5 or 10 servers per switch, random permutation traffic

SLIDE 105

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network size; 5 or 10 servers per switch, random-permutation and all-to-all traffic]

(Aside: is any topology closer to the bound?)

Random graphs within a few percent of optimal!

SLIDE 106

Random graphs vs. upper bound

[Plot: path length (ratio to lower bound) vs. network size]

SLIDE 107

Random graphs vs. upper bound

[Plot: path length (ratio to lower bound) vs. network size]

SLIDE 108

Designing Heterogeneous Networks

SLIDE 109

Random graphs as a building block

[Diagram: low-degree switches, high-degree switches, servers]

1. How should we distribute servers?
2. How should we interconnect switches?

What would you do?

SLIDE 110

Distributing servers

(The switch interconnect being vanilla random)

[Plot: normalized throughput vs. servers at large switches (ratio to expected under random distribution)]

SLIDE 111

Distributing servers

[Plot: normalized throughput vs. servers at large switches (ratio to expected under random distribution)]

Distributing servers in proportion to switch port-counts (The switch interconnect being vanilla random)

SLIDE 112

Distributing servers

[Plot: normalized throughput vs. β]

Distributing servers in proportion to switch port counts:

#servers on switch i ∝ (port count of i)^β
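A small sketch of that placement rule (illustrative code, not the authors'; rounding by largest remainder so the total comes out exact):

```python
def distribute_servers(port_counts, total_servers, beta=1.0):
    """Give switch i a server count proportional to port_counts[i]**beta."""
    weights = [p ** beta for p in port_counts]
    exact = [total_servers * w / sum(weights) for w in weights]
    alloc = [int(x) for x in exact]
    # hand leftover servers to the largest fractional remainders
    leftovers = total_servers - sum(alloc)
    by_remainder = sorted(range(len(exact)),
                          key=lambda i: exact[i] - alloc[i], reverse=True)
    for i in by_remainder[:leftovers]:
        alloc[i] += 1
    return alloc

print(distribute_servers([24, 24, 48, 64], 80, beta=1.0))  # [12, 12, 24, 32]
```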

SLIDE 113

Random graphs as a building block

[Diagram: low-degree switches, high-degree switches, servers]

1. How should we distribute servers?
2. How should we interconnect switches?

What would you do?

SLIDE 114

Interconnecting switches

SLIDE 115

Interconnecting switches

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

?

SLIDE 116

Interconnecting switches

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

SLIDE 117

Intuition

SLIDE 118

Intuition

SLIDE 119

Intuition

Still need one crossing!

Throughput should drop when less than Θ(1/APL) of total capacity crosses the cut!

SLIDE 120

Explaining throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

SLIDE 121

Explaining throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

Upper bounds... and constant-factor matching lower bounds in a special case.

SLIDE 122

Two regimes of throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

sparsest cut “plateau”: (total cap) / APL

SLIDE 123

Two regimes of throughput

[Plot: normalized throughput vs. cross-cluster links (ratio to expected under random connection)]

Bisection bandwidth is a poor predictor of performance!

sparsest cut “plateau”: (total cap) / APL

  • Cables can be localized
  • High-capacity switches needn’t be clustered

SLIDE 124

Quantifying Expandability

SLIDE 125

Quantifying expandability

LEGUP: [Curtis, Keshav, Lopez-Ortiz, CoNEXT’10]

[Chart: cost of incremental expansion; LEGUP]

SLIDE 126

Quantifying expandability

60% cheaper

[Chart: cost of incremental expansion; LEGUP vs. Jellyfish]

LEGUP: [Curtis, Keshav, Lopez-Ortiz, CoNEXT’10]

SLIDE 127

Failure Resilience

SLIDE 128

Throughput under link failures


SLIDE 129

Throughput under link failures


Turritopsis nutricula?

SLIDE 130

Beyond Random Graphs

SLIDE 131

Can we do even better?

What is the maximum number of nodes in any graph with degree Δ and diameter D?

SLIDE 132

Can we do even better?

What is the maximum number of nodes in any graph with degree 3 and diameter 2? Answer: 10, the Petersen graph.
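This is quick to verify (networkx assumed): the Petersen graph meets the degree-3, diameter-2 Moore bound of 1 + 3 + 3·2 = 10 nodes.

```python
import networkx as nx

g = nx.petersen_graph()
print(g.number_of_nodes(), nx.diameter(g))  # -> 10 2
```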

SLIDE 133

Degree-diameter problem

Largest known (Δ, D)-graphs, June 2010 (rows: degree Δ; columns: diameter D):

Δ \ D     2     3      4       5        6         7          8           9            10
 3       10    20     38      70      132       196        336         600          1250
 4       15    41     98     364      740      1320       3243        7575         17703
 5       24    72    212     624     2772      5516      17030       53352        164720
 6       32   111    390    1404     7917     19282      75157      295025       1212117
 7       50   168    672    2756    11988     52768     233700     1124990       5311572
 8       57   253   1100    5060    39672    130017     714010     4039704      17823532
 9       74   585   1550    8200    75893    270192    1485498    10423212      31466244
10       91   650   2223   13140   134690    561957    4019736    17304400     104058822
11      104   715   3200   18700   156864    971028    5941864    62932488     250108668
12      133   786   4680   29470   359772   1900464   10423212   104058822     600105100
13      162   851   6560   39576   531440   2901404   17823532   180002472    1050104118
14      183   916   8200   56790   816294   6200460   41894424   450103771    2050103984
15      186  1215  11712   74298  1417248   8079298   90001236   900207542    4149702144
16      198  1600  14640  132496  1771560  14882658  104518518  1400103920    7394669856

[Delorme & Comellas: http://www-mat.upc.es/grup_de_grafs/table_g.html/ ]

SLIDE 134

Degree-diameter problem

Do the best known degree-diameter graphs also work well for high throughput?

SLIDE 135

Degree-diameter vs. Jellyfish

  • Degree-diameter graphs do have high throughput
  • Jellyfish is within 9%!

[Table: comparison normalized by switches, total ports, and network ports]

SLIDE 136

Random graphs vs. upper bound for fixed size and increasing degree

SLIDE 137

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network degree]

SLIDE 138

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network degree]

SLIDE 139

Random graphs vs. upper bound

[Plot: throughput (ratio to upper bound) vs. network degree]

SLIDE 140

Random graphs vs. upper bound

[Plot: path length (ratio to lower bound) vs. network degree]

SLIDE 141

Random graphs vs. upper bound

[Plot: path length (ratio to lower bound) vs. network degree]