H3C SR8800 Series 10G Core Router Technology Analysis (PowerPoint PPT Presentation)





SLIDE 1

H3C SR8800 Series 10G Core Router Technology Analysis

Date: 2008-3-23 | Secret level: Public | Hangzhou H3C Technologies Co., Ltd.

SLIDE 2

www.h3c.com

Catalog

 Core Router Development Tendency
 H3C SR8800 Overview
 H3C SR8800 Technical Characteristics
 H3C SR8800 Networking
 H3C SR8800 Applications

SLIDE 3

Data Network Demand Analysis

 Basis: an informationalized basis platform covering all enterprises, raising working efficiency and enterprise competitiveness
 Advance: advanced products and technologies with high expandability, satisfying development needs for several years to come
 Reliable: reliable topologies, reliable devices, reliable links
 Quality: a high-quality network with voice free of delay and fluent video image quality
 Secure: logical separation of different services and defense against various attacks
 Service: localization services and fast on-site support from the original vendor

SLIDE 4

[Diagram, today: services (wireless voice, wireless data, high-speed Internet, voice, streaming, dial-up, VoIP, message) carried over separate access and core networks (X.25, ADSL, Ethernet, PSTN, IP, ATM, FR, GSM/GPRS, CDMA, Cable, PDH, SDH). Tomorrow: access networks (wireless, DSL, FTTP/HFC, 3G RAN) converge on one IP/MPLS network carrying location & presence, message, online gaming, voice, data, video, storage, and directory services.]

Tomorrow

 The independent-network model results in high network investment, difficult operation and maintenance, and weak capability to supply new services.
 Integration of different networks with the IP network is a clear trend, and multiple services will be carried on a single IP/MPLS network.
 Carriers all over the world are building new-generation IP multi-service carrier networks.

Integration of networks brings integration of services, integration of applications, and more opportunities, and therefore more benefits.

Multi-Services IP Bearer Network

Tendency for IP Multi-Services Bearer Network

SLIDE 5

[Timeline: data sharing (1990s) → Internet and broadband (2000) → high performance, extensibility and service integration (today); vertical axis: performance, expandability and services]

  • High-density narrowband aggregation → broadband and narrowband aggregation → high-capacity broadband and narrowband aggregation with services

  • Best effort → carrier-class device reliability → carrier-class quality of services

  • Data and the Internet → integration of the three networks → unified communication
  • Standardized services → user-defined services

Connection → Performance → Services → Applications

Network Application Development Tendency

SLIDE 6

Developed for Industry Users --- H3C SR8800

 High security: protection for both devices and services in the user network against attacks
 Advanced system architecture: a distributed 10G NP-based hardware platform with flexible service expansion capability and high processing performance
 High reliability: specific design for device and network reliability, providing carrier-class reliability
 Granular QoS: H-QoS provides three-level queue scheduling, offering granular SLAs for different users and services

SLIDE 7

Catalog

 Core Router Development Tendency
 H3C SR8800 Overview
 H3C SR8800 Technical Characteristics
 H3C SR8800 Networking
 H3C SR8800 Applications

SLIDE 8

Orientation of H3C SR8800 Products

Performance scale: 100M → GE → 2.5G → 10G
MSR 20 / MSR 30 / MSR 50 → SR6602 / SR6608 → SR8802 / SR8805 / SR8808 / SR8812 (10G core routers)

SLIDE 9

H3C SR8800 Family

 Developed by H3C, the SR8800 series 10G core routers are the flagship products of our router family.
 The SR8800 is designed to operate on IP backbone networks, the core and distribution layers of dedicated IP networks, POPs, and the distribution layer of carrier networks.
 The SR8800 comes in four models: SR8802, SR8805, SR8808 and SR8812.

H3C SR8800 Series 10G Core Routers

Attribute                  | SR8802  | SR8805 | SR8808 | SR8812
Number of SRPU slots       | 1/2     | 2      | 2      | 2
Number of LPU slots        | 3/2     | 5      | 8      | 12
Engine switching capacity  | 240G    | 720G/1.44T (SR8805/8808/8812)
Forwarding performance     | 146Mpps | 586Mpps (SR8805/8808/8812)
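As a sanity check on figures like 146 Mpps, the worst-case packet rate of an Ethernet port can be derived from minimum-size frames; the sketch below is a back-of-the-envelope calculation, not a statement about the SR8800's internal design.

```python
# Sanity check for line-rate forwarding figures: worst case is
# minimum-size (64-byte) Ethernet frames, where each frame also
# consumes 8 bytes of preamble and 12 bytes of inter-frame gap.

def line_rate_mpps(bits_per_second: float, frame_bytes: int = 64) -> float:
    overhead = 8 + 12  # preamble + inter-frame gap, in bytes
    frames = bits_per_second / ((frame_bytes + overhead) * 8)
    return frames / 1e6

ten_ge = line_rate_mpps(10e9)                  # one 10GE port
print(f"10GE line rate: {ten_ge:.2f} Mpps")    # ~14.88 Mpps
print(f"~10 x 10GE:     {10 * ten_ge:.1f} Mpps")
```

At 14.88 Mpps per line-rate 10GE port, the quoted 146 Mpps is in the ballpark of roughly ten such ports.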

SLIDE 10

 Core Router Development Tendency
 H3C SR8800 Overview
 H3C SR8800 Technical Characteristics

  • System Architecture
  • Service Capability
  • High Reliability
  • High Security

 H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 11

Architecture-Distributed 10G NP Hardware Platform

[Diagram: distributed 10G NP hardware platform. Each LPU carries PICs with ports, an NP service engine, a table lookup engine, a QoS engine, ingress and egress packet buffers, and an OAM engine; the LPUs connect over data channels to two SRPUs, each with two crossbars and a routing engine.]

 Based on the distributed 10G NP hardware platform, the SR8800 has excellent software upgrade, new-service scalability, and service processing capabilities.

Distributed 10G NP

SLIDE 12

Architecture-Unique Three-Engine Forwarding Structure

 To improve core router performance, several issues must be solved: ever-growing services, high QoS demands, and the processing time and resources consumed by table lookup and QoS scheduling.
 Because table lookup and QoS scheduling demands and their models are stable, implementing them with ASIC technology improves router performance significantly.
 Combining the high performance of ASICs with the flexible scalability of NPs, the SR8800 adopts a three-engine forwarding structure: the NP service engine, the QoS engine, and the table lookup engine. The service engine uses NP technology for flexible service scalability and upgrades; the QoS engine and table lookup engine use ASIC technology for high-performance QoS and table lookup.
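To make the split concrete, the table lookup engine's core job is longest-prefix match on the destination address. A minimal software sketch of that operation follows; the routing entries and port names are invented for illustration, and the hardware does this with dedicated lookup structures rather than a Python loop.

```python
import ipaddress

# Toy routing table: longest-prefix match decides the egress
# (entries and egress labels are hypothetical).
ROUTES = {
    ipaddress.ip_network("10.0.0.0/8"): "LPU0/Port1",
    ipaddress.ip_network("10.1.0.0/16"): "LPU1/Port0",
    ipaddress.ip_network("0.0.0.0/0"): "LPU2/Port3",  # default route
}

def lookup(dst: str) -> str:
    addr = ipaddress.ip_address(dst)
    # Among all matching prefixes, pick the longest one.
    best = max((n for n in ROUTES if addr in n), key=lambda n: n.prefixlen)
    return ROUTES[best]

print(lookup("10.1.2.3"))   # /16 wins over /8 -> LPU1/Port0
print(lookup("192.0.2.1"))  # falls through to the default route -> LPU2/Port3
```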


NP supports all-service distributed processing with high performance and flexible service scalability; the ASIC engines provide line-speed table lookup and line-speed flow management.

SLIDE 13

Architecture-High Capacity and Performance

 An SRPU has two crossbars embedded and is enough to ensure normal operation by itself, while dual SRPUs offer 1+1 hot-standby redundancy and support 1.44T switching capability, fully meeting the switching requirements of core routers.
 The fabric adapter and crossbar work together to implement VoQ and E2E flow control and granular switch-fabric-level QoS, offering genuine SLA services to customers.


Fabric adapter and crossbar work together to implement VoQ and E2E flow control and to support granular switch-fabric-level QoS. An SRPU has two crossbars embedded to save slot space and is enough to ensure normal operation by itself, while dual SRPUs offer 1+1 hot-standby redundancy and 1.44T switching capability.

SLIDE 14

Architecture-Unique Link Fault OAM Design

 Traditional routers use the CPU on the SRPU for fault detection and for generating and forwarding link-detection protocol packets. When many services are running, the CPU becomes too busy to generate and send link-detection packets in time, resulting in false fault detection and network oscillation; 50ms fault location and service switchover therefore cannot be achieved.
 With its distributed OAM architecture, each LPU of the SR8800 uses a dedicated OAM engine for link fault detection, which reduces CPU load, improves link-fault detection performance and CPU security, and achieves 30ms fault location and 20ms service switchover.
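The 30ms figure is the kind of number a BFD-style detector produces (the deck later lists BFD support on slide 26). A sketch of the standard detection-time arithmetic, where a session is declared down after `detect_mult` consecutive transmit intervals pass with no control packet; the interval and multiplier values below are illustrative, not the SR8800's actual defaults.

```python
# BFD-style detection timing (per RFC 5880): detection time is the
# negotiated transmit interval multiplied by the detect multiplier.

def bfd_detection_time_ms(tx_interval_ms: float, detect_mult: int) -> float:
    return tx_interval_ms * detect_mult

print(bfd_detection_time_ms(10, 3))    # 30.0 ms: consistent with the slide
print(bfd_detection_time_ms(100, 3))   # 300.0 ms: too slow for 50ms targets
```

This is also why offloading detection to a dedicated engine matters: a busy control CPU that misses even one 10ms transmit slot inflates the effective detection time.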


Dedicated OAM engine supports 30ms fault location and 20ms service switchover

SLIDE 15

Architecture-High Capacity Buffers

 Delay is the time a packet takes to travel between two nodes. All real-time services are delay-sensitive; VoIP, for example, needs a delay below 200ms, or voice quality becomes unacceptable.
 If a router buffers less than 200ms of traffic, packets are lost during congestion, degrading QoS; if buffering delays packets beyond 200ms, services like VoIP cannot work normally anyway. A 200ms buffer is therefore the right size for core routers.
 Each NP of the SR8800 offers a 200ms ingress buffer and a 200ms egress buffer.
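A 200ms buffer translates into a concrete memory size once a line rate is fixed. The sketch below assumes a 10Gbps per-NP path purely for illustration; the slide does not state the actual per-NP buffer memory.

```python
# Rough buffer sizing: buffer_bytes = rate (bits/s) * time (s) / 8.

def buffer_bytes(rate_bps: float, buffer_ms: float) -> int:
    return int(rate_bps * (buffer_ms / 1000) / 8)

mb = buffer_bytes(10e9, 200) / 1e6
print(f"200ms at 10Gbps = {mb:.0f} MB per direction")  # 250 MB
```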


High-capacity buffers to handle burst traffic

SLIDE 16

Architecture-High-Performance Routing Engine

 The SR8800 is equipped with high-performance CPUs as routing engines, significantly improving route calculation capability.
 Each SRPU has dual crossbars and three-level clocks embedded to ensure non-blocking switching, provide WAN clocking, and save equipment-room space and overall system power, making the SR8800 a compact, energy-saving core router.


High-performance routing engines improve route calculation performance significantly

SLIDE 17

High-Performance SRPU

[Board photo callouts: two crossbars, three-level clock chip, high-performance CPU, SDRAM, CF card slot, three-level clock interface, USB interface]

 The high-performance SRPU is the core of the SR8800. It provides powerful routing capability, a variety of storage options through the CF card and USB interface, and precise three-level clock sources.

SLIDE 18

Architecture-Separate Base Board and Interface Card

 Interface cards are separate from base boards to support flexible service configuration. The base boards support all services, while interface cards provide various interface types, allowing flexible configuration in different network environments.
 This design protects customer investment to the maximum. For example, to upgrade POS interfaces from 155M to 2.5G, you only need to change the interface cards, without purchasing new LPUs.


Interface cards are separate from base boards to offer flexible service configurations

SLIDE 19

Base board with no interface card installed / base board with interface card installed

Full-Service Base Board

[Board photo callouts: buffer, NP service engine, table lookup engines, OAM engine (under the CPU), CPU slot, QoS engine]

 The full-service NP base board supports services like IPv4, IPv6, MPLS VPN, QoS/H-QoS, GRE, and multicast VPN.

SLIDE 20

POS 155M interface card / POS 622M interface card

 You can switch a super interface card among 155M POS, 622M POS and GE using command lines, giving you a wide range of interface speeds with limited investment.

155M POS/622M POS/GE Switchover Using Commands

[Diagram: a single super interface card switching between 155M and 622M modes]

SLIDE 21

Interface Cards-WAN Interfaces

–8-port super interface card, occupies one interface card slot (Super)
–4x2.5G POS interface card, occupies one interface card slot (PSP4L)
–1x10G POS interface card, occupies one interface card slot (PUP1L)

SLIDE 22

Interface Cards-WAN Interfaces

–8xE1/T1 + 8xGE SFP interface card, occupies two interface card slots (ET8G8L)
–1-port 155M CPOS + 8-port interface card, occupies two interface card slots (CL1G8L)
–2-port 155M CPOS + 8-port interface card, occupies two interface card slots (CL2G8L)

SLIDE 23

Interface Cards-RPR Interfaces

–1x10G RPR interface card (including one mate interface), supports 10G POS and 10GE modes, occupies one interface card slot (RUP1L)
–2x2.5G RPR interface card (including two mate interfaces), occupies one interface card slot (RSP2L)

SLIDE 24

Interface Cards-Ethernet Interfaces

–10xGE SFP interface card, supports 100M/1000M optical modules, occupies one interface card slot (GP10L)
–20xGE SFP interface card with 2:1 convergence, supports 100M/1000M optical modules, occupies two interface card slots (GP20R)
–20xGE RJ45 interface card with 2:1 convergence, supports 10M/100M/1000M autosensing, occupies two interface card slots (GT20R)
–1x10GE XFP interface card, occupies one interface card slot (XP1L)

SLIDE 25

Architecture-Hierarchical Multicast Replication

 Supporting three-level multicast replication, the SR8800 avoids wasting bandwidth while ensuring high-performance multicasting.
 Routers that do not support switch-fabric multicast replication have to treat multicast as broadcast, sending copies even to boards without multicast services, wasting bandwidth and decreasing router performance.


Switch-fabric-level multicast replication: the switch fabric replicates multicast traffic to all FAs with multicast services.
FA-level multicast replication: the FA replicates multicast traffic to the other FAs and crossbars within the board.
NP-level multicast replication: the NP replicates multicast traffic to the FA and its multicast egress interfaces.
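The benefit of pushing replication down into the fabric can be sketched numerically; the stream rate and board count below are invented for illustration.

```python
# Bandwidth crossing the ingress LPU's fabric link for one multicast
# stream, with and without switch-fabric-level replication.

def ingress_fabric_load_gbps(stream_gbps: float, egress_lpus: int,
                             fabric_replication: bool) -> float:
    # Without fabric replication, the ingress board must push one copy
    # per egress board; with it, a single copy is replicated in the fabric.
    copies = 1 if fabric_replication else egress_lpus
    return stream_gbps * copies

print(ingress_fabric_load_gbps(1, 8, False))  # 8.0 Gbps on the ingress link
print(ingress_fabric_load_gbps(1, 8, True))   # 1.0 Gbps
```

The same argument repeats at each level: replicating as late as possible (fabric, then FA, then NP) keeps every upstream link carrying a single copy.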

SLIDE 26

Architecture-One Chassis-Four Planes Structure

[Diagram: one chassis, four planes. Forwarding plane: the NPs and crossbars on the LPUs and SRPUs; control plane: the SRPU and LPU CPUs; OAM plane: the per-LPU OAM engines; monitor plane: the system monitor unit and line-card monitor units.]

 The control plane comprises the SRPU CPU system, the LPU CPU system, and components such as the management channels on the backplane. Its main functions include protocol calculation, routing table maintenance, device management, and operation and maintenance management. It is the core of the router.
 The forwarding plane comprises the switch-fabric crossbars, the three forwarding engines, the data channels, and other components on the backplane. It processes services and forwards data, covering ACL/QoS, IP forwarding, MPLS VPN, and multicast.
 The OAM plane comprises the SRPU CPU, the LPU OAM engines, and the OAM channels. It handles network protocol detection and service switchover, such as BFD for BGP/IS-IS/OSPF/RSVP/VPLS PW/VRRP, with 30ms fault detection and 20ms service switchover.
 The monitor plane comprises the monitoring systems and channels. It monitors, raises alarms for, and controls the power and fan systems.
 The four planes are independent of one another.

SLIDE 27

Architecture-Unique Open Application Architecture (OAA)

[Diagram: base board with pluggable FW module, IPS module and other service modules, each with its own system, storage and interfaces]

 Based on the OAA, the SR8800 provides standard application interfaces, allowing customers and third-party vendors to develop their own services on it, such as an embedded firewall or embedded IPS. This enables value-added services and speeds up intelligent IP network development.
 The SR8800 is the only open core router in the industry.

SLIDE 28

 Core Router Development Tendency
 H3C SR8800 Overview
 H3C SR8800 Technical Characteristics

  • System Architecture
  • Service Capability
  • High Reliability
  • High Security

 H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 29

Granular QoS Capability

Ingress pipeline: traffic classification (L2/L3/L4) → CAR traffic policing → 200ms ingress buffer → switch-fabric VoQ and E2E flow control with congestion avoidance (RED/WRED).
Egress pipeline: traffic classification (L2/L3/L4) → CAR traffic policing → H-QoS queue scheduling (PQ/LLQ/WFQ/CBWFQ) → GTS → 200ms egress buffer.

 200ms packet buffering at both ingress and egress, absorbing burst traffic
 Massive ACL rules; 64K inbound and outbound CAR entries per NP, with a granularity of 1K
 Switch-fabric-level deep QoS, supporting VoQ and E2E flow control and avoiding HOL blocking
 H-QoS three-level queue scheduling, supporting granular QoS
 Advanced congestion avoidance mechanisms
 Diverse shaping modes: port-based or queue-based traffic shaping
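CAR policing is conventionally implemented with a token bucket; the sketch below shows the mechanism only, and the rate and burst values are invented rather than SR8800 parameters.

```python
# A minimal single-rate token bucket of the kind CAR policing uses.

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: int):
        self.rate = rate_bps / 8          # refill rate in bytes/second
        self.burst = burst_bytes          # bucket depth
        self.tokens = float(burst_bytes)
        self.last = 0.0

    def conform(self, now: float, pkt_bytes: int) -> bool:
        # Refill proportionally to elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes      # packet conforms: forward it
            return True
        return False                      # packet exceeds: drop or remark

bucket = TokenBucket(rate_bps=8000, burst_bytes=1500)  # 1 KB/s, 1500B burst
print(bucket.conform(0.0, 1500))   # True: the burst allowance covers it
print(bucket.conform(0.1, 1500))   # False: only ~100 bytes refilled since
```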
SLIDE 30

Origination of HQoS

[Diagram: VPN 1 gold users, VPN 2 silver users, VPN n bronze users; each CE device carries OA, production, monitoring and phone traffic toward one PE router. Common QoS processing schedules all traffic of all users uniformly by service priority.]

 Common QoS processes all traffic of all users uniformly according to service priorities. It cannot provide granular differentiated services based on both users and each user's service types; that is, it cannot distinguish a user's service types while differentiating traffic between users.
 Hierarchical QoS (HQoS) provides granular differentiated services based on both users and each user's service types.

Result with common QoS: the monitoring traffic of silver and bronze users is sent while that of gold users is dropped because bandwidth is preempted; the OA traffic of bronze users is sent while that of gold and silver users is dropped. Gold and silver users therefore do not get their high SLA.

SLIDE 31

HQoS Applications

[Diagram: the same gold, silver and bronze VPN users toward one PE router, now with per-user, per-service queues]

HQoS performs three levels of scheduling:
Level-1 scheduling by service level (packet priority), providing high-quality QoS for high-priority services.
Level-2 scheduling by user level, providing different guaranteed bandwidth for different users.
Level-3 scheduling by user service level, providing high-quality QoS for each user's high-priority services.

Result with HQoS: the monitoring traffic of gold and silver users is sent while that of bronze users is dropped; the OA traffic of gold users is sent, and that of silver and bronze users is also sent. All users are served according to their SLAs.

 HQoS enables each user to be served according to its SLA, strictly guaranteeing proper bandwidth for each user.

SLIDE 32

HQoS Scheduling Model

[Diagram: for each VPN (VPN1 ... VPNn), queues for services 1-4 feed a per-user scheduler; the per-user schedulers feed service-class queues 1-4 on the physical port. Advanced H-QoS queue scheduling: PQ/LLQ/WFQ/CBWFQ. Level-3 scheduling by user service level, level-2 by user level, level-1 by service level.]

 The SR8800 supports three levels of HQoS queue scheduling and 384K queues, providing granular SLA services for users.
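The three-level idea can be sketched as nested weighted sharing: the port's bandwidth is split between users, and each user's share is split between that user's services. All weights below are invented for illustration, and the SR8800's actual disciplines (PQ/LLQ/WFQ/CBWFQ) are richer than this proportional model.

```python
# Toy two-level bandwidth guarantee in the spirit of H-QoS.

PORT_GBPS = 10.0
USER_WEIGHT = {"gold": 5, "silver": 3, "bronze": 2}          # hypothetical
SERVICE_WEIGHT = {"phone": 4, "monitoring": 3, "production": 2, "oa": 1}

def guaranteed_gbps(user: str, service: str) -> float:
    # Level 2: share the port between users by weight.
    user_share = PORT_GBPS * USER_WEIGHT[user] / sum(USER_WEIGHT.values())
    # Level 1: share the user's slice between services by weight.
    return user_share * SERVICE_WEIGHT[service] / sum(SERVICE_WEIGHT.values())

print(round(guaranteed_gbps("gold", "phone"), 2))   # 2.0 Gbps
print(round(guaranteed_gbps("bronze", "oa"), 2))    # 0.2 Gbps
```

The point of the nesting is that a bronze user's bursty OA traffic can never eat into a gold user's phone guarantee, which is exactly the failure mode of the flat "common QoS" example two slides earlier.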

SLIDE 33

QoS-VOQ I

 At a crossroad with only one lane, even if the northward road is idle, the ambulance (the green car) behind the red cars cannot turn onto it until all the cars in front have passed through. Because there is only one lane, multidirectional scheduling is impossible: if congestion occurs in one direction, congestion occurs in the other directions too.
 Such congestion is called Head-of-Line (HOL) blocking.

[Diagram: a one-lane crossroad with a red light for going straight and a green light for turning right; the green car bound for the idle right-turn road is blocked behind red cars queuing to go straight. With a dedicated right-turn lane added, the green car passes through normally even while the straight lane is blocked.]

 As the figure shows, adding a lane for turning right keeps that direction available even when the straight-going lane is blocked.
 The best solution to HOL blocking is therefore to assign different lanes to different directions.

SLIDE 34

QoS-VOQ II

 HOL blocking can also occur in a router. As shown in the diagram, user A, user B, server A and server B each connect to the router through a 10G port.

[Diagram: user A sends data to server A at 10 Gbps; user B sends data to server A at 5 Gbps and to server B at 5 Gbps. The outgoing port toward server A congests, so the incoming port is instructed to suspend forwarding; user B's single queue holds interleaved to-A and to-B packets.]

 Congestion on the data for server A causes the data for server B to be blocked: because of HOL blocking in the single queue, the subsequent packets cannot be forwarded in time, and even traffic bound for the uncongested outgoing port cannot leave.

SLIDE 35

QoS-VOQ III

 The root cause of HOL blocking is that a single queue serves all forwarding directions, so queues cannot be scheduled independently per direction. If each forwarding direction has its own queue, round-robin scheduling between the queues optimizes packet forwarding and avoids HOL blocking.

[Diagram: user B's crossbar port now holds one queue of to-A packets and one queue of to-B packets, with queue scheduling between the two queues]

 A Virtual Output Queue (VOQ) implements multiple output queues, one per output direction, on a single physical channel.
 As shown in the diagram above, user B's crossbar port has separate queues for server A and server B. With scheduling between the queues, the data for server B can be sent without waiting for all the data for server A to be sent first.
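The round-robin-over-VoQs argument can be sketched in a few lines; the queue names and packet labels below are invented.

```python
from collections import deque

# With one FIFO, a blocked head packet stalls everything behind it.
# With per-direction VoQs, round robin keeps unblocked directions flowing.

def drain(queues: dict[str, deque], blocked: set[str]) -> list[str]:
    sent = []
    while any(queues.values()):
        progressed = False
        for direction, q in queues.items():      # round robin over VoQs
            if q and direction not in blocked:
                sent.append(q.popleft())
                progressed = True
        if not progressed:                       # only blocked traffic remains
            break
    return sent

voqs = {"to_A": deque(["A1", "A2"]), "to_B": deque(["B1", "B2"])}
print(drain(voqs, blocked={"to_A"}))  # ['B1', 'B2']: B traffic still flows
```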

SLIDE 36

QoS-VOQ IV

 Assigning one output queue per output direction solves HOL blocking but cannot schedule packets of different priorities within the same direction.

[Diagram: with a single VoQ per direction, high-priority and low-priority packets to the same server are mixed and cannot be scheduled by packet priority]

 Assigning multiple output queues per output direction additionally allows packets of different priorities in the same direction to be scheduled, ensuring that high-priority, delay-sensitive packets are forwarded through the crossbar port preferentially toward their destination.

[Diagram: the four VOQs of the egress to server A and the four VOQs of the egress to server B are each scheduled by SP]

SLIDE 37

QoS-VOQ V

 The SR8800's VoQs reside in the fabric adapters (FAs) of the LPUs. An FA provides four priority VoQs for each egress of the crossbar port.

[Diagram: VoQs in the FA; port 0, port 1, ..., port 10 each have four priority VoQs, plus four global priority VoQs for traffic to M]

 For multicast and broadcast traffic, an FA provides four global VoQs (the queues to M in the diagram). Global VoQs are dedicated to multicast and broadcast traffic; they forward data in multiple directions at the same time according to traffic priority, improving switching efficiency.
 Comparison: the crossbar ports of the RSP720 engine of the Cisco 7600 also support cell-based switching and VoQs, but the RSP720 provides only one VoQ per direction, so it cannot schedule packets in the same direction by priority.
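Strict-priority selection over the four per-direction VoQs can be sketched as follows; the queue contents are invented for illustration.

```python
from collections import deque
from typing import Optional

# SP pick: always serve the highest-priority non-empty VoQ first.

def sp_pick(voqs: list[deque]) -> Optional[str]:
    # voqs[0] is the highest priority, voqs[3] the lowest
    for q in voqs:
        if q:
            return q.popleft()
    return None

to_server_a = [deque(), deque(["video1"]), deque(), deque(["bulk1", "bulk2"])]
print(sp_pick(to_server_a))  # 'video1' jumps ahead of the bulk traffic
print(sp_pick(to_server_a))  # 'bulk1'
```

With only one VoQ per direction, `video1` would have had to wait behind whatever bulk traffic was queued ahead of it, which is the limitation the comparison above points at.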
SLIDE 38

QoS-E2E Flow Control

 Application scenario: server A, user B and user C each connect to the router through a 10G port. User C sends high-priority traffic to server A at 10G, and user B sends low-priority traffic to server A at 10G.

[Diagram: the two 10G flows converge on server A's 10G outgoing port, which congests]

SLIDE 39

QoS-E2E Flow Control II

 Without end-to-end awareness, the crossbar port (12G) admits the congested traffic fairly: 6G from user B and 6G from user C. Since the outgoing port is 10G, of the 12G arriving from the crossbar, the 6G of high-priority traffic is forwarded preferentially, 4G of low-priority traffic is forwarded, and the remaining 2G of low-priority traffic is dropped due to congestion in the low-priority queues.
 With E2E flow control, the Tx queues notify the FA of congestion in the low-priority queues, the FA notifies the other FAs, and each FA suspends forwarding into the congested output queues. High-priority traffic is then scheduled preferentially: the full 10G of high-priority traffic is admitted and forwarded, guaranteeing QoS for high-priority services.
 Comparison: at present, the SR8800 is the only router in the industry supporting crossbar E2E flow control.

[Diagram: per-user VoQs in the FAs, the crossbar, and the egress port's 8 priority Tx queues, with congestion notifications flowing back from the Tx queues to the FAs]
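The bandwidth arithmetic in this scenario can be checked with a small sketch; the fair-admission model of the crossbar is a simplifying assumption made here for illustration.

```python
# A 12G crossbar port feeding a 10G egress, offered 10G high-priority
# plus 10G low-priority traffic.

def egress_split(high_offered, low_offered, crossbar_gbps, egress_gbps):
    # Crossbar admits the offered flows fairly, then the egress serves
    # high priority first.
    total = high_offered + low_offered
    scale = min(1.0, crossbar_gbps / total)
    high_in, low_in = high_offered * scale, low_offered * scale
    high_out = min(high_in, egress_gbps)
    low_out = min(low_in, egress_gbps - high_out)
    return high_out, low_out, low_in - low_out  # fwd high, fwd low, dropped

print(egress_split(10, 10, 12, 10))  # (6.0, 4.0, 2.0): no E2E flow control
# With E2E flow control holding low-priority traffic back at its source,
# the full 10G of high-priority traffic reaches the egress instead.
print(egress_split(10, 0, 12, 10))
```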

SLIDE 40

QoS-Ingress Buffer

[Diagram: the same 12G-into-10G scenario as above, but with E2E flow control active — the Tx queues report congestion of the low-priority queues to the FAs, the FAs suspend forwarding, and packets are held in the per-user Ingress Buffers (10G each for users B and C) instead of being dropped.]

 As shown in the diagram above, when E2E flow control takes effect, ingress packets must be buffered; otherwise, packet loss occurs.
 The ingress buffer size determines how much traffic can be buffered in total, and thus affects the QoS capability of the router.
 The Ingress Buffer of the SR8800 can buffer packets for 200 milliseconds.
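The relationship between line rate, buffering time and required buffer capacity is simple arithmetic; a quick sketch (illustrative, not H3C code) shows that 200 ms at 10G implies roughly 250 MB of buffering:

```python
def buffer_bytes(line_rate_gbps, buffer_ms):
    """Bytes needed to absorb full line-rate traffic for buffer_ms:
    rate (bits/s) / 8 * time (s)."""
    return int(line_rate_gbps * 1e9 / 8 * buffer_ms / 1000)

# 200 ms of buffering on a 10G port needs about 250 MB:
print(buffer_bytes(10, 200))  # 250000000
```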

SLIDE 41

QoS-Egress Buffer


 When flow control is enabled on the routers at both ends, if the receiving router is congested, it sends 802.3x pause frames to instruct the sending router to suspend packet transmission; upon receiving a pause frame, the sending router suspends sending packets to the receiving router for a certain period.
 To avoid packet loss, the sending router buffers the packets in the Egress Buffer. The Egress Buffer size determines the packet buffering capability of the sending router.
 The Egress Buffer of the SR8800 can buffer packets for 200 milliseconds.

[Diagram: the receiver's Ingress Buffer becomes congested, 802.3x flow control information is sent back, the sender suspends transmission, and packets accumulate in the sender's Egress Buffer and the 8-priority Tx queues.]
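The pause interval carried in an 802.3x PAUSE frame is expressed in quanta of 512 bit times, so the actual suspension time depends on link speed. A small sketch (illustrative) converts quanta to microseconds:

```python
def pause_duration_us(pause_quanta, link_speed_gbps):
    """802.3x PAUSE: each quantum is 512 bit times on the link,
    so duration = quanta * 512 / link bit rate."""
    bit_time_s = 1.0 / (link_speed_gbps * 1e9)
    return pause_quanta * 512 * bit_time_s * 1e6

# Maximum pause (quanta field = 0xFFFF) on a 10G link is about 3.36 ms.
```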

SLIDE 42

QoS-Crossbar Cell-based Switching

[Diagram: packets from the Ingress Buffer are fragmented by the FA into cells of 4 to 128 bytes, switched across the crossbar cell by cell via the per-priority VoQs, and reassembled into packets in the 8-priority Tx queues of the outgoing port.]

 The cell size is far smaller than the packet size. Cell-based switching therefore reduces traffic jitter thanks to its smaller granularity, smooths the aggregate traffic, improves the system's QoS capability, and avoids short packets having to wait behind long ones.
 Cell-based switching usually uses fixed-length cells, which causes the N+1 problem. Suppose the cell length is fixed at 128 bytes. A 262-byte packet is segmented into two 128-byte cells, and the remaining 6 bytes are padded with 122 bytes to form a third 128-byte cell for switching. As a result, 122 bytes of overhead are generated, an overhead rate of 31.8%.
 The SR8800 crossbar is capable of variable-length cell switching. The cell length ranges from 4 to 128 bytes in 4-byte steps (4, 8, 12, 16, ..., 128 bytes), which solves the N+1 problem effectively. For the same 262-byte packet, the remaining 6 bytes are padded with only 2 bytes to form a valid 8-byte cell, an overhead rate of only 0.76%.
 Comparison: the Cisco 7600 adopts fixed-length cell switching, so its overhead is higher.
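The overhead figures in this example can be checked with a short sketch (illustrative Python, not H3C code; overhead is padding divided by total bytes on the fabric):

```python
import math

def fixed_cell_overhead(pkt, cell=128):
    """Fixed-length cells: every packet is padded up to a whole
    number of cells (the 'N+1' problem)."""
    cells = math.ceil(pkt / cell)
    padding = cells * cell - pkt
    return padding / (cells * cell)

def variable_cell_overhead(pkt, max_cell=128, step=4):
    """Variable-length cells (4..128 bytes in 4-byte steps): only the
    last cell is padded, up to the next multiple of `step`."""
    full, rest = divmod(pkt, max_cell)
    pad = 0 if rest == 0 else (-rest) % step
    total = full * max_cell + (rest + pad if rest else 0)
    return pad / total

# The slide's 262-byte packet:
# fixed:    2 full cells + 6 bytes padded to 128 -> 122/384, about 31.8%
# variable: 2 full cells + 6 bytes padded to 8   ->   2/264, about 0.76%
print(round(fixed_cell_overhead(262) * 100, 1))     # 31.8
print(round(variable_cell_overhead(262) * 100, 2))  # 0.76
```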

SLIDE 43

MPLS VPN Service Isolation

 Distinguishes different services, such as voice, video and data, on the PE device, encapsulates them into VPNs, and isolates them securely from one another
 Carries multiple services over MPLS VPN, providing security protection comparable to that of leased lines
 Supports HoPE to extend VPNs
 Supports IPv6-based MPLS VPN (6PE)
 Supports various access methods to MPLS VPN: PPP, ATM, and Eth/VLAN
 Supports static routes, EBGP, RIP, and OSPF between PE and CE
 Supports cross-AS schemes, such as VRF-to-VRF, MP-EBGP, and Multi-Hop MP-EBGP
 Supports point-to-point Layer-2 MPLS VPN: Martini/Kompella VLL
 Supports point-to-multipoint Layer-2 MPLS VPN: Martini/Kompella VPLS

[Diagram: CEs carrying data, voice, video and other services attach to PEs and are mapped into VPN1-VPN4.]

SLIDE 44
 Classify service traffic at the ingress and tag it with 802.1p, CoS or DSCP
 On the PE, map the IP priority to the EXP field of the MPLS label and perform queue scheduling on the outgoing interface
 In the MPLS core, identify the label and perform the corresponding scheduling (based on bandwidth or priority) according to the EXP field of the label
 At the egress, perform queue scheduling according to IP CoS or DSCP to deliver the flows to the CE

[Diagram: CE - PE - P - P - PE - CE across the MPLS core; IP Diffserv at the edges, MPLS Diffserv in the core.]

QoS for MPLS VPN
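The edge marking step above — copying the IP precedence into the MPLS EXP bits — can be sketched in one line (illustrative; this is a common default mapping, and the actual mapping on a PE is configurable):

```python
def dscp_to_exp(dscp):
    """Copy the IP precedence (the three high-order DSCP bits) into
    the 3-bit MPLS EXP field -- a common default edge-marking rule."""
    return (dscp >> 3) & 0x7

# EF (DSCP 46) -> EXP 5, AF31 (DSCP 26) -> EXP 3, best effort -> 0.
print(dscp_to_exp(46), dscp_to_exp(26), dscp_to_exp(0))  # 5 3 0
```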

SLIDE 45

 Pseudo Wire Emulation Edge-to-Edge (PWE3) is a technology that emulates ATM, Ethernet and PPP services over a packet switched network. A pseudo wire encapsulates the PDUs of a specific service, carries them over the path or tunnel between the inbound and outbound interfaces, manages their timing and sequence, and emulates the functions required by the service.
 The SR8800 supports the PWE3 feature with up to 16K PWE3 connections, fully satisfying users' requirements.

Link Emulation by PWE3

[Diagram: a pseudo wire between two PEs.]

SLIDE 46

[Diagram: a multicast source at VPN-A/Site1 (CE-A1) reaches receivers at VPN-A Sites 2 and 3 across an MPLS backbone of PE1, PE2, PE3 and P routers connected by IBGP.]

Multicast VPN


 MPLS/BGP VPN has been widely applied, and users in a VPN require it to provide multicast services.
 Earlier versions of draft-rosen-vpn-mcast described three MVPN solutions; in the latest version (08), two were removed and only the Multicast Domain (MD) solution remains:

Multicast Domains (MD)
  • Advantages: PIM state is controlled and the backbone is stable; the backbone uses multicast, so P routers need not be upgraded.
  • Disadvantages: multicast traffic is flooded to all PE routers, which burdens them.
VPN-IP PIM
  • Advantages: the backbone uses multicast; if a VPN carries no multicast, the backbone generates no multicast state; multicast routes are optimized so traffic goes only to the necessary PE routers.
  • Disadvantages: the size of the multicast state table cannot be controlled, impacting backbone stability; the PIM mechanism on PE and P devices must be modified to support multicast labels.
PIM using NBMA techniques
  • Advantages: the backbone holds no PIM state.
  • Disadvantages: traffic is duplicated on the PE, overloading it, and the unnecessary traffic burdens the backbone.

 The SR8800 supports MD multicast VPN, which can be distributed across separate service cards, providing high performance and flexible configuration.

SLIDE 47

 PIM-SM or PIM-SSM runs within each domain, and MBGP and MSDP run between domains. All ASs are required to support PIM, MBGP and MSDP.
 The PIM/MBGP/MSDP solution is a mature solution for inter-domain multicast networks.
 Between ASs, external MBGP peers are configured on the edge routers and external MSDP peers are configured on the RPs. Within an AS, internal MBGP peers are configured on internal routers as required, and internal MSDP peers are configured on internal RPs running Anycast RP. All ASs run the PIM protocol.

[Diagram: multicast sources feed three MANs running PIM/PIM6 with Anycast RP; MBGP and MSDP interconnect the RPs between domains, while IGMP and IGMP Proxy/Snooping serve the Layer-2 network of multicast switches, home gateways, TVs and PCs.]

SLIDE 48

NAT/NAT Multi-Instance

[Diagram: a NAT device translates the private addresses 10.1.1.3 and 10.1.1.20 to the public address 202.10.88.2 for Internet access; the internal Web server is 10.1.1.3 and the mail server 10.1.1.4.]
 Supports repeated multiplexing of a port with automatic 5-tuple collision detection, enabling NAPT to support virtually unlimited connections
 Supports blacklists in NAT/NAPT/internal server
 Supports limits on the number of connections
 Supports session logging
 Supports multi-instance NAT
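The 5-tuple collision detection that makes port multiplexing safe can be illustrated with a toy model (class and method names are hypothetical, not the SR8800's implementation): one external port can serve many sessions as long as the translated 5-tuple stays unique.

```python
class Napt:
    """Toy NAPT table: reuse an external port across sessions while
    rejecting any translation that would duplicate a 5-tuple."""
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.sessions = {}  # translated 5-tuple -> original (src, sport)

    def translate(self, proto, src, sport, dst, dport, ext_port):
        key = (proto, self.public_ip, ext_port, dst, dport)
        if key in self.sessions:   # 5-tuple collision: port not reusable here
            return None
        self.sessions[key] = (src, sport)
        return (self.public_ip, ext_port)

nat = Napt("202.10.88.2")
# Port 1024 is reused for different destinations (port multiplexing)...
assert nat.translate("tcp", "10.1.1.3", 5000, "8.8.8.8", 80, 1024)
assert nat.translate("tcp", "10.1.1.4", 5000, "9.9.9.9", 80, 1024)
# ...but a duplicate 5-tuple is detected and rejected.
assert nat.translate("tcp", "10.1.1.5", 6000, "8.8.8.8", 80, 1024) is None
```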

SLIDE 49

Xlog NTAS (Network traffic analysis system)

[Diagram: NetStream on the router analyzes network packets, extracts traffic statistics matching the preset criteria and exports them; XLog resolves the exported packets, collects the statistics into a database, and analyzes the data to generate traffic reports.]

 NetStream cooperates with XLog seamlessly to provide accurate and detailed network traffic analysis reports.

Major functions:
  • Automatic generation of dozens of predefined reports on traffic, applications, nodes and sessions
  • Flexible report customization and traffic auditing
  • Traffic monitoring and analysis for P2P applications
  • Traffic monitoring based on MAC address and host name
  • Working with an AAA server to provide detailed network access information
  • Reference data for network planning, network optimization, and troubleshooting

NetStream--Network Traffic Analysis

SLIDE 50

RPR

 Resilient Packet Ring (RPR), a new MAC-layer protocol, has the following advantages:
  • High utilization of ring bandwidth
  • Self-healing
  • Automatic topology discovery and node plug-and-play
  • Protection switching using Steering or Wrapping, with recovery within 50 ms, satisfying carrier-class requirements
  • Weighted fair algorithm for bandwidth allocation
 Complies with the IEEE 802.17 standard
 Supports 10G/2.5G RPR
 Supports cross-board RPR
[Diagram: PEs at sites 1-8 around an RPR ring.]

SLIDE 51

 RPR supports three traffic classes:
 Class-A: provides low jitter and assured bandwidth to support TDM services. Class-A includes A0 and A1. For A0 service, the assured bandwidth is reserved globally on the ring; for A1 and class-B services, the assured bandwidth can be recycled and used by lower-priority services when idle.
 Class-B: provides low delay and assured bandwidth, sending data differentially according to priority
 Class-C: best-effort traffic, such as traditional IP services
 Service classification:
 On an RPR port, data packets can be mapped to class-A, class-B and class-C services according to the CoS value, EXP priority or IP priority.
 For class-A service, the keyword reserved in the rate-limit command sets the A0 threshold; traffic exceeding it automatically becomes A1.
 For class-B service, the keyword medium in the rate-limit command sets the B0 threshold; traffic exceeding it automatically becomes B1.

Service category | Processing method
A0 | The assured bandwidth is reserved globally on the ring
A1, B0, B1 | The assured bandwidth can be recycled and used by lower-priority services when idle
C | Best-effort traffic

RPR--QoS

SLIDE 52

[Diagram: an IPv6 backbone interconnects IPv4 networks, IPv4-IPv6 dual-stack networks and IPv6 networks through NAT-PT, IPv4 access, IPv6 access and tunnel access, managed from an NMS center.]
 Supports IPv6; the network can be smoothly upgraded from IPv4 to IPv6 without additional investment
 IPv6 protocol stack: ICMPv6, Path MTU, ND, automatic configuration, and DNS client
 IPv6 transition mechanisms: dual stack, NAT-PT, automatic tunnels, configured tunnels, and 6to4 tunnels
 IPv6 routing protocols: BGP4+, IS-ISv6, OSPFv6, and RIPng

Smooth Transition to IPv6 without Additional Investment

SLIDE 53

 Core Router Development Tendency  H3C SR8800 Overview  H3C SR8800 Technical Characteristics

  • System Architecture
  • Service Capability
  • High Reliability
  • High Security

 H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 54

Comprehensive Product Reliability

Device reliability:
  • Physical reliability: dual SRPUs, dual power modules, hot-swappable modules
  • Software reliability: hot fixes, host attack prevention, control-plane rate limiting, secure management
Link reliability:
  • Ethernet port bundling, multi-link bundling, IP Trunk
Network reliability:
  • Uninterrupted NSF/GR forwarding, Virtual Router Redundancy Protocol (VRRP), ECMP, dynamic fast route convergence, BFD for VRRP/BGP/IS-IS/OSPF/RSVP
Service reliability:
  • Control separated from services, isolated service processing, TE FRR/IP FRR/LDP FRR/VPN FRR

SLIDE 55

[Diagram: a chassis with dual SRPUs (each integrating the switch fabric), eight LPUs and dual power supplies.]

High-performance SRPUs provide 1+1 redundancy, and the switch fabric integrated on the SRPUs implements non-blocking packet exchange. The LPU + flexible-subcard structure provides distributed service processing and diversified interface types. The power modules provide 1+1 redundancy.

Hardware Design with High Reliability

All the modules are hot-swappable.

SLIDE 56

[Diagram: a fix code area is loaded online, and an enhanced code block replaces the original code block in the running program.]
 Allows online software debugging and minor additions of new features without resetting the device
 Provides commands for switching fix unit states, so that you can conveniently load, activate, deactivate, run, or delete a fix unit

Supports Online Loading of Software Hot Fixes

Online fix loading enables flexible flaw patching, ensuring reliable and uninterrupted network service delivery.

SLIDE 57

IP/Port Trunk Provides Higher Line Bandwidth and Reliability

[Diagram: IP/Port Trunks bundle links within and across LPUs 0-4.]

IP Trunk/Port Trunk implements intra-board bundling, maintaining normal operation when a physical fault occurs on one link, and inter-board bundling for even higher reliability.
 Supports POS IP Trunk
 Supports Ethernet IP Trunk
 Supports Ethernet Port Trunk
 Supports QoS on a Trunk interface

SLIDE 58

Link Reliability - Equal Cost Multipath (ECMP)

[Diagram: at the access and convergence layers, a hash over the source IP, destination IP, MPLS label and other fields distributes flows over equal-cost OSPF paths.]

 Supports ECMP, with up to eight equal-cost paths per route
 Supports load balancing of IP or MPLS traffic
 Supports per-flow hash-based load balancing, minimizing packet reordering
 Ensures service reliability by diverting traffic to the remaining active paths after a path failure
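Per-flow hashing as described above can be sketched roughly as follows (MD5 is used only for illustration; the slide does not specify the actual hardware hash function):

```python
import hashlib

def ecmp_path(src_ip, dst_ip, label, n_paths=8):
    """Per-flow ECMP: hash flow-identifying fields so every packet of
    one flow takes the same path (avoiding reordering), while different
    flows spread across up to n_paths equal-cost paths."""
    key = f"{src_ip}|{dst_ip}|{label}".encode()
    return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % n_paths

# The same flow always hashes to the same path index:
p = ecmp_path("10.0.0.1", "192.168.1.9", 1027)
assert p == ecmp_path("10.0.0.1", "192.168.1.9", 1027)
assert 0 <= p < 8
```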

SLIDE 59

VRRP Router Redundancy

[Diagram: routers A and B run two VRRP groups toward the Internet — VRID1 (10.1.1.1, master on A) and VRID2 (10.1.1.2, master on B) — with monitored uplink interfaces; hosts use 10.1.1.1 or 10.1.1.2 as their gateway. Configuring multiple VRRP groups implements gateway redundancy and load sharing.]

 Supports BFD for VRRP to provide faster switchovers

BFD for VRRP

SLIDE 60

Uninterrupted Service During an Active-Standby Switchover

[Diagram: during an active/standby control-plane switchover, the protocol session that would otherwise be torn down remains uninterrupted, while the FIBs on the LPUs keep forwarding through the crossbar.]

 During an active-standby switchover, intra-board/inter-board data forwarding/services remain uninterrupted, realizing NSF.

SLIDE 61

Full Graceful Restart Support

[Diagram: the restarting router notifies its adjacent routers to start GR, so routes are not deleted during the short interruption; the session continues across the active/standby switchover, implementing graceful restart.]

 GR features are fully supported, including GR for OSPF, IS-IS, BGP, LDP, and RSVP  The network remains stable during the active-standby switchover. After the switchover, the device quickly learns the network routes by communicating with adjacent routers.  Forwarding remains uninterrupted during the switchover to realize NSF.

SLIDE 62

BFD Support

[Diagram: BFD runs between the SRPUs/LPUs of two routers with a universal fast handshake (10 ms), raising a fault alarm when bidirectional forwarding detection fails.]

 BFD (Bidirectional Forwarding Detection, defined by IETF specifications) provides fast detection of node and link faults, with a default handshake time of 10 ms (configurable).
 BFD provides lightweight, quick, real-time detection for any medium and protocol layer, with a wide range of detection times and costs.
 BFD can detect faults on any type of path between systems, including direct physical links, virtual circuits, tunnels, MPLS LSPs, multi-hop routed paths, and indirect paths.
 BFD detection results can be used to trigger IGP fast convergence, FRR, and so on.
 BFD is accepted in the industry and has been widely deployed.
 The SR8800 supports BFD for BGP/OSPF/IS-IS/RSVP/VRRP and provides link fault detection within 30 ms.
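BFD's detection time is simply the packet interval multiplied by the detect multiplier; a one-line sketch (illustrative, with an assumed multiplier of 3) reproduces the 30 ms figure:

```python
def bfd_detection_time_ms(interval_ms, detect_mult):
    """BFD declares a session down after detect_mult consecutive
    control packets are missed: detection time = interval * multiplier."""
    return interval_ms * detect_mult

# A 10 ms handshake interval with the common multiplier of 3 gives the
# slide's "fault detection within 30 ms".
print(bfd_detection_time_ms(10, 3))  # 30
```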

SLIDE 63

[Diagram: BFD-triggered FRR between a core node and convergence/access nodes, each pair connected by an active path and a standby path.]

Fast Switchover upon Link Failure: IP/LDP FRR

 BFD is enabled on the link to detect link failures.
 The nodes are configured with backup ports, routes, and LSPs; the implementation is local and requires no cooperation from adjacent devices, simplifying deployment.
 Solves the traditionally slow convergence of IP and MPLS forwarding, protecting links, nodes, and paths without establishing a dedicated backup LSP for each.
 Achieves restoration within 50 ms, with the restoration time independent of the number of routes and link switchover without waiting for route convergence. The technology is mature and widely deployed.

SLIDE 64

Fast Path Switchover Through FRR


  • 1. Fast fault detection: BFD/fast HELLO packets
  • 2. Switching to the pre-set path
  • 3. IGP/LDP convergence along the backup path

[Diagram: after fast fault detection, traffic switches at the CR to the pre-set backup path through the BR toward the AR, followed by IGP/LDP convergence.]

FRR type | Protection mode | Convergence time
IP FRR | Node-by-node protection | 50 ms
LDP FRR | Node-by-node protection | 50 ms
TE FRR | Node-by-node and port-to-port protection | 50 ms

SLIDE 65


Fault detection and service switchover are implemented within 50 ms, which is much shorter than the restoration time of 10 seconds in traditional MPLS.

PE Node Reliability Protection: VPN FRR


 The service system is connected to the SR8800 devices in the bearer network through dual links (CE dual-homing).
 Multi-hop BFD runs between the SR8800 acting as a PE and the peer PE to quickly detect the peer PE's reachability.
 Active and standby forwarding entries pointing to the active and standby PE devices are configured in advance on the peer PE. This solves the long end-to-end service convergence caused by PE node faults in MPLS VPN networks with dual-homed CEs, and makes the PE restoration time independent of the number of private network routes carried by the PE.

SLIDE 66

 Core Router Development Tendency  H3C SR8800 Overview  H3C SR8800 Technical Characteristics

  • System Architecture
  • Service Capability
  • High Reliability
  • High Security

 H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 67

 Advanced system architecture, rich security protocols, and strict service access control make the core router a security gateway for service access.

Management security: SSH, RADIUS, TACACS+, SYSLOG
Forwarding security: massive bidirectional ACL rules, URPF, NetStream, mirroring, line-rate IPS/FW/IPSec, broadcast/abnormal traffic suppression
Routing security: routing protocol MD5 authentication, isolation between the management and service planes, secure Comware routing software
Service access security: ARP packet rate limiting, address binding, control information filtering and restriction, port isolation within a VLAN, NQA

Comprehensive Security Features

SLIDE 68
[Diagram: data flows cross the crossbar between LPUs; control flows climb to the SRPU CPU through three levels of protection — packet filtering, DoS attack prevention, and strict line-rate limiting of traffic delivered to the control plane.]

 Three-level protection makes the SR8800 invulnerable to network attacks.

SLIDE 69

[Diagram: a normal packet and an attacking packet (carrying a virus) arrive with the same addresses on different interfaces; the NP consults the FIB (e.g. destination 202.98.3.0 via POS3/1/1) to check each packet's reverse-path route and inbound interface before switching it.]

GE2/0/1  The attacking packet uses the same destination & source IP addresses as that of the normal packet,

  • r uses a randomly generated source IP address.

 Before forwarding a packet based on its destination IP address, the SR8800 checks the reverse path route of the packet. If the reverse path exists and the inbound interface is correct, the packet is forwarded as normal.  So the normal packet can be forwarded with no problem. But the attaching packet will be discarded due to lack of reverse path route or incorrect inbound interface.  Prevents source spoofing and distributed attacks.  Supports distributed UFPR.

URPF Secure Forwarding
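The strict URPF check described above can be modeled in a few lines (a toy FIB keyed by prefix, with longest-prefix matching omitted for brevity; names are illustrative):

```python
def strict_urpf(fib, src_route_key, in_iface):
    """Strict URPF: accept a packet only if the FIB has a route back to
    the source AND that route points out the interface the packet
    arrived on. fib maps a route key to its outgoing interface."""
    out_iface = fib.get(src_route_key)
    return out_iface == in_iface

fib = {"202.98.3.0/24": "GE2/1/1", "10.10.87.0/24": "POS3/1/1"}
# A normal packet sourced from 202.98.3.0/24 arriving on GE2/1/1 passes.
assert strict_urpf(fib, "202.98.3.0/24", "GE2/1/1")
# A spoofed source with no reverse route, or the wrong inbound interface,
# is dropped.
assert not strict_urpf(fib, "1.2.3.0/24", "GE2/1/1")
assert not strict_urpf(fib, "202.98.3.0/24", "POS3/1/1")
```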

SLIDE 70

IPS/FW

[Diagram: Web filtering (HTTP), virus filtering (HTTP/FTP/SMTP/POP3/IMAP) and junk mail filtering (SMTP/POP3/IMAP) turn original packets, possibly carrying viruses, into packets with clean content before delivery.]

Step 1: Typical checks
  • Perform deep inspection of all incoming packets.
  • Verify the source and destination addresses of all incoming packets.
  • Verify protocols.
Step 2: Data normalization
  • Enqueue all packets that have passed verification.
  • Perform standard reassembly of the received packets to provide an overall context for data transmission.
Step 3: Signature-based filtering
  • Protocol-anomaly filtering
  • Traffic-anomaly filtering
  • Vulnerability-based filtering
  • Mass protocol decoding
Step 4: Context reassembly
  • All packets that pass the secure broadband IPS have their content fully checked and cleaned (or isolated), and are then delivered to the protected user network.

SLIDE 71

ARP Attack Prevention

 An attacker sends ARP packets using the MAC address of another user in order to modify the corresponding ARP entries maintained by the gateway, possibly disconnecting valid users from the network.
 To guard against such attacks, the SR8800 provides two methods:
 Fixing the MAC address. The SR8800 fixes (freezes) the MAC addresses of the dynamic ARP entries it learns until the entries age out. This method, which prevents the ARP entries of valid users from being modified, can work in fixed-mac or fixed-all mode: fixed-mac mode forbids modifying the MAC address but allows modifying the VLAN ID and port number of an ARP entry, while fixed-all mode forbids any modification of the MAC address, VLAN ID, or port number of dynamic or resolved non-permanent static ARP entries.
 Active acknowledgement (send-ack). The SR8800 does not modify an ARP entry directly even if it receives an ARP packet with the same IP address but a different MAC address; instead, it sends a unicast to the original user for confirmation. If a reply is received within a certain period, the MAC address cannot be updated within the following minute (similarly, a new ARP entry cannot be modified within one minute of its creation); if no reply is received, a unicast is sent to the new user, and once a reply is received the ARP entry is updated and the new user becomes valid.

[Diagram: forged ARP packets cause the ARP tables of the gateway (10.1.1.1) and the hosts at 10.1.1.20 and 10.1.1.50 to bind IP addresses to the wrong MAC addresses.]

SLIDE 72

Address Scanning Attack Prevention

 In an address scanning attack, the attacker sends a large number of packets with varying destination IP addresses to the target network. If the attacker scans an entire segment behind a network device, the device is forced to send ARP requests to all addresses of the segment, and it must also send host-unreachable messages for nonexistent addresses. If the segment is large enough, the attack traffic consumes excessive CPU and memory resources, causing network failures.
 The SR8800 implements address scanning protection. If the router receives a packet whose destination IP address belongs to a connected network segment but for which no route is available, it sends an ARP request and then generates a discard entry for that destination IP address, shielding the CPU from further such packets. If an ARP reply is received, the discard entry is removed immediately and a route entry is added; otherwise the discard entry ages out after a certain period. This mechanism defeats address scanning attacks while ensuring proper processing of normal traffic.

[Diagram: attacker B forges a large number of packets toward 192.168.10.100-102/24; the router answers with "destination unreachable" and then drops subsequent packets via discard entries.]

SLIDE 73

MAC Address Flooding Attack Prevention

 In a MAC address flooding attack, the attacker sends packets with forged, continuously changing MAC addresses. The network device learns these MAC addresses upon receiving the packets and writes entries into its MAC address table. If the table fills up with forged entries, valid MAC address entries can no longer be added. Furthermore, such attack packets are broadcast within the VLAN, severely impacting router performance and normal network services.
 The SR8800 prevents MAC address flooding by limiting the number of MAC address entries associated with a given port or VLAN. You can set the maximum number of MAC address entries kept for a port or VLAN according to the number of attached hosts. You can also configure whether to forward a packet that arrives with a new MAC address on a port that has reached its maximum. This prevents unknown packets from being broadcast within the VLAN and affecting other devices.

[Diagram: attacker B forges a large number of MAC addresses; with port security limiting each port to three MAC addresses, the forged addresses beyond the limit are not learned.]
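The per-port learning limit can be modeled with a toy sketch (illustrative, not the SR8800's implementation; the drop-versus-forward choice for over-limit frames mirrors the configurable behavior described above):

```python
class PortMacTable:
    """Toy per-port MAC learning limit: new source MACs are learned only
    while the port is under its configured maximum, capping the damage
    of a MAC flooding attack."""
    def __init__(self, max_entries, forward_unknown=False):
        self.max_entries = max_entries
        self.forward_unknown = forward_unknown
        self.table = set()

    def receive(self, src_mac):
        """Return True if a frame with this source MAC is forwarded."""
        if src_mac in self.table:
            return True
        if len(self.table) < self.max_entries:
            self.table.add(src_mac)       # learn the new address
            return True
        return self.forward_unknown       # over the limit: configurable

port = PortMacTable(max_entries=3)
for mac in ("00:0e:00:aa:aa:aa", "00:0e:00:bb:bb:bb", "00:0e:00:cc:cc:cc"):
    assert port.receive(mac)
# A fourth, forged MAC is not learned and (by default) its frames drop.
assert not port.receive("00:0e:00:dd:dd:dd")
```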

SLIDE 74

STP Attack Prevention

[Diagram: an attacker sends BPDUs to become root bridge; BPDU guard and root guard on the edge ports block the attempt.]
 STP attack: the attacker can obtain network topology information that should be confidential to it, and, although STP takes link speed into consideration, an attacker that becomes root can still reduce a gigabit backbone to a 10-megabit, half-duplex network through a topology change.
 BPDU guard prevents specific ports from participating in STP: untrusted ports are shut down upon receiving BPDUs from other devices, preventing access by unauthorized routers.
 Root guard prevents a newly added router from becoming root: if the router attempts to become root, the root-guard-enabled port stops working.

SLIDE 75

 Core Router Development Tendency  H3C SR8800 Product Overview  H3C SR8800 Technical Characteristics  H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 76

Enterprise Network

[Diagram: an SR8800 core connects LAN 1 and LAN 2 (isolated via MPLS), SDH and N*E1 links, and the Internet, aggregating IPSec, L2TP+IPSec and SSL access from branch offices and mobile offices.]

SR8800 Implementation in Enterprise Network

 Internet access: NAT, firewall
 Core node for the LAN
 MPLS for LAN isolation
 E1 aggregation
 VPN aggregation: IPSec, L2TP+IPSec, SSL

SLIDE 77

Service Router in Metro Ethernet

[Diagram: SR8800s at the Metro Ethernet edge sit between the aggregation switches — which collect traffic from access switches, DSLAMs, CMTSs and MSCGs — and the P routers of the backbone, toward the Internet and application servers.]

SR8800 full VPN support:
 MPLS L3 VPN, inter-AS options A, B and C
 MPLS L2 VPN: VLL/VPLS/H-VPLS
 MPLS PWE3 emulation
 VLL/VPLS to MPLS L3 VPN
 Multicast VPN
SR8800 additional features:
 Multiple WAN interface types: POS, E1, ATM
 HQoS: three levels of HQoS queue scheduling, 384K queues
 Availability: hot patches, redundant design

SLIDE 78
[Diagram: an MPLS VPN backbone with SR8812/SR8808 routers at the core layer over GE/10GE, SR8808/SR8805 routers at the aggregation layer with 155M POS, N*E1/E3, 2.5G CPOS and 622M/155M CPOS links, SR8805/SR8802 PEs serving VPN1/VPN2 sites in MANs A, B and C, and an SR8812 connecting the data center and multicast server.]

SLIDE 79

 Core Router Development Tendency  H3C SR8800 Product Overview  H3C SR8800 Technical Characteristics  H3C SR8800 Networking  H3C SR8800 Applications

Catalog

SLIDE 80

The previous network used a star topology, which had poor redundancy and scalability and contained single points of failure. The current network uses four SR8800s to build a gigabit Ethernet ring with both device and link redundancy, eliminating single points of failure and improving scalability. The ring can easily be upgraded to 10GE.

Guangzhou E-Government Network

[Diagram: four SR8812s form the ring across the government information center, residence area, committee core and government core nodes, interconnecting S8512 and S6506 switches, C7513 routers, NE05 access routers and the health care network at the municipal government and committee buildings.]

SLIDE 81

[Diagram: two SR8812s connect ISP1/ISP2 and four S8512 switches serving the administration center and central computer room, with S5516 access switches, an IDS system, a Fudan Guanghua S-Audit system and a DMZ government website cluster.]

In the previous network, business units were not clearly delimited and information exchange between units was not secure; network expansion and inter-unit data exchange were also difficult. After reconstruction using MPLS VPN, access within a business unit is allowed while communication between different units is segregated. All networks run on a unified platform without changing the previous applications and addresses. Based on the MPLS/BGP implementation, VPNs with QoS features are supported, and nodes can easily be added to or removed from the network: adding a new node only requires configuration on the PE, and services on new nodes do not impact existing services.

Nantong E-Government Network

SLIDE 82

[Diagram: an SR8805 connects S3050C access switches in the municipal government building, a SecPath1000F firewall and an S8500 switch to the data center server cluster and the government affairs server cluster.]

Suzhou E-Government Network

As a county-level node of the Suzhou E-Government Network, the Zhangjiagang E-Government Network connects to it through an SR8805. The SR8805 provides NAT for VPNs, translating potentially overlapping private addresses of different VPNs into public IP addresses of the e-government network so that public resources can be shared. Moreover, the NAT processing of the SR8800 is hardware-based, allowing wire-speed forwarding without impacting services.
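The address-sharing idea can be sketched with a Comware-style outbound NAT fragment. The exact VPN-aware NAT options differ by platform and software release, so treat this purely as an illustration; all addresses and numbers are hypothetical:

```
# Public address pool drawn from the e-government network
nat address-group 1 202.100.1.10 202.100.1.20
#
# Match the VPN's (possibly duplicated) private range
acl number 2001
 rule 0 permit source 10.1.0.0 0.0.255.255
#
# Translate on the uplink toward the e-government network
interface GigabitEthernet3/0/1
 nat outbound 2001 address-group 1
```

Two VPNs that both use 10.1.0.0/16 internally can each be mapped to a distinct public pool, so their hosts reach shared resources without readdressing. Because the SR8800 performs this translation in hardware, packets are not punted to the CPU, which is where the wire-speed claim comes from.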

Zhangjiagang E-Government Network

slide-83
SLIDE 83


[Diagram: a 2×GE backbone with an NE40-8 PE at the provincial level; a municipal-level SR8808 PE and county-level SR8805 PEs with GE Internet links; municipal bureaus attach over MSTP 100 M fiber links, each carried in its own VLAN-VPN; Yuecheng district and the counties connect at 100 M over MSTP.]

Shaoxing E-Government Network

slide-84
SLIDE 84

Provincial government

[Diagram: NE40 P routers at the provincial level; SR8805 routers acting as VPE and PEs, with S3552F aggregation-layer switches, serving Huangyan district, Jiaojiang, Wenling, Sanmen county, Tiantai, and Xianju; VPN1 carries public servers while VPN2 and VPN3 carry resource-sharing traffic on the e-government network; CE devices connect users outside the municipal government building; citizens and people on business trips reach the network from the Internet through a SecPath 1000 authentication gateway, protected by an IPS 100E.]

Taizhou E-Government Network

slide-85
SLIDE 85

[Diagram: two SR8805 P/PE routers at the Administration Information Exchange Center; PEs in Xiacheng, Gongshu, Xihu, and Shangcheng districts and in communities 1-4; the Civil Affairs Bureau (with the citizen card server), City Administration Bureau, and Law Enforcement Bureau attach through PEs; S3552P switches and an IPSec VPN gateway at the center; AR4640 routers with SecPath 1000 and SecPoint devices connect sites over Unicom and Telecom L3 VPNs and a Hangzhou Netcom L2 MPLS VPN; digital city administration terminals, community contact stations, and citizen card service points sit at the edge.]

This network uses two SR8805s, seven AR4640s, dozens of AR2800s, eight SecPath 1000s, and hundreds of SecPoint devices.

Hangzhou Digital City Administration Network

slide-86
SLIDE 86


Hangzhou Public Security Network

[Diagram: an SR8805 at the Hangzhou Public Security Bureau and seven NE20E-8 routers interconnected over an MSTP network; existing and planned 1000 M links to the police school, Office No. 8, and the Shangcheng, Xiacheng, Jianggan, Gongshu, Xihu, scenic area, development zone, Binjiang, traffic, crime investigation, and special force branches; S3552F-HI ×4 serving Xiaoshan district, Yuhang, Fuyang, Tonglu, Jiande, Lin'an, and Chun'an; good interoperability with the existing Cisco equipment.]

slide-87
SLIDE 87


The network of the Ningxia Administration for Industry and Commerce (AIC) is a typical wide area network. The second-level network nodes, the city-level AIC branches, connect to the Ningxia AIC network through dedicated SDH lines. As the core devices of the Ningxia AIC network, the SR8800 series routers provide both E1 interface cards and GE cards, protecting the user's existing investment.
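The N×2 M E1 links shown in the diagrams are typically bundled into one logical WAN pipe with Multilink PPP. A Comware-style sketch, with invented interface and controller numbers:

```
# Logical bundle interface carrying the branch's IP traffic
interface Mp-group 1
 ip address 172.16.1.1 255.255.255.252
#
# Channelize each E1 and add the resulting serial
# interface to the bundle
controller E1 2/0/0
 channel-set 0 timeslot-list 1-31
interface Serial 2/0/0:0
 link-protocol ppp
 ppp mp Mp-group 1
```

Adding a second E1 to the same Mp-group grows the bandwidth without renumbering, which is how N×2 M capacity can scale with a branch's needs.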

[Diagram: two SR8805s at the first-level node, hosting the data center and network management platform; district-level AIC branches connect through AR28 routers over N×2 M E1 links, with FE in the local LANs.]

Network of Ningxia Administration for Industry & Commerce

slide-88
SLIDE 88


[Diagram: two SR8812s interconnected over GE at the provincial center, with an FE-attached server cluster; city nodes 1 through 18 connect over N×2 M E1 links, with PCs at each site.]

Henan Local Taxation Network

slide-89
SLIDE 89


[Diagram: the data center of the Anhui Provincial Department of Labor and Social Security houses SR8805_1/SR8805_2, S7506R_1/S7506R_2, R3600_A/R3600_B, S7500_1/S7500_2, NE20E_1/NE20E_2, and AR4640 routers, with two ISP SDH 2 M links to the Ministry of Labour and Social Security; CPOS 155 M links over SDH fan out E1 circuits to the 17 city-level data centers (MSR50-40, S7500, NE20E, AR4680); districts, counties, communities, hospitals, pharmacy shops, and similar access nodes reach the network through E1, E1/ISDN, modem, ADSL, or LAN, partly via the PSTN.]

Anhui Golden Social Security Project Network

slide-90
SLIDE 90

H3C Technologies Co., Limited.