SLIDE 1

FUTURE INTERNET Testbed @TWAREN

Che-Nan Yang

NCHC, Taiwan

SLIDE 2

Overview

• OpenFlow Testbed in TWAREN
• HPDMnet Multicast Streaming with OpenFlow
• Future Work

SLIDE 3

Future Internet

• The current Internet has many serious limitations:
  • Scalability
  • Security
  • QoS
  • Virtualization
• "Future Internet" is a summarizing term for worldwide research activities dedicated to the further development of the original Internet. (from Wikipedia)

SLIDE 4

Future Internet Testbed

• To support innovation and research in the Future Internet, the testbed requires some advanced capabilities:
  • Programmability
  • Virtualization
  • End-to-end slices

SLIDE 5

OpenFlow

• Makes deployed networks programmable
• Makes innovation easier
• No more special-purpose testbeds
• Validate your experiments on a production network at full line speed

SLIDE 6

OpenFlow Testbed in TWAREN

[Diagram: a NOX controller and OpenFlow switch in TWAREN, with Capsulators connecting the OpenFlow networks at NCKU, at KUAS, and at iCAIR.]

We do not have a pure Layer 2 network in TWAREN, so we use Ethernet-in-IP tunnels instead.

SLIDE 7

Slice Isolation Problem (1/2)

[Diagram: the traffic between NCKU & NCHC is also mirrored to KUAS, so KUAS also sees the traffic between NCKU & NCHC.]

SLIDE 8

Slice Isolation Problem (2/2)

[Diagram: the Capsulator @ NCHC runs a tunnel-port thread and two border-port threads in user space, above the physical interfaces eth0/eth1 in kernel space. The border-port threads listen to all packets from the border port; the tunnel-port thread listens for the MAC-in-IP packets.]

Both threads listen on the same physical interface.

SLIDE 9

Capsulator + Open vSwitch (1/2)

[Diagram: an Open vSwitch kernel module is inserted below the Capsulator. The border-port threads now attach to virtual border ports (tap interfaces) instead of the physical interface, and the Open vSwitch flow table switches between the tap ports and eth0/eth1.]

SLIDE 10

Capsulator + Open vSwitch (2/2)

[Diagram: the Capsulator attaches to the virtual interfaces tap0/tap1 on the Open vSwitch kernel module; the OvS OpenFlow daemon speaks the OpenFlow protocol to the OvS controller and performs flow-based switching between the tap interfaces and the physical interfaces eth0/eth1.]

Using the OpenFlow-enabled Open vSwitch to isolate the traffic of both slices.
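The isolation idea above can be sketched as a toy flow table (illustrative Python, not the actual Open vSwitch code; all port names here are hypothetical): each slice's border port maps only to its own tunnel port, and a table miss is dropped rather than flooded to the other slice.

```python
# Toy model of flow-based slice isolation (illustrative only; not the
# real Open vSwitch datapath). Each flow entry matches on the ingress
# port and forwards to exactly one egress port, so the NCKU and KUAS
# slices never see each other's traffic.

FLOW_TABLE = {
    # in_port (hypothetical name) -> out_port
    "tap0_ncku_border": "tunnel_to_ncku",
    "tunnel_to_ncku":   "tap0_ncku_border",
    "tap1_kuas_border": "tunnel_to_kuas",
    "tunnel_to_kuas":   "tap1_kuas_border",
}

def forward(in_port: str, frame: bytes):
    """Return (out_port, frame) for a matching flow, else None (drop)."""
    out_port = FLOW_TABLE.get(in_port)
    if out_port is None:
        return None          # table miss: drop instead of flooding
    return (out_port, frame)

# A frame from the NCKU slice only ever reaches the NCKU tunnel:
assert forward("tap0_ncku_border", b"\x01\x02")[0] == "tunnel_to_ncku"
# A frame from an unknown port is dropped rather than leaked to a slice:
assert forward("eth_unknown", b"\x01\x02") is None
```

This contrasts with the previous design, where both threads listened on the same physical interface and therefore saw both slices' packets.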

SLIDE 11

Ethernet-in-IP Tunnel
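The tunnel carries Ethernet frames as the payload of IP packets between Capsulator endpoints. A minimal sketch of the encapsulation idea (the framing below is simplified and hypothetical, not the Capsulator's real MAC-in-IP wire format): each frame is wrapped on entry and recovered unchanged at the far end.

```python
import struct

# Simplified encapsulation sketch (hypothetical framing, not the
# Capsulator's actual wire format): prefix each Ethernet frame with a
# 2-byte big-endian length so the far end can recover the original frame.

def encapsulate(frame: bytes) -> bytes:
    """Wrap an Ethernet frame as the payload of a tunnel packet."""
    return struct.pack("!H", len(frame)) + frame

def decapsulate(packet: bytes) -> bytes:
    """Recover the original Ethernet frame from a tunnel packet."""
    (length,) = struct.unpack("!H", packet[:2])
    return packet[2 : 2 + length]

# The round trip must be lossless: broadcast dst, src MAC, IPv4 ethertype.
frame = bytes.fromhex("ffffffffffff0000000000010800") + b"payload"
assert decapsulate(encapsulate(frame)) == frame
```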


SLIDE 12

OpenFlow Testbed with TWAREN VPLS (Scheduled)

[Diagram: the OpenFlow networks at NCHC, NCKU, and KUAS, each behind an OpenFlow switch, interconnected over TWAREN VPLS.]

SLIDE 13

TWAREN International Circuit


SLIDE 14

International OpenFlow Testbed


SLIDE 15

International GENI (iGENI) Testbed

SLIDE 16

Video Streaming over High Performance Future Internet

[Diagram: streaming servers and clients at the NCKU, KUAS, and iCAIR booths, connected through the HPDMnet and TWAREN OpenFlow testbeds over 10 Gbps and 1 Gbps links; the stream is 622 Mbps.]

SLIDE 17

HPDMnet Overview

• An international consortium of research centers has formed a cooperative partnership to address key challenges and opportunities related to using dynamically provisioned lightpaths for High Performance Digital Media (HPDM).
• Multiple sites require high-performance, high-volume, high-definition digital media streaming simultaneously among all locations (point-to-multipoint, multipoint-to-point, multipoint-to-multipoint).
• The consortium is designing and developing new L1/L2 capabilities that can provide large-scale HPDM services, which can be used for any data-intensive application, not just digital media.

SLIDE 18

HPDMnet Consortium Members

CANARIE
Communications Research Centre (CRC), Canada
Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
i2CAT
Inocybe
Institute of Computer and Network Engineering, Technische Universität Carolo-Wilhelmina zu Braunschweig
International Center for Advanced Internet Research (iCAIR), Northwestern University
Korea Institute of Science and Technology Information (KISTI)
National Center for High-Performance Computing (NCHC), Taiwan
National Center for Supercomputing Applications (NCSA), University of Illinois at Urbana-Champaign
NetherLight
Nortel
SARA
StarLight
SURFnet
Synchromedia
Braunschweig University of Art
University of Essex
University of Amsterdam

SLIDE 19

HPDMnet Layer1 Topology


SLIDE 20

HPDMnet Layer2 Topology


SLIDE 21

Lessons Learned

• Video transferred over the FI testbed is not as smooth as over the legacy Internet.
• Mosaic artifacts appear every second.

SLIDE 22

• Because IGMP is not supported in OpenFlow, we have to manually insert the multicast streaming flows into the flow table.
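Such a manually inserted flow has to match the stream's multicast destination and output to every receiver port. A sketch of the idea (illustrative Python, not the testbed's controller code; the flow-entry schema below is hypothetical), using the standard IPv4-to-Ethernet multicast mapping (the 01:00:5e prefix followed by the low 23 bits of the group address):

```python
import ipaddress

def group_to_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet multicast MAC:
    01:00:5e prefix followed by the low 23 bits of the group address."""
    addr = int(ipaddress.IPv4Address(group))
    low23 = addr & 0x7FFFFF
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0xFF, (low23 >> 8) & 0xFF, low23 & 0xFF)

def static_multicast_flow(group: str, out_ports: list) -> dict:
    """Build a static flow entry (hypothetical schema) replacing the
    missing IGMP behavior: match the group's destination MAC and
    IPv4 ethertype, output to every receiver port."""
    return {
        "match": {"dl_dst": group_to_mac(group), "dl_type": 0x0800},
        "actions": [("output", p) for p in out_ports],
    }

# The stream to 239.1.1.1 is replicated to ports 2, 3, and 4:
flow = static_multicast_flow("239.1.1.1", [2, 3, 4])
assert flow["match"]["dl_dst"] == "01:00:5e:01:01:01"
```

One entry per group is enough as long as the receiver set is static, which is exactly the limitation of doing by hand what IGMP snooping would otherwise maintain dynamically.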

SLIDE 23
SLIDE 24

Future Work

• Extend the FI testbed
• Develop inter-OFCloud control and monitoring with domestic universities

SLIDE 25

Future Internet Testbed @ Taiwan

[Diagram: OpenFlow networks at NCHC, NCKU, KUAS, NTUST, NCU, CHT-TL, and iCAIR/Chicago (iGENI), interconnected over the TWAREN VPLS VPN; a Capsulator at NTUST.]

SLIDE 26

Monitoring on Multi-OFCloud

[Diagram: a monitoring console on the management plane connects to the controllers of the OF Clouds at NCHC, NCKU, NTUST, KUAS, and NCU; each controller manages its own cloud's data plane.]

SLIDE 27

Inter-Cloud Control and Monitoring

• Each cloud has its own OF controller.
• Each controller manages topology and flow provisioning inside its cloud.
• An inter-cloud flow can be made by connecting partial flows provisioned by each cloud's controller.
• Lack of a global view for inter-cloud flows
• No loops allowed in the inter-cloud topology
• Difficult to support QoS or SLA functions across clouds
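The no-loop constraint on the inter-cloud topology can be checked before each inter-cloud link is added, for example with a union-find structure. A sketch (illustrative Python, not the testbed's code; the cloud names are examples only):

```python
# Illustrative loop check for the inter-cloud topology (not the
# testbed's actual code). Union-find over clouds: a new inter-cloud
# link is rejected if its two endpoints are already connected, since
# that link would close a loop.

class LoopFreeTopology:
    def __init__(self):
        self.parent = {}          # cloud -> parent cloud in the forest

    def _find(self, cloud: str) -> str:
        """Return the representative of the cloud's connected component."""
        self.parent.setdefault(cloud, cloud)
        while self.parent[cloud] != cloud:
            cloud = self.parent[cloud]
        return cloud

    def add_link(self, a: str, b: str) -> bool:
        """Add an inter-cloud link; return False if it would form a loop."""
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return False          # already connected: reject the loop
        self.parent[ra] = rb      # merge the two components
        return True

topo = LoopFreeTopology()
assert topo.add_link("NCHC", "NCKU")
assert topo.add_link("NCKU", "KUAS")
assert not topo.add_link("KUAS", "NCHC")   # rejected: would close a loop
```

Keeping the inter-cloud graph a tree also makes the flow-stitching step unambiguous, since there is exactly one inter-cloud path between any two clouds.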

SLIDE 28


Thank You