NetEvents EMEA Press and SP Summit
Draft Special Guest Speaker Presentation: Datacentre Interconnection and the Need for Speed

Speakers:
Nicolas Fischbach, Director of Strategy, Architecture & Innovation, Colt
Mike Capuano, VP Corporate Marketing, Infinera
Manek Dubash, Editorial Director, NetEvents

Manek Dubash
Good morning, NetEvents. Morning. Did you have a good time last night? Yes. Good. Okay, so to kick off without further ado, I'm delighted to welcome our two keynote speakers, Nicolas Fischbach from Colt and Mike Capuano from Infinera. And we're going to be talking about the need for speed: datacentre interconnect. Come on down.

Mike Capuano
So I guess we're going to square off.

Nicolas Fischbach
Yes, that's what we do in [picture] dynamics. It's me with a tie, so don't expect more. I think we could fight, but we have a long-term relationship with these guys and it's more friendship than fighting, so it's going to be just like it is.

What I want to talk about a little bit, in the context of datacentre interconnect and what Infinera is launching and announcing, is how we at Colt see the datacentre evolve. We've been around for 20 years, as I mentioned the other day, and we own 20 datacentres today spread across Europe, which you'll see on the map. This is more for the guys at the back, but all of these [three elements] you can see here are datacentres that we have spread over Europe. Some cities have one; other cities – look at London – have three datacentres. They're all pretty large, and some of them have actually filled up.

And everything you see in green on this picture, with the exception of the links going west to the US, those are all [powered] by Infinera. We've been deploying Infinera DTN equipment since 2009, if I'm not mistaken, and this year we started to roll out the DTN-X, which is the next-generation platform. We started to do it in Germany, over here. So the sub rings – what we call sub rings here, which is the fibre that sits in the ground that we own – are being powered and lit up by Infinera DTN-X to deliver 100gig services at pace, and not 10gig any more like it was in the past. And you can see we go through a lot of evolution there.

At the bottom here I mentioned some of the things we participate in: the MEF, which you heard mentioned yesterday, the ONF, the Open Networking Foundation, as well as [NFV]. So we're really trying to drive change. When we can, we pave the way, and sometimes we do it in partnership with those guys. Sometimes it's also [the room], so do not hate me.

So what we see – okay, that's the widescreen format – but basically I want to focus a little bit on datacentre connectivity, both internally and externally. This is the evolution we've seen at Colt, but it applies pretty well to a number of datacentre operators. Do they own a network, yes or no? Are they pure providers? Do they depend on others? Overall I think that's a picture you see happening across the world. It's not Europe-specific; you'll probably see something very similar in the US – probably [further along] in the US than in Europe – and in Asia-Pac they are more driven by high bandwidth. But this kind of combination reflects really where we are today.

So back in the day, external connectivity to the datacentre was purely internet; there was only client-to-server access over the internet. And everything else, like back-up, storage and so on, was [dedicated] high-speed services for some, inside the datacentres. I'm talking about the stuff here on the left – the campus model. If you all remember the Cisco campus model: three tiers, [core, aggregation,] access. It was good in the early days because all the [CCIEs] knew exactly how to operate it. The problem was you are [inaudible] for the [calls]; you have the limitations of VLANs. I think you've heard that story for a very long time. Most of the connectivity was 1gig, and the physical part was 1gig [copper] or sometimes 1gig fibre. That's where we were from the early 2000s to probably the late 2000s, early 2010s.

And then came the model we are in now. I'm saying "now" mostly to give a reference, because it's plus or minus depending on where you are and who you are. So people have switched to IP VPN, and they use a lot of Ethernet with the growth of carrier Ethernet. The internet's not going [away], because the consumer still needs to be able to access it. You see a lot of dedicated high-speed services. And inside the datacentre you see the first evolution of people moving away from this campus model to this fabric model – you know, the leaf-spine model – using an SDN overlay on top of it to address connectivity models, with 10gig ports. Most people are somewhere here today.
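As a rough, illustrative aside to the fabric discussion above, the sketch below shows the kind of sizing arithmetic behind a leaf-spine design: how much server-facing 10gig capacity a leaf carries versus its uplink capacity towards the spine. The port counts and speeds are assumptions chosen for the example, not figures from the talk.

```python
# Illustrative only: rough leaf-spine sizing arithmetic. The port counts and
# speeds below (48 x 10G server ports, 6 x 40G uplinks per leaf) are assumed
# numbers for the example, not figures from the talk.

def leaf_oversubscription(server_ports, server_speed_gbps, uplinks, uplink_speed_gbps):
    """Ratio of server-facing capacity to spine-facing capacity on one leaf switch."""
    downlink_capacity = server_ports * server_speed_gbps   # towards servers
    uplink_capacity = uplinks * uplink_speed_gbps          # towards the spine
    return downlink_capacity / uplink_capacity

if __name__ == "__main__":
    ratio = leaf_oversubscription(server_ports=48, server_speed_gbps=10,
                                  uplinks=6, uplink_speed_gbps=40)
    print(f"Leaf oversubscription: {ratio:.1f}:1")  # prints 2.0:1 with these numbers
```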

And then that's where we're going. The internet's not going away, and Ethernet is still pretty strong in the middle space. There's an evolution where you need an optical datacentre interconnect, because there's much more east/west traffic: when you have more than one datacentre in a single region and you need to interconnect them, not just for back-up and restore but also to address this east/west traffic demand from customers. And not just between our own datacentres, but also with carrier hotels and with the customers' own datacentres, where they want to mix private cloud or IP cloud models. So that's pretty important.

And then inside the datacentre, at the bottom, you see this trend of going to a [fully] software-defined datacentre. We're not there yet – [inaudible] might tell you we are – but that's still in the works. There's a lot of transformation going on in storage, back-up, compute and so on. Connectivity-wise, you're moving to either 40gig or 100gig, depending on whether you're going to make the leap or not. And there are many more constraints on power and on the technical infrastructure: how do you address the concentration of heat, the airflow system, the number of users, the physical space constraints and so on? That's very important, and that's where Mike is going to explain what they do about it.

And finally, that's just a summary picture of how we see all this stacking up. I've put in pictures of two datacentres using this leaf-spine model, which is the model you have with the compute sitting there. What we have is an integrated layer two/layer three environment that provides connectivity, and what we'll address today is this layer one optical domain network – I've put these links in red, or actually orange, here. And this is coming: depending on where you are – in Asia-Pac there's a big trend, I think the US [is ahead of us], and in Europe my opinion is that it's just starting – it's probably the perfect timing for you guys to launch this.

And where we're taking this is that we want to do more than just have physical links; you actually want to automate and orchestrate this. It's not any more about just having point-to-point high-value services that are configured once, never touched again and cannot evolve. You want to make sure that whatever you put up here can be driven by what sits down here. So the whole integration – is it basic automation using scripts? Is it an integration application? Is it an SDN model? – really depends on the environment. But the bandwidth-flexing requirements, the capacity management and the QoS experience up here are super important. And what people want to do in the end is really drive all of this to build what we call software-defined datacentres. These are all the building blocks that, in our view, compose it. What I've done here is just highlight one that people sometimes forget: this datacentre interconnect piece, which, Mike, you're going to address. Over to you.

Mike Capuano
Great. Thanks. Thanks, Nico. Good context. So the title is datacentre interconnection and the need for speed. I'm going to start at a high level to show you what we mean by east/west traffic. This is an example of an internet content provider, Facebook. If any of you happen to use Facebook, or your kids use Facebook, you go and you want to see your page, your homepage: you send a 1KB HTTP request up to the Facebook datacentre. So that's kind of the user to the cloud. And then once you get inside the…
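Picking up Nicolas Fischbach's earlier point about driving the layer one optical links from what sits above them ("is it basic automation using scripts? Is it an integration application? Is it an SDN model?"), here is a minimal sketch of the script-driven option: a small client asking an SDN controller to flex the bandwidth of a datacentre interconnect link. The controller URL, endpoint path, payload fields and link name are all hypothetical placeholders; the talk does not describe any specific Colt or Infinera API.

```python
# Hedged illustration of the "basic automation using scripts" option: a small
# client asking an SDN controller to resize an optical DCI service. The
# controller URL, endpoint, payload schema and link name are hypothetical
# placeholders, not a real Colt or Infinera API.
import requests

CONTROLLER = "https://sdn-controller.example.net/api/v1"  # hypothetical endpoint

def flex_dci_bandwidth(link_id, gbps):
    """Request a new bandwidth (in Gbit/s) for a datacentre interconnect link."""
    response = requests.put(
        f"{CONTROLLER}/dci-links/{link_id}/bandwidth",
        json={"requested_gbps": gbps},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # e.g. scale an inter-datacentre link up to 100G ahead of a backup window
    print(flex_dci_bandwidth("dc1-dc2", 100))
```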
