

  1. Cloud Infrastructure Planning Chapter Six

  2. Topics
     Key to successful cloud service adoption is an understanding of the underlying infrastructure.
     • Understanding cloud networks
     • Leveraging automation and self-service
     • Understanding federated cloud services
     • Achieving interoperability

  3. Understanding Cloud Networks
     Cloud networks provide:
     • Scalability: expand to meet variable requirements.
     • Resiliency: remain accessible even in the event of a loss of power or a network device.
     • Throughput: support the transfer of large amounts of data, particularly between cloud hosting servers.
     • Simplified management: resource allocation and reallocation simple enough that the consuming organization can easily manage configuration and changes.

  4. Open Systems Interconnection (OSI) Model
     • Each logical layer has specific functionality, described in Table 6.1 (next slide).
     • Private cloud networking is commonly implemented using Layer 2 or Layer 3 technology, or a combination of both.
     • There is much debate about which is the better choice.
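
Table 6.1 is not reproduced in this transcript; as a quick stand-in, the sketch below lists the standard names of the seven OSI layers (general networking knowledge, not the table's own wording) and flags the two layers the following slides focus on.

```python
# Standard OSI layer names (general networking knowledge, not Table 6.1 itself).
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",    # IP routing: "Layer 3" cloud networks operate here
    2: "Data Link",  # switching and MAC addressing: "Layer 2" cloud networks
    1: "Physical",
}

for number in sorted(OSI_LAYERS, reverse=True):
    print(f"Layer {number}: {OSI_LAYERS[number]}")
```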

  5. Layer 2 Cloud Networks
     • In a Layer 2 network, elements of the cloud network infrastructure share the same address space (the same network subnet), so every address receives broadcasts and service announcements from all the others.
     • Devices interconnect directly through locally switched networking, without the need for routers to pass data between participating devices and services.
     • Layer 2 networks can be easier to manage because all IP and MAC addresses share a common network communication partition.
     • Customers don't need to modify their network settings to transition to cloud-hosted service alternatives.
     • However, Layer 2 clouds can be overwhelmed if devices are oversubscribed to the point that they compete for network bandwidth and the segment becomes congested.
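
A simple way to reason about the "shared address space" point is to check whether two hosts fall inside the same subnet: if they do, traffic between them can stay on the locally switched segment; if not, a router has to carry it. The sketch below uses Python's standard ipaddress module; the addresses and prefix are made-up examples.

```python
import ipaddress

# Hypothetical addresses and subnet, used only for illustration.
subnet = ipaddress.ip_network("10.20.0.0/24")
host_a = ipaddress.ip_address("10.20.0.15")
host_b = ipaddress.ip_address("10.20.0.200")
host_c = ipaddress.ip_address("10.21.3.7")

def same_layer2_segment(a, b, net):
    """True if both addresses fall inside the same subnet, i.e. they can
    exchange frames via local switching without crossing a router."""
    return a in net and b in net

print(same_layer2_segment(host_a, host_b, subnet))  # True  -> switched locally
print(same_layer2_segment(host_a, host_c, subnet))  # False -> needs Layer 3 routing
```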

  6. CSMA/CD
     • Carrier Sense Multiple Access with Collision Detection (CSMA/CD) access control allows multiple devices to share the same network segment: a device transmits a packet of data and then checks whether another device transmitted at the same time.
     • When a collision occurs, both devices wait a random amount of time before resending the packet.
     • When a network becomes oversubscribed, it has so many devices that collisions are detected very regularly, and the resulting delays begin to impede data exchange and service availability.
     • Segmenting a network using Layer 3 routers can reduce this competition by reducing the number of neighbors with which a device shares the same network segment.
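
The effect of oversubscription on a shared segment can be illustrated with a toy slotted model of CSMA/CD: as the number of stations on one segment grows, a larger share of transmission attempts end in collisions. This is only a rough sketch, not a faithful Ethernet simulation, and all parameters are made up.

```python
import random

def simulate_csma_cd(stations, slots=10_000, p_transmit=0.05, seed=1):
    """Toy slotted CSMA/CD model: each station tries to send with probability
    p_transmit per slot; more than one sender in a slot is a collision, and
    colliding stations sit out a random backoff before retrying."""
    rng = random.Random(seed)
    backoff = [0] * stations
    collisions = successes = 0
    for _ in range(slots):
        senders = [s for s in range(stations)
                   if backoff[s] == 0 and rng.random() < p_transmit]
        backoff = [max(b - 1, 0) for b in backoff]   # count down existing backoffs
        if len(senders) == 1:
            successes += 1
        elif len(senders) > 1:
            collisions += 1
            for s in senders:                        # random wait before resending
                backoff[s] = rng.randint(1, 16)
    return collisions / max(collisions + successes, 1)

for n in (5, 20, 80):
    print(f"{n:3d} stations -> collision ratio {simulate_csma_cd(n):.2f}")
```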

  7. Layer 3 Cloud Networks
     • In a Layer 3 network, cloud resources are interconnected through routers, which allows resources to be located across multiple address ranges and in multiple locations.
     • Layer 3 networks can bridge resources between locations, but they require an understanding of subnetwork structure to properly separate groups of devices into manageable "neighborhoods" that reduce competition and data collisions between devices.
     • With subnetting, Layer 3 cloud resource counts can be expanded to include a virtually unlimited number of devices.

  8. Routed Subnetting
     • Routed subnetting breaks the network into many subnetworks, similar to neighborhoods of homes served by separate feeder roads so that all traffic does not have to share the same access route.
     • Layer 3 networking also allows widely separated network subnets to exchange data, routing packets across public or private network connections much as telephone calls connect devices in different area codes, which makes it possible to connect offices in different locations.
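
Carving a larger address block into routed "neighborhoods" can be sketched with Python's standard ipaddress module; the 10.0.0.0/16 block and the /24 prefix below are arbitrary example values.

```python
import ipaddress

# Hypothetical private address block, used only for illustration.
campus = ipaddress.ip_network("10.0.0.0/16")

# Break the /16 into /24 "neighborhoods"; a Layer 3 router forwards traffic
# between them instead of every host sharing one segment.
neighborhoods = list(campus.subnets(new_prefix=24))

print(len(neighborhoods))              # 256 subnets
print(neighborhoods[0])                # 10.0.0.0/24
print(neighborhoods[0].num_addresses)  # 256 addresses (254 usable hosts)
```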

  9. Combined Layer 2/3 Cloud Networks
     • To bridge separated network address ranges using Layer 3 routing while also taking advantage of the simplicity of Layer 2 device interconnection and discovery, it is possible to implement combination networks that use Layer 3 routing to create virtual Layer 2 network connections.
     • These combination networks essentially create network bridges that can transparently route data between different subnets while allowing Layer 2 device broadcasts and service announcements to be detected by all devices across all linked subnets.

  10. Internet Protocol Version
     • The OSI model is a simplified organization of the basic layers of networking that form the Internet and other TCP/IP networks, both publicly routed (the Internet) and private (used only inside an organization).
     • The Internet is currently in transition from Internet Protocol version 4 (IPv4) to Internet Protocol version 6 (IPv6), and so are cloud service providers.
     • IPv4 addresses are 32 bits (4 bytes) long.
     • IPv6 addresses are 128 bits (16 bytes) long.
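
The size difference is easy to confirm with Python's standard ipaddress module; the two addresses below come from the reserved documentation ranges and are used only as examples.

```python
import ipaddress

v4 = ipaddress.ip_address("192.0.2.10")    # IPv4 documentation-range example
v6 = ipaddress.ip_address("2001:db8::10")  # IPv6 documentation-range example

print(v4.max_prefixlen)  # 32  -> IPv4 addresses are 32 bits long
print(v6.max_prefixlen)  # 128 -> IPv6 addresses are 128 bits long
print(2 ** 32)           # roughly 4.3 billion possible IPv4 addresses
print(2 ** 128)          # roughly 3.4e38 possible IPv6 addresses
```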

  11. IPv6 Improvements over IPv4
     • Removes broadcasting, which reduces network congestion.
     • Improved routing speed.
     • Automatically generated host identifiers that eliminate the possibility of IP address conflicts.
     • Organizations considering moving to the cloud may also want a plan for transitioning to IPv6, or for running both IPv4 and IPv6 until they are able to make the full transition.
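
One common way to run IPv4 and IPv6 side by side during the transition is a dual-stack client that tries every address a name resolves to, IPv6 and IPv4 alike. The sketch below uses Python's standard socket module; the host name in the usage line is only a placeholder.

```python
import socket

def connect_dual_stack(host, port, timeout=5.0):
    """Try every address returned for the host, IPv6 and IPv4 alike, and
    return the first socket that connects.  A rough sketch of a dual-stack
    client during an IPv4-to-IPv6 transition."""
    last_error = None
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, type=socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.settimeout(timeout)
            sock.connect(addr)
            return sock
        except OSError as exc:
            last_error = exc
    raise last_error or OSError("no usable address")

# Usage (placeholder host name):
# sock = connect_dual_stack("cloud.example.com", 443)
```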

  12. Network Challenges
     • Latency is the biggest cloud network challenge. Network latency is the amount of time it takes for data to get from one network node to another.
     • The following contribute to latency:
     • Network node count: using an inadequate number of network devices, such as switches and routers, can cause latency.
     • Number of hops: the more nodes packets traverse, the greater the potential delay.
     • A cloud network should include multiple paths between endpoints and a mechanism to leverage connectivity across as few devices as possible.
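
Latency can be observed directly by timing a small network operation. The sketch below estimates round-trip delay from how long a TCP connection takes to establish, which is only a rough proxy for node-to-node latency; the host in the usage line is a placeholder.

```python
import socket
import time

def tcp_connect_latency_ms(host, port=443, samples=5):
    """Rough latency estimate: time several TCP handshakes to the target
    and average them, reported in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # connection established; we only care about the elapsed time
        timings.append((time.perf_counter() - start) * 1000)
    return sum(timings) / len(timings)

# Usage (placeholder host name):
# print(f"{tcp_connect_latency_ms('cloud.example.com'):.1f} ms average")
```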

  13. Transport Protocol Latency
     • High-throughput networks between cloud devices may require alternative transport protocols, such as Fibre Channel or InfiniBand, which have bandwidth capabilities exceeding those of more common switched Ethernet network interconnects.
     • Cloud networks often have much in common with networks used in high-performance computing environments because of their higher level of resource utilization.

  14. Network Congestion
     • Both the number of network devices and the available bandwidth influence network congestion.
     • Modern internetworking protocols such as Ethernet operate using a Carrier Sense Multiple Access (CSMA) mechanism to share the same network medium.
     • Variants with collision detection (CSMA/CD) or collision avoidance (CSMA/CA) improve performance by detecting when multiple devices are trying to communicate at the same time and applying a random delay to each before it attempts a retransmission.
     • When too many devices are connected to the same network segment, collisions become more numerous and lead to congestion between devices.

  15. Infrastructural Changes
     • In traditional data centers, shown in Figure 6.2 (next slide), the bulk of network communication passes from local access interconnects up through aggregation devices to core high-bandwidth network paths, many of which may implement wide area network (WAN) protocols in favor of local area network (LAN) alternatives.
     • When connectivity between resources over the public Internet is required, data communication passes through a gateway bridging the core network and the Internet service provider's connection.
     • Traditional data center internetworking connections generally do not consume the full bandwidth available.
     • Cloud resource pools are shared and interoperate across many host servers, requiring a much higher degree of continuous and sustained communication at the same networking level.
     • In networks developed for cloud service interconnections, the layering of network devices is reduced and protocol separation is simplified.

  16. Reducing Congestion
     • Congestion is reduced by connecting a limited number of devices to high-speed "leaf layer" devices that handle direct switching between local devices and pass data through to even higher-bandwidth spine connections, which might involve newer 40 Gbps or even 100 Gbps links at the time of this writing.
     • Eliminating the aggregation process, and the hop count its device layering adds, reduces network latency and speeds the direct exchange of data between cloud data center devices.
     • Network broadcast isolation at the leaf layer reduces congestion, transferring the bulk of data exchange from a vertical transition across the traditional data center network to a horizontal transfer between cloud service host devices.
     • Because each leaf handles only a few racks' worth of servers, device oversubscription is eliminated and total device count capacity is greatly expanded.
     • Reducing the device count between any two points also reduces network latency.
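
How close a leaf comes to being oversubscribed is simply the ratio of server-facing bandwidth to spine-facing bandwidth. The sketch below works through that arithmetic; the port counts and speeds are hypothetical, chosen only to show the effect of moving from 40 Gbps to 100 Gbps uplinks.

```python
def leaf_oversubscription_ratio(server_ports, server_speed_gbps,
                                uplink_ports, uplink_speed_gbps):
    """Oversubscription ratio of a leaf switch in a leaf-spine fabric:
    total downlink (server-facing) bandwidth divided by total uplink
    (spine-facing) bandwidth.  1.0 or less means effectively non-blocking;
    higher values mean servers can offer more traffic than the uplinks carry."""
    downlink = server_ports * server_speed_gbps
    uplink = uplink_ports * uplink_speed_gbps
    return downlink / uplink

# Hypothetical leaf: 48 x 10 Gbps server ports, 6 x 40 Gbps spine uplinks.
print(leaf_oversubscription_ratio(48, 10, 6, 40))   # 2.0 -> 2:1 oversubscribed

# Swapping the uplinks for 100 Gbps brings the fabric close to non-blocking.
print(leaf_oversubscription_ratio(48, 10, 6, 100))  # 0.8 -> effectively non-blocking
```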

  17. Leveraging Automation and Self-Service
     • One of the essential characteristics of cloud services is self-service provisioning: virtual servers, applications, storage, and other services are provisioned by the user organization on demand.
     • Figure 6.3 (next slide) shows an example of self-service provisioning using Microsoft Azure, configuring a new Windows Server 2012 virtual machine with two CPU cores and 3.5 GB of allocated RAM.
     • Other options presented at the left of the same interface allow the provisioning of cloud services, SQL databases, data storage pools, and virtual networks within the Azure pool of resources.
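
The same self-service provisioning shown in the portal can also be driven from a script. The sketch below shells out to the Azure CLI's "az vm create" command from Python; the resource group, VM name, image alias, and size are assumptions chosen for illustration, not values taken from Figure 6.3.

```python
import subprocess

def provision_vm(resource_group, name, image, size):
    """Sketch of scripted self-service provisioning via the Azure CLI.
    All argument values passed in are hypothetical examples."""
    subprocess.run(
        [
            "az", "vm", "create",
            "--resource-group", resource_group,
            "--name", name,
            "--image", image,
            "--size", size,  # e.g. a 2-core / 3.5 GB class size such as Standard_A2
            "--admin-username", "cloudadmin",
            # A Windows image also needs --admin-password (or a prompt); omitted here.
        ],
        check=True,
    )

# Usage (hypothetical values):
# provision_vm("demo-rg", "demo-vm01", "Win2012R2Datacenter", "Standard_A2")
```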
