

  1. Designing NFVi Architecture at the Edge: Requirements, Challenges and Solutions. Brent Roskos, Senior Principal Telco Architect, Red Hat; Jaromir Coufal, Edge Computing Product Management, Red Hat. April 29, 2019, Open Infrastructure Summit, Denver

  2. Network function virtualization infrastructure (NFVi) architecture can be complicated. During this session we'll look into:
  ● Requirements
  ● Challenges
  ● Solutions
  ● Best practices
  ● What's next
  ● Questions

  3. Edge Computing Motivation
  ● Latency: place processing power closer to the data source
  ● Bandwidth: reduce the amount of traffic that needs to travel back to the data center core
  ● Resilience: continuous operation of edge sites in the event of a link drop
  ● Regulations: meet standards and compliance requirements

  4. Edge Computing Challenges
  ● Scale: architecture requires horizontal scale
  ● Environmental: potentially inconsistent connectivity; dust, heat, and space constraints
  ● Expertise: limited to no IT expertise at remote sites
  All while controlling costs to ensure budget goals are met.

  5. Edge Tiers (diagram)

  6. Deployment Configuration Considerations
  ● Distributed nodes
  ● Standalone cluster(s)

  7. Standalone Clusters: multi-cluster deployment
  ● Each site has its own standalone deployment
  ● Complete cluster at each site (control + resource)
  Benefits:
  ● Full isolation (in case of disaster)
  ● High redundancy and availability
  ● Very low impact in case of a network drop-out
  Complications:
  ● Bigger hardware footprint (need for a control plane at each site)
  ● More complex management (versioning)

  8. Distributed Compute Nodes: single-cluster deployment
  ● Primary site has the shared control plane (and resource nodes)
  ● Remote sites have only resource nodes
  Benefits:
  ● Smaller footprint at the remote sites
  ● Faster to scale to a new location (resource scale-out)
  ● Easier operational management (single cluster, single config)
  Complications:
  ● Control plane is still a single point of failure
  ● A network drop affects management of workloads

  9. OpenStack Distributed Compute Nodes: Infra Solution / Reference Architecture. Red Hat has recently published a guide which may be used for distributing compute nodes to edge sites: Deploying Distributed Compute Nodes to Edge Sites. Red Hat OpenStack Platform 13 allows you to implement edge computing using distributed compute nodes (DCN). With this approach, you share the control plane at the primary site and deploy compute nodes to the remote locations. The OpenStack services are able to tolerate the network issues that can arise, such as connectivity and latency, among others.

  10. Red Hat OpenStack Platform 13 (Queens) with distributed compute nodes
  ● Support: fully supported by Red Hat
  ● Use cases: cross-industry
  ● Based on: OpenStack Queens
  ● Deployment tool: director (TripleO)
  ● Edge site resources: compute only
  ● Storage: local ephemeral for computes
  ● Networking: L3 routed (recommended)
  ● Max network latency: 100 ms (round-trip time)
  ● Network drop-outs: best effort (workload remains operational during core-edge network loss)
  ● Image sizing: no limit (network bandwidth impacts transfer time)


  12. Deployment & Management
  ● Deployed with director (TripleO)
  ● Undercloud at the primary site
  ● Container registry at the primary site
  ● Single deployment stack
  ● An operation at a single edge site runs through the whole stack
  (Diagram: primary site hosting the deployment stack, with Red Hat Satellite providing RPM repos, plus the undercloud and container registry; L3 routed to DCN Site 1 and DCN Site 2)
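A single-stack DCN deployment of this kind is driven by roles and Heat environment files fed to TripleO's `openstack overcloud deploy`. The fragment below is a hypothetical sketch, not taken from the Red Hat guide: the role name `DistributedCompute`, the `<RoleName>Count` / `<RoleName>Parameters` conventions, and the AZ value are illustrative assumptions.

```yaml
# Hypothetical TripleO environment-file sketch (illustrative only).
parameter_defaults:
  # Control plane and optional compute at the primary site
  ControllerCount: 3
  ComputeCount: 2
  # Custom compute-only role for a remote DCN site
  # (follows the generic <RoleName>Count convention)
  DistributedComputeCount: 2
  # Role-specific parameters: pin the remote computes to their own
  # availability zone (parameter name is an assumption)
  DistributedComputeParameters:
    NovaComputeAvailabilityZone: az1
```

Such a file would typically be passed alongside a roles file, e.g. `openstack overcloud deploy --templates -r roles_data.yaml -e dcn-env.yaml` (file names hypothetical).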

  13. Computing
  ● Sites are defined by availability zones (AZs)
  ● A VM is deployed to a specific location by scheduling to the desired AZ
  ● The primary site can optionally also contain compute nodes
  ● All DCN-site compute nodes need to use the same storage (local ephemeral)
  ● The primary site can have a specific compute role to use Cinder volumes (backed by Ceph, etc.)
  ● Each site is a specific compute role
  (Diagram: primary site with optional AZ0 compute nodes and Ceph Cluster 0; DCN Site 1 with AZ1 compute nodes using local ephemeral storage)
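The AZ-per-site placement above can be sketched in a few lines. This is a minimal illustration under assumptions: the site names and AZ labels are hypothetical, and the `openstacksdk` call in the comment shows how the zone would be passed when booting a VM.

```python
# Minimal sketch of per-site VM placement via availability zones.
# Site names and AZ labels are hypothetical examples.
SITE_TO_AZ = {
    "primary": "az0",      # optional compute at the primary site
    "dcn-site-1": "az1",
    "dcn-site-2": "az2",
}

def az_for_site(site: str) -> str:
    """Return the availability zone that pins a VM to the given edge site."""
    try:
        return SITE_TO_AZ[site]
    except KeyError:
        raise ValueError(f"unknown edge site: {site!r}")

# With openstacksdk, the zone is supplied at server-creation time, e.g.:
#   conn.compute.create_server(name="vnf-0",
#                              availability_zone=az_for_site("dcn-site-1"),
#                              ...)
print(az_for_site("dcn-site-1"))  # → az1
```

Scheduling by AZ is what keeps a workload at its intended site; without it, Nova is free to place the VM on any compute node in the single cluster.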

  14. Storage
  DCN is only available with compute services using local ephemeral storage. Edge cloud applications will need to be designed to consider data availability, locality awareness, and/or replication mechanisms. In addition, live migration will not be available.
  * Note that compute nodes at the primary site must use the same Nova ephemeral backend that the remote sites use (soon to be improved).
  (Diagram: primary site with optional AZ0 compute nodes and Ceph Cluster 0; DCN Site 1 with AZ1 compute nodes on local ephemeral storage)

  15. Networking
  ● L3 routed topology is the recommended network setup
  ● Recommendation is to use provider networks over site-specific networks (which would require a complex tunnel mesh across sites)
  ● Optimize for network performance:
    ○ SR-IOV
    ○ OVS-DPDK
    ○ OVS-Offload (tech preview; with review / support exception)
  ● Routed provider networks with IP & metadata via config-drive, or with network-provided DHCP relays to the DHCP agents on the controllers
  (Diagram: primary site L3 routed to DCN Site 1 and DCN Site 2)
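The routed-provider-network recommendation can be sketched with stock Neutron CLI commands. This is a config-style sketch of cloud-side setup, not a runnable script; the network name, VLAN IDs, physical network labels (`datacentre`, `dcn1`), and subnet range are hypothetical examples.

```shell
# One provider network spanning the primary site and a DCN site.
# Creating the network also creates its first segment (primary site).
openstack network create --share \
  --provider-network-type vlan --provider-physical-network datacentre \
  --provider-segment 100 edge-net

# Add a second segment for the L3-routed remote site
openstack network segment create --network edge-net \
  --network-type vlan --physical-network dcn1 --segment 101 edge-net-dcn1

# One subnet per segment; instances get addresses local to their site
openstack subnet create --network edge-net \
  --network-segment edge-net-dcn1 --subnet-range 192.0.2.0/24 edge-subnet-dcn1
```

With segments mapped to per-site physical networks, traffic is routed between sites at L3 instead of stretching L2 across the WAN, and config-drive (or a DHCP relay back to the controllers) covers addressing and metadata at the edge.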


  17. Ongoing Work Upstream
  ● Deploy each site as an independent stack
    ○ Better resiliency and scale
  ● Add ability to configure different storage backends per Nova instance
    ○ Deploy a Ceph cluster for the primary site while DCN sites use local ephemeral
  ● Deploy multiple Ceph clusters
    ○ Ability to dedicate a Ceph cluster per DCN site
  ● Ability to combine compute & storage on the same node at the DCN site
    ○ Hyperconverged solution
  ● Increase scale of deployment
    ○ Deploying for large sets of edge sites
  ● Distributed DHCP and metadata agents
    ○ Place agents at edge sites for enhanced availability and flexibility

  18. Where to Learn More?
  ● Join the upstream community: OpenStack Edge Computing Group
  ● Upstream gathered use cases
  ● Upstream reference architectures / models
  ● Freenode: #edge-computing-group
  ● Mailing list
  ● Meetings: regular calls in alternating slots:
    ○ Every first Thursday of the month: 0700 UTC
    ○ On other weeks, Tuesdays at 7am PDT / 1500 UTC

  19. Questions

  20. THANK YOU
