Running Kubernetes on OpenStack and Bare Metal (OpenStack Summit Berlin, November 2018)

  1. Running Kubernetes on OpenStack and Bare Metal. OpenStack Summit Berlin, November 2018. Ramon Acedo Rodriguez, Product Manager, Red Hat OpenStack Team. @ramonacedo | racedo@redhat.com

  2. Bare Metal On-Trend

  3. Bare Metal On-Trend. Among users who run Kubernetes on OpenStack, adoption of Ironic is even stronger, with 37% relying on it. (Charts: OpenStack User Survey 2017 and 2018.)

  4. Bare Metal On-Trend (blog.openshift.com/kubernetes-on-metal-with-openshift). Popular use cases for Kubernetes on bare metal: high-performance computing, direct access to dedicated hardware devices, big data and scientific applications.

  5. Why Kubernetes on OpenStack, and particularly on OpenStack Bare Metal

  6. Why Kubernetes on OpenStack: workload-driven, deeply integrated, programmatic infrastructure that scales out across the datacentre.

  7. OpenStack Bare Metal (Ironic): Introduction

  8. OpenStack Ironic hardware lifecycle management: hardware inspection of servers and network switches (via LLDP); OS provisioning supporting qcow2 images; routed spine/leaf networking (provisioning over routed networks); multi-tenancy via the ML2 networking-ansible plug-in; node auto-discovery; broad BMC support (Redfish, iDRAC, iRMC, iLO, IPMI, oVirt, vBMC).

  9. OpenStack Ironic: simple architecture; highly available (run multiple Ironic instances in HA); mixed VM and bare metal instances (simply add Nova compute nodes).

  10. OpenStack Admin Workflow: register bare metal nodes, create networks, create flavors, upload images.
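
  As an illustration of this admin workflow, here is a minimal sketch using the standard openstack CLI. Node names, MAC addresses, BMC credentials, network ranges and image/flavor names are placeholders, and the driver and resource-class values depend on the environment.

      # Register a bare metal node with Ironic (placeholder IPMI credentials)
      $ openstack baremetal node create --name bm-node-01 \
          --driver ipmi \
          --driver-info ipmi_address=10.0.0.10 \
          --driver-info ipmi_username=admin \
          --driver-info ipmi_password=secret \
          --resource-class baremetal
      $ openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid>

      # Create the bare metal network
      $ openstack network create baremetal-net
      $ openstack subnet create --network baremetal-net --subnet-range 192.168.100.0/24 baremetal-subnet

      # Create a flavor that schedules onto bare metal via the custom resource class
      $ openstack flavor create --ram 131072 --disk 900 --vcpus 32 baremetal
      $ openstack flavor set baremetal --property resources:CUSTOM_BAREMETAL=1

      # Upload a whole-disk qcow2 image
      $ openstack image create --disk-format qcow2 --file rhel7.qcow2 rhel7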

  11. OpenStack Tenant Workflow: select OS image and flavor, select network, start VM instances, start bare metal instances.
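
  A tenant-side sketch, reusing the example flavor, image and network names from the admin sketch above; whether the instance lands on a VM or on bare metal is decided by the flavor selected.

      # Boot a VM instance with a virtual flavor
      $ openstack server create --flavor m1.large --image rhel7 --network tenant-net vm-01

      # Boot a bare metal instance with the bare metal flavor; same API, different flavor
      $ openstack server create --flavor baremetal --image rhel7 --network baremetal-net --key-name mykey bm-01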

  12. OpenStack Bare Metal: Ironic and OpenStack Features

  13. OpenStack Ironic Bare Metal: multi-tenant with isolation between tenants. Dedicated provider networks instead of a shared flat network, with VLANs set by the ML2 plug-in. Provisioning happens over an isolated, dedicated network. Physical switch ports are dynamically configured by the ML2 plug-in at deployment time and on termination. LAG support for Neutron port groups and security groups, for link aggregation and switch ACLs; the bond is configured by cloud-init using the bare metal instance metadata. (Diagram: bare metal nodes with bonded NICs attached to L2 switches configured by the ML2 plug-in.)
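
  For the switch ports to be configured dynamically, Ironic needs to know how each NIC is cabled. A hedged sketch, with placeholder switch and port identifiers, of enabling the neutron network interface on a node and recording the physical switch connection on its port:

      $ openstack baremetal node set <node-uuid> --network-interface neutron
      $ openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid> \
          --local-link-connection switch_id=00:1e:14:aa:bb:cc \
          --local-link-connection switch_info=leaf-switch-01 \
          --local-link-connection port_id=xe-0/0/7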

  14. Multi-Tenant Bare Metal as a Service, upstream docs. Multi-tenancy: https://docs.openstack.org/ironic/latest/admin/multitenancy.html and https://docs.openstack.org/ironic/latest/install/configure-tenant-networks.html. Port groups / bonds: https://docs.openstack.org/ironic/latest/admin/portgroups.html
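
  Port groups (bonds) are modelled explicitly in Ironic, as described in the port group documentation above. An illustrative sketch with placeholder MAC addresses and a hypothetical bond name:

      $ openstack baremetal port group create --node <node-uuid> --name bond0 \
          --address 52:54:00:aa:bb:cc --mode 802.3ad
      $ openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid> --port-group bond0
      $ openstack baremetal port create 52:54:00:dd:ee:ff --node <node-uuid> --port-group bond0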

  15. OpenStack Ironic Bare Metal: ML2 networking-ansible (blogs.rdoproject.org/2018/09/networking-ansible). The Neutron ML2 networking-ansible driver supports multiple switch platforms in a single ML2 driver by leveraging the Ansible networking modules. New in OpenStack Rocky. Workflow: the bare metal node is provisioned and the ML2 plug-in configures the provisioning network in the switch; the node then boots on the tenant network and the ML2 plug-in configures the tenant network in the switch; the bare metal node is ready.
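
  A rough sketch of what the Neutron side can look like with the networking-ansible ML2 driver, assuming a Junos leaf switch. The section and option names follow the networking-ansible documentation around the Rocky release and should be verified against the current docs; switch names, addresses and credentials are placeholders.

      # Relevant fragment of /etc/neutron/plugins/ml2/ml2_conf.ini
      [ml2]
      mechanism_drivers = openvswitch,baremetal,ansible

      # One section per managed switch (credentials are placeholders)
      [ansible:leaf-switch-01]
      ansible_network_os = junos
      ansible_host = 10.0.0.250
      ansible_user = admin
      ansible_ssh_pass = password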

  16. OpenStack Ironic Bare Metal: L3 routed networks (spine/leaf network topologies). In L3 spine-and-leaf topologies, Ironic provisions bare metal nodes over routed networks, with DHCP relay allowing PXE booting across the L3 boundaries. (Diagram: spine switches connected to ToR/leaf switches, each leaf with a DHCP relay, Ironic nodes and bare metal nodes.)
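
  On the Neutron side, routed provider networks give each leaf its own segment and subnet, which is what lets per-rack provisioning and DHCP relay work. A hedged sketch with placeholder physical networks, VLAN IDs and address ranges:

      # Provisioning network with one segment per leaf
      $ openstack network create --share --provider-physical-network leaf0 \
          --provider-network-type vlan --provider-segment 100 provisioning
      $ openstack network segment create --physical-network leaf1 \
          --network-type vlan --segment 100 --network provisioning segment-leaf1

      # One subnet per segment (list segment UUIDs with: openstack network segment list --network provisioning)
      $ openstack subnet create --network provisioning --network-segment <segment-uuid> \
          --subnet-range 192.168.10.0/24 provisioning-leaf0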

  17. OpenStack Bare Metal Ironic Inspector: node auto-discovery. Use rules to set node properties from the data collected during inspection, e.g. set the Ironic driver (iDRAC, Redfish...) and BMC credentials based on inspection data; for example, use the idrac driver and its credentials if a Dell node is detected. Just power on the nodes: they PXE boot from the provisioning network used by Ironic, their hardware is inspected automatically, and after inspection they are registered with Ironic and ready to be deployed. Example rule:

      cat > rules.json << EOF
      [
        {
          "description": "Set the vendor driver for Dell hardware",
          "conditions": [
            {"op": "eq", "field": "data://auto_discovered", "value": true},
            {"op": "eq", "field": "data://inventory.system_vendor.manufacturer", "value": "Dell Inc."}
          ],
          "actions": [
            {"action": "set-attribute", "path": "driver", "value": "idrac"},
            {"action": "set-attribute", "path": "driver_info/drac_username", "value": "root"},
            {"action": "set-attribute", "path": "driver_info/drac_password", "value": "calvin"},
            {"action": "set-attribute", "path": "driver_info/drac_address", "value": "{data[inventory][bmc_address]}"}
          ]
        }
      ]
      EOF

      $ openstack baremetal introspection rule import rules.json

  18. OpenStack Bare Metal: Redfish support in Ironic. API-driven remote management platform: manage large numbers of physical nodes via API (redfish.dmtf.org). Included in modern BMCs: most vendors support Redfish in their latest models. Supported in Ironic: introduced in Pike along with the Sushy library. OpenStack Stein additions: out-of-band inspection of nodes, boot from virtual media (without DHCP) and BIOS configuration. Example enrollment:

      openstack baremetal node create \
        --driver redfish \
        --driver-info redfish_address=https://example.com \
        --driver-info redfish_system_id=/redfish/v1/Systems/CX34R87 \
        --driver-info redfish_username=admin \
        --driver-info redfish_password=password
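
  Once a node is enrolled, out-of-band inspection and making it available follow the usual Ironic state transitions; a brief sketch (the node name is a placeholder):

      $ openstack baremetal node manage bm-node-01
      $ openstack baremetal node inspect bm-node-01    # out-of-band inspection via the BMC
      $ openstack baremetal node provide bm-node-01    # node becomes available for scheduling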

  19. OpenStack Bare Metal: Ironic BIOS configuration (docs.openstack.org/ironic/latest/admin/bios.html). Get and set BIOS settings: retrieve and apply BIOS settings via the CLI or REST API. The desired BIOS settings are applied during manual node cleaning. Example settings:

      [
        {
          "name": "hyper_threading_enabled",
          "value": "False"
        },
        {
          "name": "cpu_vt_enabled",
          "value": "True"
        }
      ]
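
  A sketch of applying those settings through manual cleaning, following the BIOS admin guide linked above (the node name is a placeholder, and the node must be in the manageable state):

      $ openstack baremetal node clean bm-node-01 --clean-steps '[{
          "interface": "bios",
          "step": "apply_configuration",
          "args": {"settings": [
            {"name": "hyper_threading_enabled", "value": "False"},
            {"name": "cpu_vt_enabled", "value": "True"}
          ]}
        }]'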

  20. OpenStack Bare Metal Multi-Site. Ironic conductor and node grouping affinity (using the conductor/node grouping affinity spec): each Ironic conductor manages a group of nodes, so there is no need to expose BMC access (e.g. IPMI, Redfish, iDRAC, iRMC) from remote sites to the central site. Provisioning is done via PXE boot or virtual media; we will be able to boot nodes without DHCP (see the Ironic L3-based deployment spec). (Diagram: a central site with Ironic controllers and remote sites A-D, each with a local Ironic conductor managing its own bare metal nodes.)
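
  Conductor groups (the conductor/node affinity mentioned above) are expressed as a configuration option on each remote conductor and an attribute on each node; a hedged sketch with placeholder names:

      # In ironic.conf on the Site A conductor:
      #   [conductor]
      #   conductor_group = site-a

      # Pin the nodes physically located in Site A to that conductor group
      $ openstack baremetal node set bm-node-01 --conductor-group site-a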

  21. Kubernetes on OpenStack and Bare Metal: deployment of Kubernetes on the metal

  22. Kubernetes on Bare Metal: deploy Kubernetes on OpenStack Ironic-managed bare metal nodes. (1) Deploy OpenStack with Ironic using an OpenStack installer; (2) provision the bare metal nodes with Ironic; (3) deploy the Kubernetes cluster (master, infra and worker nodes) with a Kubernetes installer.

  23. Kubernetes with OpenShift: workflow to install an OpenShift cluster on bare metal (docs.openshift.com/container-platform/3.11/getting_started/install_openshift.html). Provision the bare metal nodes: Ironic provisions the OS image and configures the network. Add DNS entries: wildcard DNS for container apps and fully-qualified names for the nodes. Distribute SSH keys: cluster nodes need passwordless access to each other. Installation: install the openshift-ansible installer on an admin node and point it at the bare metal nodes.
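
  A minimal sketch of the installation step: an openshift-ansible inventory for an OpenShift/OKD 3.11 cluster on the provisioned bare metal nodes, run from an openshift-ansible checkout on the admin node. Hostnames, the apps wildcard domain and the deployment type are placeholders.

      $ cat > inventory << EOF
      [OSEv3:children]
      masters
      etcd
      nodes

      [OSEv3:vars]
      ansible_user=root
      openshift_deployment_type=origin
      openshift_master_default_subdomain=apps.example.com

      [masters]
      master0.example.com

      [etcd]
      master0.example.com

      [nodes]
      master0.example.com openshift_node_group_name='node-config-master'
      infra0.example.com  openshift_node_group_name='node-config-infra'
      worker0.example.com openshift_node_group_name='node-config-compute'
      EOF

      $ ansible-playbook -i inventory playbooks/prerequisites.yml
      $ ansible-playbook -i inventory playbooks/deploy_cluster.yml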

  24. TripleO-Deployed Kubernetes Cluster: OpenShift to the Rescue

  25. Kubernetes on Bare Metal: provision nodes and deploy Kubernetes with Ironic in TripleO (new in Rocky!). Deploy an OpenShift/OKD Kubernetes cluster and GlusterFS on bare metal nodes (master, infra and worker nodes); TripleO integrates openshift-ansible.
