Running Kubernetes on OpenStack and Bare Metal (OpenStack Summit)



SLIDE 1

Running Kubernetes on OpenStack and Bare Metal

OpenStack Summit Berlin, November 2018 Ramon Acedo Rodriguez Product Manager, Red Hat OpenStack Team @ramonacedo | racedo@redhat.com

SLIDE 2

Bare Metal On-Trend

SLIDE 3

Bare Metal On-Trend

OpenStack User Survey 2017

Among users who run Kubernetes on OpenStack, adoption of Ironic is even stronger with 37% relying on it.

OpenStack User Survey 2018

SLIDE 4

Popular Use Cases

Kubernetes on Bare Metal
High-Performance Computing
Direct Access to Dedicated Hardware Devices
Big Data and Scientific Applications

blog.openshift.com/kubernetes-on-metal-with-openshift

Bare Metal On-Trend

SLIDE 5

Why Kubernetes on OpenStack

Particularly, on OpenStack Bare Metal

SLIDE 6

Why Kubernetes on OpenStack

Datacentre

WORKLOAD DRIVEN
PROGRAMMATIC
SCALE-OUT ACROSS INFRASTRUCTURE
DEEPLY INTEGRATED

kubernetes

SLIDE 7

OpenStack Bare Metal

Ironic Introduction

SLIDE 8

OpenStack Ironic

Hardware Lifecycle Management

Hardware Inspection

Servers and Network Switches (via LLDP)

OS Provisioning

Supporting qcow2 images

Routed Spine/Leaf Networking

Provision over routed networks

Multi-Tenancy

ML2 Networking Ansible plug-in

Node Auto-discovery

Broad BMC Support

Redfish, iDrac, iRMC, iLo, IPMI, oVirt, vBMC

SLIDE 9

OpenStack Ironic

Simple Architecture

Highly Available

Run multiple Ironic instances in HA

Mixed VMs and Bare Metal Instances

Simply add Nova compute nodes

SLIDE 10

Register Bare Metal Nodes

OpenStack Admin Workflow

Create Networks
Create Flavors
Upload Images
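A minimal CLI sketch of this admin workflow (the node name, BMC details, flavor sizing and image file are all illustrative placeholders, not values from the talk):

```shell
# Enroll a bare metal node with Ironic (IPMI details are placeholders)
openstack baremetal node create --name bm-node-0 \
    --driver ipmi \
    --driver-info ipmi_address=192.0.2.10 \
    --driver-info ipmi_username=admin \
    --driver-info ipmi_password=secret

# Create a tenant network, a bare metal flavor and a qcow2 deploy image
openstack network create tenant-net
openstack flavor create --ram 131072 --disk 900 --vcpus 32 baremetal
openstack image create --disk-format qcow2 --file rhel7.qcow2 rhel7
```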

SLIDE 11

OpenStack Tenant Workflow

Select Network
Select OS and Flavor
Start VM Instances
Start BM Instances
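On the tenant side, booting a bare metal instance uses the same command as a VM; the names below (image, flavor, network, key) are illustrative:

```shell
# A bare metal instance is requested like any VM: Nova schedules it
# onto an Ironic-managed node that matches the bare metal flavor
openstack server create bm-instance-0 \
    --image rhel7 \
    --flavor baremetal \
    --network tenant-net \
    --key-name mykey
```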

SLIDE 12

OpenStack Bare Metal

Ironic and OpenStack Features

SLIDE 13

OpenStack Ironic Bare Metal

Ironic Multi-Tenant with Isolation Between Tenants

Dedicated Provider Networks

Instead of a shared flat network

Provisioning Over an Isolated, Dedicated Network

Physical Switch Ports Dynamically Configured

At deployment time and on termination

Support for Neutron Port Groups and Security Groups

For Link Aggregation and switch ACLs

[Diagram: bare metal nodes attached to L2 switches; a LAG bond across two NICs is configured by cloud-init using metadata, while switch ports and VLANs are configured by the ML2 plug-in]

SLIDE 14

Multi-Tenancy

https://docs.openstack.org/ironic/latest/admin/multitenancy.html https://docs.openstack.org/ironic/latest/install/configure-tenant-networks.html

Port Groups / Bonds

https://docs.openstack.org/ironic/latest/admin/portgroups.html

Multi-tenant Bare Metal as a Service

Upstream Docs
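Following the port-group docs above, bonding two NICs of a node might be sketched like this (the node UUID and MAC addresses are placeholders):

```shell
# Create a port group (bond) on the node, then attach its two NICs
openstack baremetal port group create --node <node-uuid> --name bond0 \
    --support-standalone-ports
openstack baremetal port create aa:bb:cc:dd:ee:01 --node <node-uuid> \
    --port-group bond0
openstack baremetal port create aa:bb:cc:dd:ee:02 --node <node-uuid> \
    --port-group bond0
```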

SLIDE 15

OpenStack Ironic Bare Metal

ML2 Networking Ansible

Neutron ML2 Networking Ansible Driver

Multiple Switch Platforms in a Single ML2 Driver

Leveraging the Networking Ansible modules

New in OpenStack Rocky

Workflow:
1. ML2 plug-in configures the switch: the provisioning network is configured on the port
2. BM is provisioned
3. ML2 plug-in configures the switch: the tenant network is configured on the port
4. BM boots on the tenant network and is ready

blogs.rdoproject.org/2018/09/networking-ansible

SLIDE 16

[Diagram: L3 routed spine/leaf topology; ToR/leaf switches connect bare metal and Ironic nodes, each leaf has a DHCP relay, and L3 routed networks span the spine switches]

OpenStack Ironic Bare Metal

L3 Routed Networks (Spine/Leaf Network Topologies)

L3 Spine and Leaf Topologies

Ironic provisioning bare metal nodes over routed networks

DHCP Relay

Allowing PXE booting over L3 routed networks

SLIDE 17

OpenStack Bare Metal

Ironic Inspector

Nodes Auto-Discovery

Use Rules to Set Node Properties

E.g. set the Ironic driver (iDrac, Redfish…) based on inspection data, set BMC credentials, etc.

Just Power On the Nodes

Nodes PXE boot from the provisioning network used by Ironic

Automatic Node Inspection

Nodes boot from the network and their hardware is inspected

Automatically Registered with Ironic

After inspection they are registered with Ironic and ready to be deployed

cat > rules.json << EOF
[
  {
    "description": "Set the vendor driver for Dell hardware",
    "conditions": [
      {"op": "eq", "field": "data://auto_discovered", "value": true},
      {"op": "eq", "field": "data://inventory.system_vendor.manufacturer", "value": "Dell Inc."}
    ],
    "actions": [
      {"action": "set-attribute", "path": "driver", "value": "idrac"},
      {"action": "set-attribute", "path": "driver_info/drac_username", "value": "root"},
      {"action": "set-attribute", "path": "driver_info/drac_password", "value": "calvin"},
      {"action": "set-attribute", "path": "driver_info/drac_address", "value": "{data[inventory][bmc_address]}"}
    ]
  }
]
EOF
$ openstack baremetal introspection rule import rules.json

Data collected during inspection

E.g. use the idrac driver and its credentials if a Dell node is detected

SLIDE 18

OpenStack Bare Metal

Redfish Support in Ironic

API-driven Remote Management Platform

Manage large amounts of physical nodes via API. redfish.dmtf.org

Included in Modern BMCs

Most vendors support Redfish in the latest models

Supported in Ironic

Introduced in Pike along with the Sushy library

OpenStack Stein Additions

Out-of-band inspection of nodes, boot from virtual media (without DHCP) and BIOS configurations

$ openstack baremetal node create \
    --driver redfish \
    --driver-info redfish_address=https://example.com \
    --driver-info redfish_system_id=/redfish/v1/Systems/CX34R87 \
    --driver-info redfish_username=admin \
    --driver-info redfish_password=password
SLIDE 19

Get and Set BIOS Settings

Retrieve and apply BIOS settings via the CLI or REST API.

Settings Applied During Node Cleaning

The desired BIOS settings are applied during manual cleaning

OpenStack Bare Metal

Ironic BIOS Configuration

docs.openstack.org/ironic/latest/admin/bios.html

[
  {"name": "hyper_threading_enabled", "value": "False"},
  {"name": "cpu_vt_enabled", "value": "True"}
]
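Applying those settings through manual cleaning could be sketched as follows (assuming the node's hardware type supports the BIOS interface; the node UUID is a placeholder):

```shell
# Move the node to the manageable state, then run a BIOS cleaning step
openstack baremetal node manage <node-uuid>
openstack baremetal node clean <node-uuid> --clean-steps '[{
  "interface": "bios",
  "step": "apply_configuration",
  "args": {"settings": [
    {"name": "hyper_threading_enabled", "value": "False"},
    {"name": "cpu_vt_enabled", "value": "True"}
  ]}
}]'

# Retrieve the BIOS settings cached by Ironic
openstack baremetal node bios setting list <node-uuid>
```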

SLIDE 20

[Diagram: a central site with Ironic controllers and remote sites A, B, C and D, each with a local Ironic Conductor managing its own group of bare metal nodes]

OpenStack Bare Metal

Multi-Site

Ironic Conductor and Node Grouping Affinity

Using the conductor/node grouping affinity spec

Each Ironic Conductor Manages a Group of Nodes

No need to expose BMC access (e.g. IPMI, Redfish, iDrac, iRMC) to the central site

PXE boot or Virtual Media Provisioning

We will be able to boot nodes without DHCP (see spec Ironic L3 based deployment)
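Once the conductor grouping spec lands, pinning a site's nodes to its local conductor might look like this (the group name and node UUID are illustrative):

```shell
# ironic.conf on the conductor running at site B:
#   [conductor]
#   conductor_group = site-b

# Assign a node to that group so only site B's conductor manages it
openstack baremetal node set <node-uuid> --conductor-group site-b
```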

SLIDE 21

Kubernetes on OpenStack and Bare Metal

Deployment of Kubernetes on the metal

SLIDE 22

Kubernetes Cluster

Kubernetes on Bare Metal

Deploy Kubernetes on OpenStack Ironic-managed bare metal nodes

Kubernetes Installer

Master Node Infra Node Worker Node

Deploy Kubernetes

OpenStack with Ironic OpenStack Installer

1 2 3

Deploy OpenStack with Ironic

SLIDE 23

docs.openshift.com/container-platform/3.11/getting_started/install_openshift.html

Workflow to Install an OpenShift Cluster on Bare Metal

Kubernetes with OpenShift

Provision Bare Metal Nodes

Ironic provisions the OS image and configures the network

Add DNS Entries

Wildcard DNS for container apps and fully-qualified names for the nodes

Distribute SSH keys

Cluster nodes need passwordless SSH access to each other

Install with the OpenShift Ansible Installer

Install the openshift-ansible installer on an admin node and point it at the bare metal nodes

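For OpenShift 3.11, the last step can be sketched as follows on the admin node (the inventory path is a placeholder; the inventory must list the master, infra and worker nodes):

```shell
# Install the installer, then run the prerequisites and deploy playbooks
yum install -y openshift-ansible
ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
ansible-playbook -i /path/to/inventory \
    /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
```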

SLIDE 24

TripleO-deployed Kubernetes Cluster

OpenShift to the Rescue

SLIDE 25

[Diagram: a TripleO node integrating openshift-ansible deploys the Master, Infra and Worker nodes of a Kubernetes cluster]

Deploy an OpenShift/OKD cluster and GlusterFS on bare metal nodes

Kubernetes on Bare Metal

Provision nodes and deploy Kubernetes with Ironic in TripleO. New in Rocky!

SLIDE 26

[stack@undercloud-0 ~]$ cat /home/stack/openshift_env.yaml
[...]
OS::TripleO::OpenShiftMaster::Net::SoftwareConfig: /home/stack/master-nic.yaml
OS::TripleO::OpenShiftWorker::Net::SoftwareConfig: /home/stack/worker-nic.yaml
OS::TripleO::OpenShiftInfra::Net::SoftwareConfig: /home/stack/infra-nic.yaml
[...]
OpenShiftMasterCount: 3
OpenShiftWorkerCount: 3
OpenShiftInfraCount: 3
[...]
OpenShiftInfraParameters:
  OpenShiftGlusterDisks:
    - /dev/sdb
[...]

Kubernetes on Bare Metal

Provision nodes and deploy Kubernetes with Ironic in TripleO Create OpenShift Roles

Master, Workers and Infra nodes in TripleO

Configure the Network Settings in TripleO

E.g. Internal, External and Storage networks and the NIC configuration for each node

Set OpenShift and GlusterFS Options

E.g. Number of nodes, disk for Gluster

Deploy with TripleO

Run the usual ‘openstack overcloud deploy’ command

[stack@undercloud-0 ~]$ cat overcloud_deploy.sh
openstack overcloud deploy \
    --stack openshift \
    --templates \
    -r /home/stack/openshift_roles_data.yaml \
    -n /home/stack/network_data.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/network-isolation.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/openshift.yaml \
    -e /usr/share/openstack-tripleo-heat-templates/environments/openshift-cns.yaml \
    -e /home/stack/openshift_env.yaml \
    -e /home/stack/containers-prepare-parameter.yaml
SLIDE 27

Kubernetes and TripleO Integration

https://github.com/openstack/tripleo-heat-templates

SLIDE 28

Container Storage Options for Bare Metal

GlusterFS, Manila/CephFS, NFS

SLIDE 29

Container Storage Options for Bare Metal

GlusterFS

NFS/Manila (CephFS)

Storage Should be Highly Available

GlusterFS and CephFS provide HA

Storage Should Allow RWX Mode

ReadWriteMany (RWX) access is required by some apps. GlusterFS and CephFS are supported backends for the RWX access mode.

Local HostPath
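As an illustration, an RWX claim against a GlusterFS-backed storage class might look like this (the storage class name is hypothetical and depends on the deployment):

```shell
oc create -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany        # RWX: many nodes can mount read-write
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs-storage
EOF
```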

SLIDE 30

Container Storage Options for Bare Metal

GlusterFS

Kubernetes Cluster on Bare Metal with Converged GlusterFS Storage

[Diagram: a converged cluster of 3 Master, 3 Infra and 3 Worker nodes hosting an Infra GlusterFS cluster and an Apps GlusterFS cluster]

OpenStack Storage Not Required

We deploy with OpenStack (TripleO), but Kubernetes doesn't consume OpenStack storage

TripleO Deploys GlusterFS on Bare Metal

Optionally, we can request TripleO to deploy GlusterFS for the OpenShift cluster

GlusterFS Can Be Hosted On the Infra and Worker Nodes

The GlusterFS Cluster can be hosted in “converged” mode along with the Infra and Worker nodes

SLIDE 31

Container Storage Options for Bare Metal

Manila with CephFS/NFS

Manila Provides RWX Access

PVs can be created with ReadWriteMany (RWX) access mode

Ceph as a Single Storage Backend

Manila is backed by CephFS/NFS, allowing Ceph to serve both OpenStack and OpenShift workloads and infrastructure

Kubernetes Registry on Object Storage from Ceph

Ceph RadosGW configured with OpenStack for Object Storage can be used for the registry

Kubernetes Cluster on Bare Metal Consuming Storage from OpenStack Manila Backed by Ceph

[Diagram: bare metal Kubernetes nodes provisioned by OpenStack Ironic consuming Manila shares backed by a Ceph cluster]

SLIDE 32

Networking on Bare Metal

OpenShift Networking Architecture

SLIDE 33

Kubernetes Cluster on Bare Metal

OpenStack Cluster

Cluster Networking with Bare Metal

More info at docs.openshift.com/container-platform/3.11/architecture/networking/sdn.html

[Diagram: Master, Infra and Worker nodes and Ironic controllers connected to the Provisioning, Data and Public networks, with load balancers, VXLAN container-to-container traffic and a BMC network (IPMI/Redfish/iDrac, etc.)]

BMC Network

Ironic manages the servers via their BMC (IPMI, Redfish, iDrac, iLO, iRMC, etc.)

Provisioning Network

When deploying from Ironic, a NIC is used to DHCP/PXE-boot. This is usually a single NIC (or one NIC from a bond with LACP fallback)

Data Network

Pod-to-pod traffic goes through the data network. A 2-NIC bond is recommended

Open vSwitch and CNI

OVS carries traffic within the cluster (pod-to-pod and node-to-node) as well as ingress and egress traffic to the cluster, and it backs the Container Network Interface (CNI) plug-in used by Kubernetes

SLIDE 34

Thank You

Ramon Acedo Rodriguez Product Manager, Red Hat OpenStack Team @ramonacedo | racedo@redhat.com