Build Production Ready Container Platform Based on Magnum and Kubernetes


SLIDE 1

Build Production Ready Container Platform Based on Magnum and Kubernetes

Bo Wang bo.wang@easystack.cn
 HouMing Wang houming.wang@easystack.cn

SLIDE 2

Contents

• Magnum weakness
• Cluster initialization
• Mapping keystone user to Harbor
• How to integrate features into Magnum
• Features for production ready container platform
• Private docker image registry
• CI/CD Tools
• Service discovery
• Monitor and Alarm
• Log collection and search

SLIDE 3

Magnum in M release

SLIDE 4

Magnum after N release

Container operations were removed; Magnum now acts as a container infrastructure management service.

SLIDE 5

Functions of Magnum

• clustertemplate: CREATE / DELETE / UPDATE
• cluster: CREATE / DELETE / UPDATE
• cluster ca: SIGN / ROTATE
• quota: CREATE / DELETE / UPDATE

This is really not enough to meet customers' requirements.
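For illustration, a minimal sketch of driving these operations from Python with python-magnumclient; the Keystone credentials, image, flavors, and resource names below are placeholders, not values from the talk.

```python
# Sketch: Magnum clustertemplate/cluster lifecycle via python-magnumclient.
# All credentials and resource names are illustrative placeholders.
from keystoneauth1 import loading, session
from magnumclient.client import Client

loader = loading.get_plugin_loader('password')
auth = loader.load_from_options(
    auth_url='http://keystone:5000/v3',
    username='demo', password='secret', project_name='demo',
    user_domain_name='Default', project_domain_name='Default')
magnum = Client('1', session=session.Session(auth=auth))

# clustertemplate CREATE
template = magnum.cluster_templates.create(
    name='k8s-template', coe='kubernetes',
    image_id='fedora-atomic', keypair_id='default',
    external_network_id='public',
    flavor_id='m1.small', master_flavor_id='m1.small')

# cluster CREATE (and later DELETE)
cluster = magnum.clusters.create(
    name='prod-cluster', cluster_template_id=template.uuid,
    master_count=1, node_count=3)
# magnum.clusters.delete(cluster.uuid)
```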

SLIDE 6

Use Case ---- CI/CD

Diagram: a developer pushes a commit, which triggers the CI pipeline; the built image is pushed to the registry and deployed to the Test Zone, then to the Production Zone for the product launch.

SLIDE 7

Private image registry

Why a private image registry?
• security: proprietary code or confidential information, vulnerability analysis
• network issues: slow networks; the Great Firewall
• internal private cloud: no access to the internet

Functions of Harbor:
• private/public projects
• image isolation among projects
• user authentication

SLIDE 8

CI/CD tools

Why continuous integration and continuous deployment tools?
• Build faster, test more, fail less.
• CI/CD tools help to reduce risk and deliver reliable software in short iterations.
• CI/CD has become one of the most common use cases among Docker early adopters.

Functions of Jenkins:
• dynamically created pod slaves
• pipelines
• commit triggers
• timed tasks
• lots of plugins
• ...
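As a small illustration, a hedged sketch of triggering such a pipeline from a commit hook with the python-jenkins library; the Jenkins URL, credentials, job name, and parameters are assumptions.

```python
# Sketch: kick off a Jenkins pipeline from a git commit hook.
# Jenkins URL, credentials, job name and parameters are assumptions.
import jenkins

server = jenkins.Jenkins('http://jenkins-master:8080',
                         username='ci', password='api-token')

def on_commit(repo, branch, commit_id):
    """Called by the repository webhook: start the build/test pipeline."""
    server.build_job('app-pipeline', parameters={
        'REPO': repo, 'BRANCH': branch, 'COMMIT': commit_id})

on_commit('myapp', 'master', 'abc123')
```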

SLIDE 9

Internal DNS

Example: service_A ----> service_B. Without DNS you must do it in the following order:
1. create service_A
2. get the clusterIP
3. create service_B with the clusterIP as a parameter

Why kube-dns? The cluster IP is dynamic, while the service name is permanent.
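A minimal sketch of both approaches with the official kubernetes Python client, assuming services named service-a and service-b in the default namespace.

```python
# Sketch: dynamic clusterIP lookup vs. the permanent DNS name.
# Service and namespace names are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()            # or load_incluster_config() in a pod
v1 = client.CoreV1Api()

# Without DNS: read service_A's clusterIP and pass it to service_B's config.
svc_a = v1.read_namespaced_service(name='service-a', namespace='default')
cluster_ip = svc_a.spec.cluster_ip   # changes whenever the Service is recreated

# With kube-dns: service_B simply connects to the permanent name
# 'service-a.default.svc.cluster.local' and never needs the IP above.
```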

SLIDE 10

Service Discovery

Accessing a service inside the cluster with the internal DNS.

Diagram: pods of service_A and service_B (pod_A_1, pod_B_1 on Node 1; pod_A_2, pod_B_2 on Node 2) resolve service_B through kube-dns to its clusterIP; the iptables rules on each node forward clusterIP traffic to the service endpoints (endpoint_1, endpoint_2).

SLIDE 11

Internal DNS

The Kubernetes DNS pod holds 3 containers: kubedns, dnsmasq, and healthz. The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health-check endpoint while performing dual health checks (for dnsmasq and kubedns). The DNS pod is exposed as a Kubernetes Service with a static IP; once assigned, the kubelet passes that address to each container via the --cluster-dns=10.0.0.10 flag.

SLIDE 12

Service Discovery

Accessing a LoadBalancer service from outside the cluster.

Diagram: external traffic reaches the cloud LB at VIP:port, which forwards to a NodePort (nodeIP:port) on Node 1, Node 2, or Node 3; iptables on each node then translates through clusterIP:port to one of the app1 pods (app1_a, app1_b, app1_c) at podIP:port.
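A minimal sketch of exposing such pods through a NodePort Service with the kubernetes Python client; the app=app1 label selector and the port numbers are assumptions.

```python
# Sketch: NodePort Service so the cloud LB can target nodeIP:port.
# The app=app1 label selector and port numbers are assumptions.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name='app1'),
    spec=client.V1ServiceSpec(
        type='NodePort',
        selector={'app': 'app1'},
        ports=[client.V1ServicePort(port=80,           # clusterIP:port
                                    target_port=8080,  # podIP:port
                                    node_port=30080)]))  # nodeIP:port
v1.create_namespaced_service(namespace='default', body=service)
```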

SLIDE 13

Ingress Controller

Accessing a service from outside the cluster with an ingress controller.

Diagram: requests for service_url:port reach an Ingress Controller running on Node 1, Node 2, and Node 3, which proxies them to the app1 pods (app1_a, app1_b, app1_c) at podIP:port.

SLIDE 14

Ingress

An ingress is a Kubernetes resource which maps a URL path to a service:port. Example: an Ingress resource pointing to the Service my-wordpress-svc, which is backed by 2 pods.
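As an illustration, a sketch of such an Ingress created with the kubernetes Python client; it uses the current networking.k8s.io/v1 API (newer than the API available when this talk was given), and the hostname and path are assumptions.

```python
# Sketch: an Ingress mapping '/' on blog.example.com to my-wordpress-svc:80.
# Written against networking.k8s.io/v1; host and path are assumptions.
from kubernetes import client, config

config.load_kube_config()
networking = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name='wordpress'),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host='blog.example.com',
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path='/', path_type='Prefix',
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name='my-wordpress-svc',
                            port=client.V1ServiceBackendPort(number=80))))]))]))
networking.create_namespaced_ingress(namespace='default', body=ingress)
```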

SLIDE 15

Ingress Controller

An ingress controller is a reverse proxy that forwards URLs to endpoints (see the watch-loop sketch below):
• Watch the Kubernetes API for any Ingress resource change.
• Update the configuration of the controller (create/update/delete endpoints, SSL certificates).
• Reload the configuration.

The Ingress Controller detects ingress resources and fetches all the endpoints of the service:
• does not occupy ports on nodes
• supports TLS access to the service
• configurable load-balancing strategy
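A hedged sketch of that watch loop with the kubernetes Python client; regenerate_config and reload_proxy are hypothetical helpers standing in for the controller's real configuration and reload steps.

```python
# Sketch: react to Ingress changes, then regenerate and reload the proxy
# configuration. regenerate_config/reload_proxy are hypothetical helpers.
from kubernetes import client, config, watch

config.load_kube_config()
networking = client.NetworkingV1Api()

w = watch.Watch()
for event in w.stream(networking.list_ingress_for_all_namespaces):
    ingress = event['object']          # the Ingress that was added/modified/deleted
    print(event['type'], ingress.metadata.namespace, ingress.metadata.name)
    # regenerate_config(ingress)       # create/update/delete endpoints, SSL certs
    # reload_proxy()                   # reload the nginx configuration
```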

SLIDE 16

Ingress Controller

nginx configuration generated by the ingress controller

SLIDE 17

Monitor and Alarm

Diagram: cAdvisor and node-exporter run on node1, node2, and node3; Prometheus scrapes their metrics, and Alertmanager turns firing alert rules into alarm events.

SLIDE 18

Monitor and Alarm

• cAdvisor collects container runtime info.
• node-exporter collects node runtime info.
• Prometheus scrapes the info from each node.

Metrics:
• node CPU usage
• node memory usage
• node filesystem usage
• node network I/O rates
• container CPU usage
• container memory usage
• container network I/O rates
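For illustration, a sketch of pulling two of these metrics through Prometheus' HTTP query API; the Prometheus URL is an assumption, and the metric/label names follow recent node-exporter and cAdvisor releases (older versions used slightly different names).

```python
# Sketch: query node and container CPU usage from Prometheus' /api/v1/query.
# The Prometheus URL is an assumption; metric names follow recent exporters.
import requests

PROM = 'http://prometheus:9090'

def query(promql):
    resp = requests.get(PROM + '/api/v1/query', params={'query': promql})
    resp.raise_for_status()
    return resp.json()['data']['result']

# node CPU usage: fraction of non-idle CPU per node, averaged over 5 minutes
node_cpu = query('1 - avg by (instance) '
                 '(rate(node_cpu_seconds_total{mode="idle"}[5m]))')

# container CPU usage: CPU seconds per second, summed per pod
container_cpu = query('sum by (pod) '
                      '(rate(container_cpu_usage_seconds_total[5m]))')
```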

SLIDE 19

EFK

Diagram: a fluentd agent on node1, node2, and node3 ships container logs to Elasticsearch; Kibana provides search and visualization on top of it.

SLIDE 20

Cluster Architecture

Diagram: the Kubernetes cluster (Master and Slave nodes) hosts EFK, Kube-DNS, the Ingress controller, Prometheus, and a Jenkins Master with dynamically created slaves, connected to the public and private networks; Harbor is deployed on a separate VM and shared by keystone users through the harbor admin account.

SLIDE 21

Cluster initialization

Share one Harbor with multiple clusters to share public projects with all users. Use Heat to create a Nova instance and configure it to run Harbor.

For each cluster:
• The Jenkins master runs as a service; Jenkins slaves run as pods that are dynamically created/deleted.
• kube-dns runs as a service with three containers running in the backend.
• The ingress controller runs as a replication controller with one (or more) replicas and a default-backend service.
• node-exporter runs as a daemon set on each node (see the sketch below).
• Prometheus runs as a service.
• fluentd runs as a daemon set on each node.
• elasticsearch runs as a service.
• kibana runs as a service.
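A minimal sketch, assuming kubeconfig access to the new cluster, of creating one of those per-node agents (node-exporter) as a DaemonSet with the kubernetes Python client; the image tag, namespace, and port are assumptions.

```python
# Sketch: node-exporter as a DaemonSet (one agent pod per node).
# Image tag, namespace and port are assumptions.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

daemonset = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name='node-exporter'),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels={'app': 'node-exporter'}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={'app': 'node-exporter'}),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name='node-exporter',
                    image='prom/node-exporter:latest',
                    ports=[client.V1ContainerPort(container_port=9100)])]))))
apps.create_namespaced_daemon_set(namespace='kube-system', body=daemonset)
```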

SLIDE 22

Mapping keystone user to harbor

Diagram: the keystone user works through the Dashboard and Magnum-api, which create the matching user/project in Harbor; the user can then push images to, and get images from, the registry.

• One keystone project ----> one Harbor user
• One keystone project ----> one Harbor project
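A hedged sketch of that mapping against Harbor's REST API; the endpoint paths follow the Harbor 1.x API, and the URL, admin credentials, and field names are assumptions that may differ between Harbor versions.

```python
# Sketch: for each keystone project, create a matching Harbor user and a
# private Harbor project. Paths follow Harbor 1.x; credentials are assumptions.
import requests

HARBOR = 'https://harbor.example.com/api'
ADMIN = ('admin', 'harbor-admin-password')

def sync_keystone_project(project_name, password):
    # one keystone project ----> one harbor user
    requests.post(HARBOR + '/users', auth=ADMIN, json={
        'username': project_name,
        'email': project_name + '@example.com',
        'realname': project_name,
        'password': password})
    # one keystone project ----> one harbor (private) project
    requests.post(HARBOR + '/projects', auth=ADMIN, json={
        'project_name': project_name,
        'metadata': {'public': 'false'}})

sync_keystone_project('demo', 'generated-password')
```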
