SLIDE 1 Build a Production-Ready Container Platform Based on Magnum
Bo Wang bo.wang@easystack.cn
HouMing Wang houming.wang@easystack.cn
SLIDE 2
Contents
Magnum weakness
Cluster initialization
Mapping keystone user to Harbor
How to integrate features into Magnum
Features for a production ready container platform:
  Private docker image registry
  CI/CD Tools
  Service discovery
  Monitor and Alarm
  Log collection and search
SLIDE 3
Magnum in the Mitaka (M) release
SLIDE 4
Magnum after the Newton (N) release
Container operations were removed; Magnum now acts as a container infrastructure management service.
SLIDE 5
Functions of Magnum
clustertemplate: CREATE / DELETE / UPDATE
cluster: CREATE / DELETE / UPDATE
cluster ca: SIGN / ROTATE
quota: CREATE / DELETE / UPDATE
These operations alone are really not enough to meet customers' requirements.
SLIDE 6 Use Case ---- CI/CD
CI/CD workflow diagram: a pushed commit triggers a build, the resulting image is pushed to the registry and deployed to the Test Zone; after the product launch it is deployed to the Production Zone.
SLIDE 7
Private image registry
Why a private image registry
  security: proprietary code or confidential information; vulnerability analysis
  network issues: slow networks; the Great Firewall
  internal private cloud: no access to the internet
Functions of Harbor
  private/public projects
  image isolation among projects
  user authentication
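Kubernetes nodes need credentials before they can pull from a private Harbor project. Below is a minimal sketch of the usual imagePullSecrets setup, assuming a Harbor instance at harbor.example.com, a project named team-project, and the dev/changeme credentials (all hypothetical placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: harbor-registry-cred
type: kubernetes.io/dockerconfigjson
stringData:
  # registry credentials; hostname, username and password are placeholders
  # "auth" is base64("dev:changeme")
  .dockerconfigjson: |
    {"auths": {"harbor.example.com": {"username": "dev", "password": "changeme", "auth": "ZGV2OmNoYW5nZW1l"}}}
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  imagePullSecrets:
  - name: harbor-registry-cred          # tells the kubelet which credentials to use
  containers:
  - name: app
    image: harbor.example.com/team-project/demo-app:1.0   # image in a private Harbor project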
SLIDE 8
CI/CD tools
Why continuous integration and continuous deployment tools
  Build faster, test more, fail less.
  CI/CD tools help to reduce risk and deliver reliable software in short iterations.
  CI/CD has become one of the most common use cases among early Docker adopters.
Functions of Jenkins
  dynamically created pod slaves
  pipelines
  commit triggers
  timed tasks
  lots of plugins ...
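One way to run the Jenkins master inside the cluster is as a Deployment exposed by a Service, so that slave pods and commit-trigger webhooks can reach it by name. A minimal sketch, assuming the jenkins/jenkins:lts image and the default web (8080) and agent (50000) ports; names are chosen for illustration:

apiVersion: v1
kind: Service
metadata:
  name: jenkins-master
spec:
  selector:
    app: jenkins-master
  ports:
  - name: http          # web UI / API, also used by commit-trigger webhooks
    port: 8080
  - name: agent         # JNLP port used by dynamically created slave pods
    port: 50000
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-master
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: jenkins-master
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts      # assumed image/tag
        ports:
        - containerPort: 8080
        - containerPort: 50000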
SLIDE 9
Internal DNS
example: service_A -----> service_B. Without DNS you must do it in the following order:
  create service_B
  get its clusterIP
  create service_A with that clusterIP as a parameter
Why kube-dns: the cluster IP is dynamic, but the service name is permanent.
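With kube-dns the consumer only needs the service name. A minimal sketch, assuming service_A calls service_B; the names are lower-cased to service-a/service-b because Kubernetes service names cannot contain underscores, and the images and ports are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: service-b                    # DNS name: service-b.default.svc.cluster.local
spec:
  selector:
    app: service-b
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: service-a
spec:
  containers:
  - name: service-a
    image: service-a:1.0                     # placeholder image
    env:
    - name: SERVICE_B_URL
      value: "http://service-b:8080"         # resolved by kube-dns; no clusterIP lookup needed

Because the name is resolved at request time, service-a keeps working even if service-b is recreated and gets a different clusterIP.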
SLIDE 10 Service Discovery
Accessing a service from inside the cluster with internal DNS
Diagram: pod_A_1 and pod_A_2 on Node 1 and Node 2 resolve service_B through kube_dns to its clusterIP; iptables on each node forwards the traffic to endpoint_1 and endpoint_2, i.e. pod_B_1 and pod_B_2.
SLIDE 11
Internal DNS
The Kubernetes DNS pod holds 3 containers: kubedns, dnsmasq and healthz. The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to serve DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health-check endpoint while performing dual health checks (for dnsmasq and kubedns). The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned, the kubelet passes the DNS server configured via the --cluster-dns=10.0.0.10 flag to each container.
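The static IP referenced by --cluster-dns comes from the kube-dns Service definition. A minimal sketch, using the 10.0.0.10 address from this slide:

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  selector:
    k8s-app: kube-dns      # matches the kubedns/dnsmasq/healthz pod
  clusterIP: 10.0.0.10     # static IP handed to every container via --cluster-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP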
SLIDE 12 Service Discovery
Accessing a LoadBalancer service from outside the cluster
Diagram: a cloud LB with VIP:port forwards traffic to a NodePort (nodeIP:port) on Node 1, Node 2 and Node 3; iptables on each node then routes it via the clusterIP:port to the backend pods app1_a, app1_b and app1_c (podIP:port).
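The whole chain VIP:port -> nodeIP:port -> clusterIP:port -> podIP:port is produced by a single Service of type LoadBalancer. A minimal sketch, with the app name and ports chosen for illustration:

apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  type: LoadBalancer    # asks the cloud provider for an external VIP
  selector:
    app: app1           # matches the app1_a / app1_b / app1_c pods
  ports:
  - port: 80            # clusterIP:port
    targetPort: 8080    # podIP:port
    nodePort: 30080     # nodeIP:port opened on every node (auto-allocated if omitted)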
SLIDE 13 Ingress Controller
Accessing a service from outside the cluster with an ingress controller
Diagram: requests to service_url:port hit the Ingress Controller running on Node 1, Node 2 and Node 3, which forwards them directly to podIP:port of app1_a, app1_b and app1_c.
SLIDE 14
Ingress
An ingress is a Kubernetes resource which maps a URL path to a service:port. Example: a Service my-wordpress-svc with 2 pods, and an Ingress resource pointing to that service.
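A minimal sketch of such an Ingress resource for the my-wordpress-svc example; the API version matches pre-1.9 Kubernetes, and the host and port are assumptions:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-wordpress-ingress
spec:
  rules:
  - host: blog.example.com              # hypothetical service_url
    http:
      paths:
      - path: /
        backend:
          serviceName: my-wordpress-svc # the service with 2 pods behind it
          servicePort: 80               # assumed service port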
SLIDE 15
Ingress Controller
The ingress controller is a reverse proxy that forwards URLs to endpoints:
  watch the Kubernetes API for any Ingress resource change
  update the configuration of the controller (create/update/delete endpoints, SSL certificates)
  reload the configuration
The Ingress Controller detects ingress resources and fetches all the endpoints of the service:
  does not occupy ports on the nodes
  supports TLS access to services
  configurable load-balancing strategy
SLIDE 16
Ingress Controller
nginx configuration
SLIDE 17
Monitor and Alarm
Diagram: cAdvisor and node-exporter run on node1, node2 and node3; Prometheus scrapes their metrics and Alertmanager turns firing alerts into alarm events.
SLIDE 18
Monitor and Alarm
cAdvisor collects container runtime info; node-exporter collects node runtime info; Prometheus scrapes this info from each node.
metrics:
  node cpu usage
  node memory usage
  node filesystem usage
  node network I/O rates
  container cpu usage
  container memory usage
  container network I/O rates
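A minimal prometheus.yml sketch for scraping node-exporter and cAdvisor, assuming static node addresses (placeholders below) and the default ports, 9100 for node-exporter and 4194 for the kubelet's cAdvisor endpoint; a real deployment would normally use Kubernetes service discovery instead of static targets:

scrape_configs:
- job_name: node-exporter
  static_configs:
  - targets:                 # placeholder node addresses
    - "10.0.0.11:9100"
    - "10.0.0.12:9100"
    - "10.0.0.13:9100"
- job_name: cadvisor
  static_configs:
  - targets:
    - "10.0.0.11:4194"
    - "10.0.0.12:4194"
    - "10.0.0.13:4194"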
SLIDE 19
EFK
Diagram: fluentd runs on node1, node2 and node3 and ships logs to Elasticsearch; Kibana provides search and visualization.
SLIDE 20 Cluster Architecture
Architecture diagram: the Kubernetes cluster (master and slave nodes) runs EFK, Kube-DNS, the ingress controller, Prometheus, and a Jenkins master with its slaves, connected to both a public and a private network; Harbor is deployed on a separate VM and accessed by Keystone users (admin and regular users).
SLIDE 21
Cluster initialization
Share one Harbor with multiple clusters so that public projects are shared with all users. Use Heat to create a Nova instance and configure it to run Harbor.
For each cluster:
  the Jenkins master runs as a service; Jenkins slaves run as pods that are created/deleted dynamically
  kube-dns runs as a service with three containers running in the backend
  the ingress controller runs as a replication controller with one (or more) replicas and a default-backend service
  node-exporter runs as a daemon set on each node (see the sketch below)
  Prometheus runs as a service
  fluentd runs as a daemon set on each node
  elasticsearch runs as a service
  kibana runs as a service
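A minimal sketch of the node-exporter daemon set mentioned above, assuming the prom/node-exporter image (version tag is an assumption) and host networking so Prometheus can scrape every node on port 9100:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: node-exporter
    spec:
      hostNetwork: true                     # expose node metrics on the node's own IP
      containers:
      - name: node-exporter
        image: prom/node-exporter:v0.13.0   # assumed version
        ports:
        - containerPort: 9100
          hostPort: 9100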
SLIDE 22
Mapping keystone user to harbor
Diagram: the Dashboard and Magnum API create the corresponding user/project in Harbor; one Keystone project maps to one Harbor user and one Harbor project; Keystone users then push and pull images.
SLIDE 23