  1. Build Production Ready Container Platform Based on Magnum and Kubernetes
     Bo Wang bo.wang@easystack.cn, HouMing Wang houming.wang@easystack.cn

  2. Contents
     Magnum weakness
     Features for a production ready container platform: private Docker image registry, CI/CD tools, service discovery, monitor and alarm, log collection and search
     How to integrate the features into Magnum: cluster initialization, mapping Keystone users to Harbor

  3. Magnum in the Mitaka (M) release

  4. Magnum after the Newton (N) release: container operations were removed; Magnum acts as a container infrastructure management service

  5. Functions of Magnum
     clustertemplate: CREATE/DELETE/UPDATE
     cluster: CREATE/DELETE/UPDATE
     cluster ca: SIGN/ROTATE
     quota: CREATE/DELETE/UPDATE
     This is really not enough to meet customers' requirements.
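     To make that API surface concrete, here is a minimal sketch that drives the cluster lifecycle through Magnum's REST API with plain requests; the endpoint URL, token and request bodies are illustrative placeholders, not values from the talk.

# Minimal sketch against the Magnum REST API (v1); endpoint, token and
# body fields are illustrative placeholders, not exact schemas.
import requests

MAGNUM_URL = "http://controller:9511/v1"          # assumed Magnum endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>",    # placeholder Keystone token
           "Content-Type": "application/json"}

# cluster CREATE from an existing cluster template
body = {"name": "k8s-cluster",
        "cluster_template_id": "<template-uuid>", # placeholder
        "master_count": 1,
        "node_count": 2}
requests.post(f"{MAGNUM_URL}/clusters", json=body, headers=HEADERS)

# cluster LIST and DELETE
print(requests.get(f"{MAGNUM_URL}/clusters", headers=HEADERS).json())
requests.delete(f"{MAGNUM_URL}/clusters/k8s-cluster", headers=HEADERS)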

  6. Use case: CI/CD
     Test zone: push commit -> trigger -> deploy -> push image
     Production zone: deploy product -> launch

  7. Private image registry
     Why a private image registry?
       security: proprietary code or confidential information, vulnerability analysis
       network issues: slow networks, the Great Firewall
       internal private cloud: no access to the internet
     Functions of Harbor: private/public projects, image isolation among projects, user authentication

  8. CI/CD tools
     Why continuous integration and continuous deployment tools? Build faster, test more, fail less. CI/CD tools help to reduce risk and deliver reliable software in short iterations. CI/CD has become one of the most common use cases among Docker early adopters.
     Functions of Jenkins: dynamically created pod slaves, pipelines, commit triggers, timed tasks, lots of plugins ...
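     As a hedged illustration of the commit-trigger item, the sketch below calls Jenkins' remote build API when a commit lands; the Jenkins URL, job name, build parameter and credentials are assumptions.

# Sketch: trigger a Jenkins pipeline build remotely after a commit.
# JENKINS_URL, JOB, the parameter and the credentials are placeholders.
import requests

JENKINS_URL = "http://jenkins.example.com:8080"   # assumed Jenkins master
JOB = "build-and-push-image"                      # hypothetical job name
AUTH = ("ci-user", "<api-token>")                 # Jenkins user + API token

# Jenkins remote access API: POST /job/<name>/buildWithParameters
resp = requests.post(
    f"{JENKINS_URL}/job/{JOB}/buildWithParameters",
    params={"GIT_COMMIT": "abc123"},              # hypothetical build parameter
    auth=AUTH,
)
resp.raise_for_status()                           # success means the build was queued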

  9. Internal DNS
     Example: service_A -----> service_B. Without DNS you must do it in the following order: create service_A, get the clusterIP, then create service_B with the clusterIP as a parameter.
     Why kube-dns? The cluster IP is dynamic, while the service name is permanent.
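     To make this concrete: with kube-dns in place, a pod can reach the other service by name alone and never needs the clusterIP passed in. A minimal sketch, assuming a service named "service-b" in the "default" namespace (DNS labels cannot contain underscores, so the name is adjusted):

# Sketch: resolve and call a service by its DNS name from inside a pod.
# "service-b" and the "default" namespace are assumed names.
import socket
import urllib.request

# kube-dns resolves <service>.<namespace>.svc.cluster.local to the clusterIP
cluster_ip = socket.gethostbyname("service-b.default.svc.cluster.local")
print("clusterIP of service-b:", cluster_ip)

# the name is enough; no clusterIP has to be passed as a parameter
with urllib.request.urlopen("http://service-b.default.svc.cluster.local/") as resp:
    print(resp.status)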

  10. Service Discovery: accessing a service inside the cluster with internal DNS
     (diagram: pod_A_1 and pod_A_2 on Node 1 and Node 2 look up service_B via kube-dns and get its clusterIP; iptables translates the clusterIP to endpoint_1 and endpoint_2, i.e. pod_B_1 and pod_B_2)

  11. Internal DNS
     The Kubernetes DNS pod holds 3 containers: kubedns, dnsmasq and healthz. The kubedns process watches the Kubernetes master for changes in Services and Endpoints, and maintains in-memory lookup structures to service DNS requests. The dnsmasq container adds DNS caching to improve performance. The healthz container provides a single health check endpoint while performing dual healthchecks (for dnsmasq and kubedns). The DNS pod is exposed as a Kubernetes Service with a static IP. Once assigned, the kubelet passes the DNS server, configured with the --cluster-dns=10.0.0.10 flag, to each container.

  12. Service Discovery: accessing a LoadBalancer service from outside
     (diagram: VIP:port on the cloud LB -> NodePort at nodeIP:port on Node 1/2/3 -> iptables -> clusterIP:port -> podIP:port of app1_a, app1_b, app1_c)
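     A sketch of publishing such a service with the Kubernetes Python client, assuming kubeconfig access and pods labelled app=app1; the names, ports and fixed nodePort are illustrative.

# Sketch: expose app1 outside the cluster as a NodePort service.
# Selector labels, ports and the fixed node_port are assumptions.
from kubernetes import client, config

config.load_kube_config()                    # or load_incluster_config() inside a pod
core = client.CoreV1Api()

svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="app1"),
    spec=client.V1ServiceSpec(
        type="NodePort",                     # reachable on every node at nodeIP:nodePort
        selector={"app": "app1"},            # matches pods app1_a, app1_b, app1_c
        ports=[client.V1ServicePort(port=80,            # clusterIP:port
                                    target_port=8080,   # podIP:port
                                    node_port=30080)],  # nodeIP:port
    ),
)
core.create_namespaced_service(namespace="default", body=svc)
# With type="LoadBalancer" instead, the cloud LB forwards VIP:port to the NodePorts.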

  13. Ingress Controller: accessing a service from outside with an ingress controller
     (diagram: service_url:port -> ingress controller running on Node 1/2/3 -> podIP:port of app1_a, app1_b, app1_c)

  14. Ingress
     An Ingress is a Kubernetes resource which maps a URL path to a service:port.
     Example: a Service my-wordpress-svc with 2 pods, and an Ingress resource pointing to that service.
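     A sketch of that Ingress resource written with the Kubernetes Python client against the networking/v1 API (the talk predates this API version; the host, path and port number are assumptions).

# Sketch: Ingress mapping a URL path to my-wordpress-svc:80 (networking/v1 API).
# Host, path and port number are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
net = client.NetworkingV1Api()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="my-wordpress"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            host="blog.example.com",                    # assumed service_url
            http=client.V1HTTPIngressRuleValue(paths=[
                client.V1HTTPIngressPath(
                    path="/", path_type="Prefix",
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="my-wordpress-svc",
                            port=client.V1ServiceBackendPort(number=80))))]))]),
)
net.create_namespaced_ingress(namespace="default", body=ingress)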

  15. Ingress Controller
     An ingress controller is a reverse proxy that forwards URLs to endpoints. Its loop (sketched below):
       Watch the Kubernetes API for any Ingress resource change
       Update the configuration of the controller (create/update/delete endpoints, SSL certificates)
       Reload the configuration
     The ingress controller detects Ingress resources and fetches all the endpoints of the service. It does not occupy ports on the nodes, supports TLS access to services, and has a configurable load-balancing strategy.
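     A minimal sketch of that watch/update/reload loop with the Kubernetes Python client; regenerate_and_reload() is a hypothetical placeholder for templating and reloading the proxy configuration, not the real nginx controller logic.

# Sketch: the core loop of an ingress controller.
# regenerate_and_reload() is a placeholder for rendering nginx.conf and reloading nginx.
from kubernetes import client, config, watch

def regenerate_and_reload(ingresses):
    # hypothetical: render upstreams/servers from the Ingress list, then reload the proxy
    print("reloading configuration for", len(ingresses), "ingresses")

config.load_kube_config()
net = client.NetworkingV1Api()
known = {}

# Watch the Kubernetes API for any Ingress resource change ...
for event in watch.Watch().stream(net.list_ingress_for_all_namespaces):
    ing = event["object"]
    key = (ing.metadata.namespace, ing.metadata.name)
    if event["type"] == "DELETED":
        known.pop(key, None)
    else:                                  # ADDED / MODIFIED
        known[key] = ing
    # ... update the configuration and reload it
    regenerate_and_reload(list(known.values()))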

  16. Ingress Controller: generated nginx configuration

  17. Monitor and Alarm
     (diagram: cAdvisor and node-exporter on node1/node2/node3 expose metrics; Prometheus scrapes them and sends alert events to Alertmanager)

  18. Monitor and Alarm
     cAdvisor collects container runtime info, node-exporter collects node runtime info, and Prometheus pulls the info from each node.
     Metrics: node CPU usage, node memory usage, node filesystem usage, node network I/O rates, container CPU usage, container memory usage, container network I/O rates.
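     These metrics can be read back over Prometheus' HTTP query API; a hedged sketch, where the Prometheus address and the exact metric names are assumptions that depend on the deployed exporter versions.

# Sketch: query node and container metrics from the Prometheus HTTP API.
# The Prometheus URL and metric names are assumptions (they vary by exporter version).
import requests

PROM_URL = "http://prometheus.kube-system.svc.cluster.local:9090"  # assumed service

def query(promql):
    r = requests.get(f"{PROM_URL}/api/v1/query", params={"query": promql})
    r.raise_for_status()
    return r.json()["data"]["result"]

# node memory usage (node-exporter)
print(query("node_memory_MemAvailable_bytes"))
# container memory usage (cAdvisor)
print(query('container_memory_usage_bytes{container!=""}'))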

  19. EFK
     (diagram: fluentd running on node1/node2/node3 ships logs to elasticsearch, which is searched through kibana)
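     Logs shipped by fluentd usually land in time-based Elasticsearch indices; the sketch below searches them over the Elasticsearch REST API. The service URL, index pattern and field names follow common fluentd defaults and are assumptions here.

# Sketch: search container logs that fluentd shipped into Elasticsearch.
# Service URL, index pattern and field names are assumed fluentd defaults.
import requests

ES_URL = "http://elasticsearch.kube-system.svc.cluster.local:9200"  # assumed service

query = {
    "size": 20,
    "sort": [{"@timestamp": "desc"}],
    "query": {"match": {"log": "error"}},    # "log" is the usual fluentd field name
}
resp = requests.post(f"{ES_URL}/logstash-*/_search", json=query)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"].get("@timestamp"), hit["_source"].get("log"))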

  20. Cluster Architecture
     (diagram: keystone users deploy VM clusters; each cluster has a master and slaves on a private network and runs Kube-DNS, ingress controller, Prometheus, EFK and Jenkins; a shared Harbor, managed by an admin, is reachable over the public network)

  21. Cluster initialization
     Share one Harbor with multiple clusters so that public projects are shared with all users. Use Heat to create a Nova instance and configure it to run Harbor.
     For each cluster:
       Jenkins master runs as a service; Jenkins slaves run as pods that are created/deleted dynamically
       kube-dns runs as a service with three containers running in the backend
       ingress controller runs as a replication controller with one (or more) replicas and a default-backend service
       node-exporter runs as a daemon set on each node (see the sketch below)
       Prometheus runs as a service
       fluentd runs as a daemon set on each node
       elasticsearch runs as a service
       kibana runs as a service
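     As referenced in the list above, a sketch of creating the node-exporter daemon set with the Kubernetes Python client; the image tag, namespace and port are illustrative assumptions.

# Sketch: run node-exporter as a DaemonSet so one pod lands on every node.
# Image tag, namespace and port are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

labels = {"app": "node-exporter"}
ds = client.V1DaemonSet(
    metadata=client.V1ObjectMeta(name="node-exporter", labels=labels),
    spec=client.V1DaemonSetSpec(
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(
                    name="node-exporter",
                    image="prom/node-exporter:latest",       # assumed image
                    ports=[client.V1ContainerPort(container_port=9100)],
                )]))),
)
apps.create_namespaced_daemon_set(namespace="kube-system", body=ds)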

  22. Mapping keystone users to Harbor
     One keystone project ----> one Harbor user; one keystone project ----> one Harbor project.
     (diagram: the keystone user works through the Dashboard and Magnum-api, which create the Harbor user/project; images are then pushed to and pulled from Harbor)
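     A hedged sketch of the mapping step: when a new Keystone project appears, create the matching Harbor project through Harbor's REST API. The Harbor URL, admin credentials and API path are assumptions, and the path differs between Harbor releases.

# Sketch: create a Harbor project matching a Keystone project.
# Harbor URL, admin credentials and the API path are assumptions;
# the path differs between Harbor releases (/api/projects vs /api/v2.0/projects).
import requests

HARBOR_URL = "https://harbor.example.com"          # assumed shared Harbor
ADMIN_AUTH = ("admin", "<harbor-admin-password>")  # placeholder credentials

def ensure_harbor_project(keystone_project_name):
    body = {"project_name": keystone_project_name, "public": 0}  # private project
    resp = requests.post(f"{HARBOR_URL}/api/projects",
                         json=body, auth=ADMIN_AUTH)
    if resp.status_code not in (201, 409):         # 409: project already exists
        resp.raise_for_status()

ensure_harbor_project("demo-project")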
