  1. Docker Networking Workshop

  2. Agenda
     1. Detailed Overview
     2. Docker Networking Evolution
     3. Use Cases
        − Single-host networking with the Bridge driver
        − Multi-host networking with the Overlay driver
        − Connecting to existing VLANs with the MACVLAN driver
     4. Service Discovery
     5. Routing Mesh
     6. HTTP Routing Mesh (HRM) with Docker Datacenter
     7. Docker Network Troubleshooting
     8. Hands-on Exercises

  3. Detailed Overview: BACKGROUND, CONTAINER NETWORK MODEL (CNM), LIBNETWORK, DRIVERS…

  4. Background: Networking is Important!
     • Networking is integral to distributed applications
     • But networking is hard, vast, and complex!
     • Docker networking goal: MAKE NETWORKING SIMPLE!


  6. Docker Networking Goals
     • Make networks first-class citizens in a Docker environment
     • Make applications more portable
     • Make multi-host networking simple
     • Make networks secure and scalable
     • Create a pluggable network stack
     • Support multiple OS platforms

  7. Docker Networking Design Philosophy
     • Put users first: developers and operations
     • Plugin API design
     • Batteries included, but removable

  8. Container Network Model (CNM): Sandbox, Endpoint, Network

  9. Containers and the CNM
     (diagram: containers C1, C2, and C3, each with a sandbox and one or more endpoints, attached to Network A and Network B)
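
     A container's sandbox holds one endpoint per network it joins, so a container such as C2 can sit on both Network A and Network B at once. A minimal sketch of that topology with standard Docker commands (network and container names are illustrative):

     $ docker network create -d bridge netA            # two user-defined networks
     $ docker network create -d bridge netB
     $ docker run -d --name c2 --network netA alpine sleep 3600   # first endpoint, on netA
     $ docker network connect netB c2                  # second endpoint, on netB
     $ docker inspect -f '{{json .NetworkSettings.Networks}}' c2  # one entry per attached network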

  10. What is Libnetwork? Libnetwork is Docker’s native implementation of the CNM

  11. What is Libnetwork?
     • Docker’s native implementation of the CNM
     • Library containing everything needed to create and manage container networks
     • Provides built-in service discovery and load balancing
     • Provides a consistent, versioned API
     • Pluggable model (native and remote/3rd-party drivers)
     • Gives containers direct access to the underlay network without port mapping and without a Linux bridge
     • Multi-platform, written in Go, open source

  12. Libnetwork and Drivers
     • Libnetwork has a pluggable driver interface
     • Drivers are used to implement different networking technologies
     • Built-in drivers are called local drivers, and include: bridge, host, overlay, MACVLAN
     • 3rd-party drivers are called remote drivers, and include: Calico, Contiv, Kuryr, Weave…
     • Libnetwork also supports pluggable IPAM drivers
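
     The driver backing each network shows up directly in the CLI, and an IPAM driver can likewise be named explicitly at creation time; a quick sketch (the network name is illustrative, and "default" is the built-in IPAM driver):

     $ docker network ls                       # DRIVER column: bridge, host, null, overlay, ...
     $ docker network create -d bridge --ipam-driver default demo_net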

  13. Show Registered Drivers
     $ docker info
     Containers: 0
      Running: 0
      Paused: 0
      Stopped: 0
     Images: 2
     <snip>
     Plugins:
      Volume: local
      Network: null bridge host overlay
     ...

  14. Libnetwork Architecture
     (diagram: Libnetwork (CNM) sits on top of the Docker Engine and comprises native and remote network drivers, native and remote IPAM drivers, load balancing, service discovery, and the network control plane)

  15. Networks and Containers Defer to the Driver
     • docker network create -d <driver> …
     • docker run --network …
     (diagram: the Engine passes both operations to Libnetwork, which defers to the network driver)

  16. Key Advantages
     • Pluggable: flexibility
     • Decentralized: highly available
     • Docker-native UX and API: user friendly
     • Out-of-the-box support with Docker Datacenter
     • Distributed: scalability + performance
     • Cross-platform

  17. Detailed Overview: Summary
     • The CNM is an open-source container networking specification contributed to the community by Docker, Inc.
     • The CNM defines sandboxes, endpoints, and networks
     • Libnetwork is Docker’s implementation of the CNM
     • Libnetwork is extensible via pluggable drivers
     • Drivers allow Libnetwork to support many network technologies
     • Libnetwork is cross-platform and open-source
     The CNM and Libnetwork simplify container networking and improve application portability

  18. Q & A

  19. Break

  20. Docker Networking Evolution

  21. Docker Networking Evolution (timeline across Docker 1.7, 1.8, 1.9, 1.10, 1.11, 1.12): Libnetwork (CNM), Network UX/API, Plugins, IPAM, Multi-host Networking, Aliases, DNS Round Robin LB, Service Discovery, Distributed DNS, Secure out-of-the-box, Distributed KV store, Load balancing, Swarm integration, Built-in routing mesh, Native multi-host overlay, …

  22. Docker Networking on Linux
     • The Linux kernel has extensive networking capabilities (TCP/IP stack, VXLAN, DNS…)
     • Docker networking utilizes many Linux kernel networking features (network namespaces, bridges, iptables, veth pairs…)
     • Linux bridges: L2 virtual switches implemented in the kernel
     • Network namespaces: used for isolating container network stacks
     • veth pairs: connect containers to container networks
     • iptables: used for port mapping, load balancing, network isolation…
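
     These primitives can be inspected directly on a Docker host; a rough sketch (interface names are generated, and the netns path is where Docker typically keeps its namespace files, so treat it as an assumption):

     $ ip link show type bridge        # Linux bridges, e.g. docker0
     $ ip link show type veth          # host ends of container veth pairs
     $ sudo iptables -t nat -L -n      # NAT rules Docker adds for port mapping
     $ sudo ls /run/docker/netns       # network namespaces managed by Docker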

  23. Use Cases: SINGLE-HOST NETWORKING WITH THE BRIDGE DRIVER

  24. What is Docker Bridge Networking?
     • Single-host networking!
     • Simple to configure and troubleshoot
     • Useful for basic test and dev
     (diagram: containers attached to a bridge on a single Docker host)

  25. What is Docker Bridge Networking?
     • The bridge driver creates a bridge (virtual switch) on a single Docker host
     • Containers get plumbed into this bridge
     • All containers on this bridge can communicate
     • The bridge is a private network restricted to a single Docker host

  26. What is Docker Bridge Networking?
     (diagram: three Docker hosts, each with its own bridge network(s); host 3 has two bridges, Bridge 1 and Bridge 2)
     Containers on different bridge networks cannot communicate

  27. Use of the Term “Bridge”
     • The bridge driver creates simple Linux bridges
     • All Docker hosts have a pre-built network called “bridge”
        − This was created by the bridge driver
        − This is the default network that all new containers will be connected to (unless you specify a different network when the container is created)
     • You can create additional user-defined bridge networks
     Network attributes:
        "Name": "bridge",
        "Id": "2497474b…7f2b4",
        "Scope": "local",
        "Driver": "bridge",
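
     Creating and examining a user-defined bridge network looks like this (the network and container names are illustrative):

     $ docker network create -d bridge my_bridge    # user-defined bridge network
     $ docker network ls                            # shows "bridge" (default) plus "my_bridge"
     $ docker network inspect my_bridge             # Name, Id, Scope: local, Driver: bridge
     $ docker run -d --name web --network my_bridge nginx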

  28. Bridge Networking in a Bit More Detail
     • The bridge created by the bridge driver for the pre-built bridge network is called docker0
     • Each container is connected to a bridge network via a veth pair
     • Provides single-host networking
     • External access requires port mapping
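
     The docker0 bridge and the veth plumbing are visible with standard iproute2 tools; a sketch (the generated interface names will differ from host to host):

     $ ip addr show docker0      # the default bridge created by the bridge driver
     $ ip link show type veth    # host ends of the container veth pairs
     $ bridge link show          # which veth interfaces are attached to which bridge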

  29. Docker Bridge Networking and Port Mapping
     $ docker run -p 8080:80 ...
     (diagram: host port :8080 on 172.14.3.55 maps to container port :80 on 10.0.0.8, across the bridge to the L2/L3 physical network)
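
     A mapping like the one above can be created and verified as follows (the image, name, and ports are illustrative):

     $ docker run -d --name web -p 8080:80 nginx   # host :8080 -> container :80
     $ docker port web                             # prints 80/tcp -> 0.0.0.0:8080
     $ curl http://localhost:8080                  # reaches the container via the mapped port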

  30. Bridge Networking Summary
     • Creates a private internal network (single-host)
     • External access is via port mappings on a host interface
     • There is a default bridge network called “bridge” (backed by the “docker0” Linux bridge)
     • Can create user-defined bridge networks

  31. Demo: BRIDGE

  32. Q & A

  33. Use Cases: MULTI-HOST NETWORKING WITH THE OVERLAY DRIVER (IN SWARM MODE)

  34. What is Docker Overlay Networking?
     The overlay driver enables simple and secure multi-host networking
     (diagram: containers A through F on three Docker hosts attached to the same overlay network)
     All containers on the overlay network can communicate!
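
     In swarm mode an overlay network is created once, on a manager, and is then available to services on any node; a minimal sketch (the names and advertise address are illustrative):

     $ docker swarm init --advertise-addr 172.31.1.5     # on the first manager
     $ docker network create -d overlay my_overlay       # swarm-scoped network
     $ docker service create --name web --network my_overlay --replicas 3 nginx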

  35. Building an Overlay Network (High Level)
     (diagram: containers 10.0.0.3 and 10.0.0.4 on an overlay network, 10.0.0.0/24, spanning Docker host 1 at 172.31.1.5 and Docker host 2 at 192.168.1.25)

  36. Docker Overlay Networks and VXLAN
     • The overlay driver uses VXLAN technology to build the network
     • A VXLAN tunnel is created through the underlay network(s)
     • At each end of the tunnel is a VXLAN tunnel end point (VTEP)
     • The VTEP performs encapsulation and de-encapsulation
     • The VTEP exists in the Docker host’s network namespace
     (diagram: a VXLAN tunnel between VTEPs on Docker host 1, 172.31.1.5, and Docker host 2, 192.168.1.25, over the layer-3 underlay)

  37. Building an Overlay Network (More Detailed)
     (diagram: on each host a network namespace contains a bridge Br0 and a VTEP listening on :4789/udp; C1 at 10.0.0.3 and C2 at 10.0.0.4 attach to Br0 via veth pairs, and the VTEPs build a VXLAN tunnel across the layer-3 underlay between 172.31.1.5 and 192.168.1.25)
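
     Those namespaces, and the VXLAN interface acting as the VTEP, can be peeked at from the host; a rough sketch (the netns path, file names, and interface names vary by Docker version, so treat them as assumptions):

     $ sudo ls /run/docker/netns                    # overlay namespaces, e.g. 1-<network-id>
     $ sudo nsenter --net=/run/docker/netns/1-<network-id> ip -d link show type vxlan   # VNI, dstport 4789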

  38. Overlay Networking Ports
     • Management plane (TCP 2377): cluster control
     • Control plane (TCP/UDP 7946): network control
     • Data plane (UDP 4789): application traffic (VXLAN)
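
     On hosts running a local firewall, these ports must be open between swarm nodes; a sketch using ufw (the firewall tool is an assumption, so adapt to your environment):

     $ sudo ufw allow 2377/tcp    # management plane: cluster control
     $ sudo ufw allow 7946/tcp    # control plane: network control
     $ sudo ufw allow 7946/udp
     $ sudo ufw allow 4789/udp    # data plane: VXLAN application traffic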

  39. Overlay Networking Under the Hood
     • Virtual eXtensible LAN (VXLAN) is the data transport (RFC 7348)
     • Creates a new L2 network over an L3 transport network
     • Point-to-multipoint tunnels
     • The VXLAN Network ID (VNID) is used to map frames to their overlay network
     • Uses proxy ARP
     • Invisible to the container
     • A docker_gwbridge virtual switch per host provides the default route
     • Leverages the distributed KV store created by Swarm
     • The control plane is encrypted by default
     • The data plane can be encrypted if desired
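
     Data-plane encryption is opted into per network when it is created (the network name is illustrative):

     $ docker network create -d overlay --opt encrypted secure_net   # encrypts VXLAN traffic on this network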

  40. Demo: OVERLAY

  41. Q & A

  42. Break

  43. Use Cases: CONNECTING TO EXISTING VLANS WITH THE MACVLAN DRIVER

  44. What is MACVLAN?
     • A way to attach containers to existing networks and VLANs
     • Good for mixing containers with VMs and physical machines
     • Ideal for apps that are not ready to be fully containerized
     • Uses the well-known MACVLAN Linux network type
     • Nothing to do with Mac OS!
     (diagram: containers at 10.0.0.8 and 10.0.0.9, a VM at 10.0.0.68, and a physical machine at 10.0.0.25 all on the same L2/L3 physical underlay, 10.0.0.0/24, via the host’s eth0 at 10.0.0.40)
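
     A MACVLAN network is bound to a parent interface on the host and to the subnet of the existing VLAN; a minimal sketch matching the 10.0.0.0/24 example above (the parent interface, gateway, addresses, and names are assumptions about the host):

     $ docker network create -d macvlan \
         --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
         -o parent=eth0 macnet
     $ docker run -d --name app --network macnet --ip 10.0.0.8 nginx   # gets an address on the existing VLAN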
