SLIDE 1

Scheduling a Fuller House: Container Management

Sharma Podila, Andrew Spyker - Senior Software Engineers

SLIDE 2

About Netflix

  • 81.5M members
  • 2000+ employees (1400 tech)
  • 190+ countries
  • > 100M hours watched per day
  • > ⅓ of NA internet download traffic
  • 500+ Microservices
  • Many tens of thousands of VMs
  • 3 regions across the world

SLIDE 3

Agenda

  • Why containers at Netflix?
  • What did we build and what did we learn?
  • What are our current and future workloads?

SLIDE 4

Why a 2nd edition of virtualization?

  • Given our resilient, cloud native, CI/CD devops enabled, elastically scalable, virtual machine based architecture, did we really need containers?

SLIDE 5

Motivating factors for containers

  • Simpler management of compute resources
  • Simpler deployment packaging artifacts for compute jobs
  • Need for a consistent local developer environment

SLIDE 6

Simpler Compute Management & Packaging

Batch/stream processing jobs:

  • Here are the files to run my process
  • I need m cores, n disk, and o memory
  • Please just run it for me!

Service style jobs (VMs):

  • Use tested/secure base AMI
  • Bake an AMI
  • Define launch config
  • Choose t-shirt sized instance
  • Canary & red/black ASGs

SLIDE 7

Consistent developer experience

  • Many years focused on
    ○ Build, bake / cloud deploy / operational experience
    ○ Not as much time focused on developer experience
  • New Netflix local developer experience based on Docker
  • Has had a benefit in both directions
    ○ Cloud-like local development environment
    ○ Easier operational debugging of cloud workloads

SLIDE 8

What about resource optimization?

  • Not absolutely required, and easier to get wins at larger scale across the larger virtual machine fleet
  • However, there are potential benefits to
    ○ An elastic resource pool for scaling batch & ad hoc jobs
    ○ Reliable smaller instance sizes for NodeJS
    ○ Cross-Netflix resource optimizations
      ■ Trough usage, instance type migration

SLIDE 9

Agenda

  • Why containers at Netflix?
  • What did we build and what did we learn?
  • What are our current and future workloads?

SLIDE 10

Lesson: Support containers by leveraging the existing Netflix IaaS focused cloud platform

[Diagram: two parallel stacks in an EC2 VPC. Existing VMs: apps on the cloud platform (Atlas, Eureka, Edda; metrics, IPC, health) with the AWS AutoScaler managing VMs. Titus containers: apps on the same cloud platform components, with Titus Job Control managing service and batch containers on VMs.]

SLIDE 11

Why: a single consistent cloud platform

[Diagram: Netflix cloud infrastructure running VMs and containers side by side in the same VPC; apps on both use the same cloud platform (Atlas, Eureka, Edda; metrics, IPC, health), with the AWS AutoScaler managing VMs and Titus Job Control managing service and batch containers.]

SLIDE 12

Lesson: Buy vs. Build, Why build our own?

  • Looking across other container management solutions
    ○ Mesos, Kubernetes, and Swarm
  • Proven solutions are focused on the datacenter
  • Newer solutions are
    ○ Working to abstract datacenter and cloud
    ○ Delivering more than a cluster manager
      ■ PaaS, service discovery, IPC
      ■ Continuous deployment
      ■ Metrics
    ○ Not yet at our level of scale
  • Not appropriate for Netflix

SLIDE 13

“Project Titus” (Firehose peek)

[Architecture diagram: the Titus UI and CI/CD integrations call a Rhea based Titus API in front of Titus Master (Job Management & Scheduler, built on Fenzo), which persists to Cassandra, coordinates via Zookeeper, and drives the Mesos Master and the EC2 Autoscaling API. Titus Agents on Amazon VMs run Docker containers through a Titus executor with metrics and logging agents, ZFS, a Mesos agent, pod & VPC networking drivers, and an AWS metadata proxy; images are pulled from the Docker Registry and logs are rotated to S3.]

SLIDE 14

Is that all?

SLIDE 15

Container Execution

[Same architecture diagram, highlighting the container execution pieces: the Titus Agent on Amazon VMs with Docker, the Titus executor, metrics and logging agents, ZFS, the Mesos agent, pod & VPC net drivers, and the AWS metadata proxy.]

SLIDE 16

Lesson: What you lose with Docker on EC2


  • Networking: VPC
  • Security: Security Groups, IAM Roles
  • Context: Instance Metadata, User Data / Env Context
  • Operational Visibility: Metrics, Health checking
  • Resource Isolation: Networking, Local Storage

MULTI-TENANT

SLIDE 17

Lesson: Making Containers Act Like VM’s

  • Built: EC2 metadata proxy (sketched below)
    ○ Provides the overridden, scheduled IAM role and instance id
    ○ Proxies other values
  • Provided: environmental context
    ○ Titus specific job and task info
    ○ ASG app, stack, sequence, and other EC2 standard values
  • Why? Now:
    ○ Service discovery registration works
    ○ Amazon service SDK based applications work
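
Below is a minimal sketch of the metadata proxy idea, not the actual Titus implementation: answer the IAM role path with the container's scheduled role and pass other paths through to the real EC2 metadata service. The TITUS_IAM_ROLE variable and the port are assumptions for illustration; the real proxy also vends role credentials, and (per the networking slide) container traffic to 169.254.169.254 is NATed to the proxy with IPTables.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class MetadataProxySketch {
    private static final String REAL_ENDPOINT = "http://169.254.169.254";

    public static void main(String[] args) throws IOException {
        // Hypothetical env var: the IAM role the scheduler assigned this container.
        String role = System.getenv().getOrDefault("TITUS_IAM_ROLE", "my-container-role");
        HttpServer server = HttpServer.create(new InetSocketAddress(8099), 0);
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            if (path.startsWith("/latest/meta-data/iam/security-credentials")) {
                respond(exchange, 200, role); // override: per-container role, not the host's
            } else {
                forward(exchange, path);      // pass through instance id, AZ, user data, ...
            }
        });
        server.start();
    }

    private static void respond(HttpExchange ex, int status, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        ex.sendResponseHeaders(status, bytes.length);
        try (OutputStream os = ex.getResponseBody()) { os.write(bytes); }
    }

    private static void forward(HttpExchange ex, String path) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(REAL_ENDPOINT + path).openConnection();
        try (InputStream in = conn.getInputStream()) {
            respond(ex, conn.getResponseCode(),
                    new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}
```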

SLIDE 18

Lesson: Networking will continue to evolve

  • Started with batch
    ○ Started with "bridge" networking with port mapping
    ○ Added "host" with port resource mapping (for performance)
    ○ Continue to use "bridge" without port mapping
  • Service style apps added
    ○ Added "nfvpc": VPC IP per container via a libnetwork plugin
    ○ Removed "host" (no value over VPC IP per container)
    ○ Changed "nfvpc" to be pod based with a custom executor (no plugin)
    ○ Added security groups to "nfvpc"

SLIDE 19

Plumbing VPC Networking into Docker

[Diagram: an EC2 VM running the Titus Agent (its own security group on eth0/eni0). Tasks that need no routable IP share the docker0 bridge, with IPTables NAT redirecting 169.254.169.254 to the EC2 metadata proxy. Additional ENIs (eth1/eni1 with SecGrp X, eth2/eni2 with SecGrp Y) supply per-container IPs 1-3; each app container sits behind a pod root joined by a veth<id> pair, and Linux policy based routing steers each IP's traffic out through the matching ENI.]

SLIDE 20

Lesson: Secure Multi-tenancy is Hard

Common to VMs, tiered security needed:

  • Protect the reduced host IAM role; allow containers to have their own specific IAM roles
  • Needed to support the same security groups in container networking as in VMs

User namespacing:

  • Docker 1.10 introduced user namespaces
  • Didn't work with shared network namespaces
  • Docker 1.11 fixed shared network namespaces
  • But namespacing is per daemon, not per container as hoped
  • Waiting on Linux
  • Considering mass chmod / ZFS clones
SLIDE 21

Operational Visibility Evolution

  • What is a "node"? Containers on VMs
  • Are soft limits / bursting a good thing?
    ○ Only until percent utilization and outliers are considered
  • System level metrics
    ○ Currently: hand coded cgroup scraping (sketched below)
    ○ Considering Intel Snap as a replacement
  • Pollers: metrics, health, discovery
    ○ Created a common Edda "server group" view
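
"Hand coded cgroup scraping" amounts to reading counters out of the cgroup filesystem on the agent. A minimal cgroup v1 sketch, assuming Docker's default /sys/fs/cgroup/<subsystem>/docker/<container-id> layout (the deck doesn't show the real agent code):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CgroupScraper {
    // cgroup v1 root; Docker's default per-container directory layout is assumed.
    private static final Path CGROUP_ROOT = Paths.get("/sys/fs/cgroup");

    static long readCounter(String subsystem, String containerId, String file)
            throws IOException {
        Path p = CGROUP_ROOT.resolve(subsystem).resolve("docker")
                .resolve(containerId).resolve(file);
        return Long.parseLong(new String(Files.readAllBytes(p)).trim());
    }

    public static void main(String[] args) throws IOException {
        String id = args[0]; // full Docker container id
        // Cumulative CPU time (nanoseconds) and current memory usage (bytes).
        long cpuNanos = readCounter("cpuacct", id, "cpuacct.usage");
        long memBytes = readCounter("memory", id, "memory.usage_in_bytes");
        System.out.printf("cpu=%d ns, mem=%d bytes%n", cpuNanos, memBytes);
    }
}
```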

SLIDE 22

Future Execution Focus


  • Better Isolation (agents, networking, block I/O, etc.)
  • Exposing our implementation of "pods" to users
  • Better resiliency (DNS dependencies reduced)
SLIDE 23

Job Management and Resource Scheduling

[Same architecture diagram, highlighting job management and resource scheduling: Titus Master's Job Management & Scheduler with Fenzo, backed by Cassandra and Zookeeper, driving the Mesos Master and the EC2 Autoscaling API.]

SLIDE 24

Lesson: Complexity in scheduling

  • Resilience
    ○ Balance across EC2 zones, and across instances within a zone (see the constraint sketch below)
  • Security
    ○ Two level resource for ENIs
  • Placement optimization
    ○ Resource affinity
    ○ Task locality
    ○ Bin packing (autoscaling)
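
Fenzo (next slide) models rules like these as plugins. As one concrete example, here is a sketch of a hard constraint that pins a task to a given EC2 zone, assuming the agent advertises its zone as a Mesos attribute named "zone" (the attribute name is an assumption, and this is not Titus's actual constraint code):

```java
import com.netflix.fenzo.ConstraintEvaluator;
import com.netflix.fenzo.TaskRequest;
import com.netflix.fenzo.TaskTrackerState;
import com.netflix.fenzo.VirtualMachineCurrentState;
import org.apache.mesos.Protos;

public class ZoneConstraint implements ConstraintEvaluator {
    private final String requiredZone;

    public ZoneConstraint(String requiredZone) { this.requiredZone = requiredZone; }

    @Override
    public String getName() { return "ZoneConstraint-" + requiredZone; }

    @Override
    public Result evaluate(TaskRequest task, VirtualMachineCurrentState targetVM,
                           TaskTrackerState trackerState) {
        // Read the host's "zone" Mesos attribute (an assumed attribute name).
        Protos.Attribute attr =
                targetVM.getCurrAvailableResources().getAttributeMap().get("zone");
        String zone = (attr == null) ? null : attr.getText().getValue();
        if (requiredZone.equals(zone))
            return new Result(true, null);
        return new Result(false, "host in zone " + zone + ", need " + requiredZone);
    }
}
```

Tasks attach such evaluators through their hard constraints; balancing (rather than pinning) would use a soft constraint that scores candidate hosts instead of rejecting them.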

SLIDE 25

Lesson: Keep resource scheduling extensible

Fenzo - extensible scheduling library. Features:

  • Heterogeneous resources & tasks
  • Autoscaling of the Mesos cluster
    ○ Multiple instance types
  • Plugin-based scheduling objectives
    ○ Bin packing, etc.
  • Plugin-based constraints evaluators
    ○ Resource affinity, task locality, etc.
  • Scheduling actions visibility

https://github.com/Netflix/Fenzo
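
A minimal sketch of driving Fenzo with the builder API from the repository above. The empty task and lease lists are stand-ins for what a real Mesos framework supplies from its offer callbacks:

```java
import java.util.Collections;
import java.util.List;

import com.netflix.fenzo.SchedulingResult;
import com.netflix.fenzo.TaskRequest;
import com.netflix.fenzo.TaskScheduler;
import com.netflix.fenzo.VirtualMachineLease;
import com.netflix.fenzo.plugins.BinPackingFitnessCalculators;

public class FenzoSketch {
    public static void main(String[] args) {
        // Build a scheduler with a CPU+memory bin packing objective.
        TaskScheduler scheduler = new TaskScheduler.Builder()
                .withLeaseOfferExpirySecs(60)
                .withLeaseRejectAction(lease ->
                        System.out.println("Declining lease on " + lease.hostname()))
                .withFitnessCalculator(BinPackingFitnessCalculators.cpuMemBinPacker)
                .build();

        // Stand-ins: pending tasks and fresh resource offers from the framework.
        List<TaskRequest> pendingTasks = Collections.emptyList();
        List<VirtualMachineLease> newLeases = Collections.emptyList();

        SchedulingResult result = scheduler.scheduleOnce(pendingTasks, newLeases);
        result.getResultMap().forEach((hostname, assignment) ->
                assignment.getTasksAssigned().forEach(t ->
                        System.out.println("Assign " + t.getTaskId() + " -> " + hostname)));
    }
}
```

In a real framework this runs in a loop: launch the assigned tasks on Mesos and confirm them back to Fenzo so its view of cluster state stays accurate.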

SLIDE 26

Cluster Autoscaling Challenge

[Diagram: the same tasks placed on Hosts 1-4 two ways: spread across all four hosts vs. packed onto fewer of them. For long running stateful services the packed placement matters, because only fully idle hosts can be terminated when scaling the cluster down.]

SLIDE 27

Resources assigned in Titus

  • CPU, memory, disk capacity
  • Per container AWS EC2 security groups, IP, and network bandwidth via a custom driver
  • Abstracting out EC2 instance types
SLIDE 28

Security groups and their resources

A two level resource per EC2 instance: N ENIs, each with M IPs. (An illustrative allocation sketch follows.)

  ENI 0: assigned security group SG1, used IPs 2 of 7
  ENI 1: assigned security groups SG1, SG2, used IPs 1 of 7
  ENI 2: assigned security group SG3, used IPs 7 of 7
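
One way to picture scheduling against this two level resource (illustrative only; the class and method names are hypothetical, not Titus code): prefer an ENI that already carries exactly the task's security groups and has a free IP, otherwise claim an unassigned ENI.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class EniAllocator {
    static final int IPS_PER_ENI = 7; // M in the slide's terms

    static class Eni {
        Set<String> securityGroups = new HashSet<>(); // empty = not yet assigned
        int usedIps = 0;
    }

    final List<Eni> enis; // N ENIs on this instance

    EniAllocator(int eniCount) {
        enis = new ArrayList<>();
        for (int i = 0; i < eniCount; i++) enis.add(new Eni());
    }

    /** Place a task wanting this security-group set; returns null if no capacity. */
    Eni assign(Set<String> wantedGroups) {
        // Level 1: an ENI already carrying exactly these groups, with a free IP.
        for (Eni eni : enis)
            if (eni.securityGroups.equals(wantedGroups) && eni.usedIps < IPS_PER_ENI) {
                eni.usedIps++; // level 2: consume one of its M IPs
                return eni;
            }
        // Otherwise claim an unassigned ENI and attach the requested groups.
        for (Eni eni : enis)
            if (eni.securityGroups.isEmpty()) {
                eni.securityGroups.addAll(wantedGroups);
                eni.usedIps = 1;
                return eni;
            }
        return null; // this instance cannot host these security groups right now
    }
}
```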

SLIDE 29

Lesson: Scheduling Vs. Job Management


Scheduling resources to tasks is common. Lifecycle management is not.

SLIDE 30

Lesson: Scheduling Vs. Job Management

Task scheduling concerns

  • Assign resources to tasks
  • Cluster wide optimizations
    ○ Bin packing
    ○ Global constraints, like SLAs
  • Task preferences and constraints
    ○ Locality with other tasks
    ○ Resource affinity

Job manager concerns

  • Managing task/instance counts
  • Creating metadata, defining constraints
  • Lifecycle management
    ○ Replace failed task executions
  • Handle failures
    ○ Rate limit requeuing & relaunching (sketched below)
    ○ Time out tasks stuck in transitional states
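
As one concrete reading of "rate limit requeuing & relaunching": back off relaunches of a repeatedly failing task exponentially. A sketch with an assumed policy and hypothetical names, not Titus internals:

```java
import java.util.concurrent.TimeUnit;

public class RelaunchPolicy {
    private static final long BASE_DELAY_MS = 1_000;
    private static final long MAX_DELAY_MS = TimeUnit.MINUTES.toMillis(10);

    private int consecutiveFailures = 0;

    /** Called when a task execution fails; how long to wait before requeuing. */
    public long nextRelaunchDelayMs() {
        consecutiveFailures++;
        // Exponential backoff, capped so a flapping task still retries eventually.
        long delay = BASE_DELAY_MS << Math.min(consecutiveFailures, 10);
        return Math.min(delay, MAX_DELAY_MS);
    }

    /** Called once the task is running healthily; resets the backoff. */
    public void onHealthy() {
        consecutiveFailures = 0;
    }
}
```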

SLIDE 31

Future Job Management & Scheduling Focus

  • More resources to track: GPUs
  • Automatic resource affinity with heterogeneous instances
  • SLAs
    ○ Latencies for services
    ○ Throughput for batch
    ○ Task preemptions

SLIDE 32

Things we didn’t cover in this talk

  • Overall integration
    ○ Chaos, continuous delivery, performance insight
  • Container execution
    ○ Logging (live log access & S3 log rotation)
    ○ Liveness and health checking
    ○ Isolation (disk usage, networking, block I/O)
    ○ Image registry (metrics, security scanning)
  • Scheduling
    ○ Autoscaling heterogeneous pools
    ○ Host-task fitness criteria
  • API
    ○ Extensibility, polymorphism, SLA and job/container ownership

SLIDE 33

Agenda

  • Why containers at Netflix?
  • What did we build and what did we learn?
  • What are our current and future workloads?

SLIDE 34

Current Titus Production Usage

  • Autoscaling
    ○ 100s of r3.8xls
    ○ Each 32 vCPUs, 244 GB
  • Peak
    ○ Thousands of cores
    ○ Tens of TBs of memory
  • Thousands of containers/day
    ○ ~100 different images

SLIDE 35

Workloads, Past

  • Most current usage is batch
    ○ Algorithm training, ad hoc reporting jobs
  • Sampling:
    ○ Training of "sims" and A/B test models
    ○ Open Connect device/IX reporting
    ○ Web security scanning and analysis
    ○ Social media analytics updates

SLIDE 36

Workloads, Now

  • Spent the last five months adding service style support
  • First "line of fire" customer requests already received
  • Larger scale shadow and trickle traffic throughout Q2
  • First service style apps
    ○ Finer grained instances: NodeJS
    ○ Docker provided local developer experience

SLIDE 37

Workloads, Coming

  • Media encoding
    ○ Thousands of VMs
    ○ VM based resource scheduling
    ○ Considering containers for faster start-up
    ○ Internal spot market: trough borrowing
  • SPaaS: Stream Processing as a Service
    ○ 10s of thousands of containers
    ○ Converting its scheduling systems to Titus

SLIDE 38

Questions?

SLIDE 39

Other Netflix QCon Talks

Title                                                 Time            Speaker(s)
The Netflix API Platform for Server-Side Scripting    Monday 10:35    Katharina Probst
Scheduling A Fuller House: Container Mgmt @ Netflix   Tuesday 10:35   Andrew Spyker & Sharma Podila
Chaos Kong - Endowing Netflix with Antifragility      Tuesday 11:50   Luke Kosewski
The Evolution of the JavaScript                       Wednesday 4:10  Jafar Husain
Async Programming in JS: The End of the Loop          Friday 9:00     Jafar Husain