INCREMENTALISM: An Industrial Strategy For Adopting Modern Automation

BACKGROUND
MONOCULTURE TROUBLEMAKER RDBMS ARE RAD OPERATOR AND ENGINEER
Industrial Techno Revolution: Development and Operational Practices
- 1. Development Practices
- 2. Secrets Management
- 3. Packaging
- 4. Developer-centric, Self-Healing Applications
- 5. Data Center Aware Services
- 6. Infrastructure Manipulation
YOU ARE A BIT-CHUCKING TECHNO INDUSTRIALIST.
DATA CENTERS ARE YOUR FACTORIES. NETWORKS ARE YOUR ROADS. YOUR APP PRODUCES WIDGETS. MICROSERVICES ARE YOUR ROBOTS ON THE FACTORY LINE.
WIDGET SHIPPING
REDUNDANCY
CRITICAL INFRASTRUCTURE
MY APP
Microservices: PROFILE, MESSAGES, UPSELL, CART, RELATED, COMMENTS, SOCIAL
HTML/JSON/gRPC Response
- New program!
- Use a piece of user data
- Talks to a database
- Returns something useful
- Highly-Available
- Self-Healing
- Feedback

BUSINESS VALUE
INPUT
DEPENDENCY
PROPERTIES OF THE APPLICATION
VALUE
DISTRACTING, REQUIRED, NECESSARY COMPLEXITY
WIDGET
INDUSTRIAL COMPUTING
RAW MATERIAL REDUNDANT ROBOTS REPLACEABLE PARTS QUALITY-MANAGEMENT WIDGET STORAGE
MY RESULT MY APP
Codified Development Environment
Reproducible Dev Environments
Disposable R&D Workspace
Shared Developer Workspace
Developer-driven Infrastructure
Terminal
$ $EDITOR Vagrantfile
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"
  config.vm.box_url = "https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-vagrant.box"
  config.vm.network "private_network", ip: "192.168.33.10"
  config.vm.provision "shell", path: "setup.sh"
end
$ cat setup.sh
apt-get install -y postgresql tmux
Terminal
my-laptop$ vagrant up --destroy-on-error
my-laptop$ vagrant ssh
vm$ uname -a | tee /vagrant/uname.out
Linux compton 2.6.24-19-server #1 SMP Sat Jul 12 00:40:01 UTC 2008 i686 GNU/Linux
vm$ logout
Shared connection to 192.168.39.130 closed.
my-laptop$ cat uname.out
Linux compton 2.6.24-19-server #1 SMP Sat Jul 12 00:40:01 UTC 2008 i686 GNU/Linux
Terminal
$ $EDITOR Vagrantfile
Vagrant.configure("2") do |config|
  config.ssh.shell = "sh"
  config.vm.synced_folder ".", "/vagrant", nfs: true, id: "vagrant-root"
end
Terminal
my-laptop$ $EDITOR myapp.go
my-laptop$ GOOS=linux GOARCH=amd64 go build -o myapp-linux
my-laptop$ GOOS=freebsd GOARCH=amd64 go build -o myapp-freebsd
my-laptop$ vagrant ssh
vm$ /vagrant/myapp-linux
vm$ logout
my-laptop$ vagrant up freebsd-vm1
my-laptop$ vagrant ssh freebsd-vm1
freebsd-vm1$ /vagrant/myapp-freebsd
freebsd-vm1$ logout
$HOME/go/src/github.com/hashicorp/myapp/Vagrantfile
Secret Sprawl
Break Glass Procedures
Audit Logs
Secrets Lifecycle Management

"I want to deploy this app to prod. Password-less logins are disabled on the databases! Now what?"
- Private GitHub repo
- Commit passwords inline in SCM
- Switch creds based on $HOSTNAME?
- Establish a protocol for acquiring credentials at runtime
Terminal
$ vault read postgresql/creds/readonly
Key              Value
lease_id         postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
lease_duration   1h0m0s
lease_renewable  true
password         f8a93086-b11d-10cd-8795-f537a10de712
username         token-9e57c18f-ac99-8e29-48f2-3fb09066d2b4
Terminal
$ VAULT_ADDR=http://vault.service.consul vault read postgresql/creds/readonly
Key              Value
lease_id         postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
lease_duration   1h0m0s
lease_renewable  true
password         f8a93086-b11d-10cd-8795-f537a10de712
username         token-9e57c18f-ac99-8e29-48f2-3fb09066d2b4
Terminal
$ psql -U postgres
psql (9.6.1)
Type "help" for help.

postgres=# \du
                 Role name                  |        Attributes        | Member of
--------------------------------------------+--------------------------+-----------
 postgres                                   | Superuser, Create ...    | {}
 token-9e57c18f-ac99-8e29-48f2-3fb09066d2b4 | Password valid until ... | {}
Terminal
$ vault renew postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
Key              Value
lease_id         postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
lease_duration   1h0m0s
lease_renewable  true
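The `lease_duration` above is what drives automation: a consumer renews well before the TTL runs out. A toy Python sketch of that scheduling arithmetic (the duration parser and the half-TTL rule are illustrative assumptions, not Vault's client library):

```python
import re

def lease_seconds(duration):
    """Parse a Vault-style duration string such as "1h0m0s" into seconds."""
    parts = re.findall(r"(\d+)([hms])", duration)
    scale = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * scale[unit] for n, unit in parts)

def next_renewal(duration, fraction=0.5):
    """Schedule renewal at a fraction of the lease TTL, leaving headroom
    for retries before the credential expires."""
    return lease_seconds(duration) * fraction

# A 1h lease gets renewed after 30 minutes.
print(next_renewal("1h0m0s"))  # 1800.0
```

Renewing at half the TTL is a common rule of thumb; the exact fraction is a policy choice, not something Vault dictates.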
Terminal
$ cat myapp-db-config.yml.ctmpl
{{- with secret "postgresql/creds/readonly" }}
username: "{{ .Data.username }}"
password: "{{ .Data.password }}"
database: "myapp"
{{- end }}
$ consul-template -template="myapp-db-config.yml.ctmpl:myapp-db-config.yml"
./myapp
<CTRL+C>
Received interrupt, cleaning up...
Terminal
$ vault write postgresql/roles/readonly \
    sql="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}';
         GRANT SELECT ON ALL TABLES IN SCHEMA public TO \"{{name}}\";"
Success! Data written to: postgresql/roles/readonly
Terminal
$ tee my-policy.vault | vault write postgresql/roles/readonly sql=-
CREATE ROLE "{{name}}" WITH LOGIN PASSWORD '{{password}}';
GRANT SELECT ON ALL TABLES IN SCHEMA public TO "{{name}}";
Success! Data written to: postgresql/roles/readonly
v2.0.0
Codified Build Environment
Reproducible Packaging
Shared Packaging Instructions
Developer-driven Build and Packaging Steps
compile.json 1/3
{
  "builders": [{
    "name": "myapp",
    "type": "docker",
    "image": "centos:6",
    "commit": true,
    "privileged": true
  }],
  "provisioners": [
    { "type": "file", "source": "myapp-linux", "destination": "/usr/local/bin/myapp" },
    { "type": "file", "source": "local-config-file.repo", "destination": "/usr/local/etc/myapp.conf" },
    { "type": "file", "source": "start_myapp.sh", "destination": "/sbin/start_myapp" },
compile.json 2/3
    {
      "type": "shell",
      "inline": [
        "/usr/bin/yum -y update",
        "/usr/bin/yum -y install util-linux-ng patch",
        "/bin/chmod 0600 /usr/local/etc/myapp.conf",
        "/bin/chmod 0744 /sbin/start_myapp /usr/local/bin/myapp-linux",
        "/usr/bin/curl -o /usr/local/etc/some-ca.crt https://host.example.com/pki/ca.crt"
      ]
    }
  ],
  "post-processors": [
    [
      { "type": "docker-tag", "repository": "myorg/myapp" },
      { "type": "docker-save", "path": "myapp.tar" },
compile.json 3/3
      { "type": "artifice", "files": ["myapp.tar"] },
      { "type": "compress", "output": "myapp.tar.gz", "compression_level": 9 },
      {
        "type": "atlas",
        "artifact": "myorg/myapp",
        "artifact_type": "archive",
        "metadata": { "created_at": "{{ timestamp }}" }
      }
    ]
  ]
}
compile.json 1/3
{
  "builders": [{
    "name": "myapp",
    "type": "docker",
    "image": "centos:6",
    "commit": true,
    "privileged": true
  }],
  "provisioners": [
    {
      "type": "shell",
      "scripts": [ "setup.sh", "prod-app-script.sh" ]
    }
v2.0.0
v1.1.2 v2.0.0
What's a Cluster Scheduler?
redis.job
job "redis" {
  datacenters = ["asia-east1", "asia-northeast1"]

  task "redis" {
    driver = "docker"

    config {
      image = "redis:latest"
    }

    resources {
      cpu    = 500 # MHz
      memory = 256 # MB

      network {
        mbits = 10
        port "redis" {}
      }
    }
  }
}
redis-service.job
job "redis" {
  datacenters = ["asia-east1", "asia-northeast1"]

  task "redis" {
    service {
      name = "redis" # redis.service.consul
      port = "redis"

      check {
        type     = "tcp"
        interval = "30s"
        timeout  = "2s"
      }
    }
    ...
    resources {
      network {
        mbits = 10
        port "redis" {
          static = "6379"
        }
      }
    }
Declare what you want to run
Scheduler determines where and manages how to run
v1.1.2
QTY: 1
Developer-centric Release Management
Reproducible Runtime Environments
Provider Agnostic Runtime
Native Hybrid-Cloud Consumption Model
Self-Healing Infrastructure
Service Discovery and Secure Introduction Support
myapp.job
job "myapp" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 1

    task "api" {
      driver = "docker"

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
      ...
myapp.job
config {
  load         = ["myapp.tar"]
  image        = "myorg/myapp"
  command      = "/sbin/start_myapp"
  args         = [ "-mode=api" ]
  network_mode = "host"
  pid_mode     = "host"
}

service {
  name = "${TASKGROUP}" # myapp.service.consul
  tags = [ "api" ]      # api.myapp.service.consul
  port = "api"

  check {
    type     = "http"
    path     = "/health.txt"
    interval = "5s"
    timeout  = "2s"
  }
}
}
myapp.job
task "web" {
  driver = "docker"

  config {
    load         = ["myapp.tar"]
    image        = "myorg/myapp"
    command      = "/sbin/start_myapp"
    args         = [ "-mode=web" ]
    network_mode = "host"
    pid_mode     = "host"
  }

  service {
    name = "${TASKGROUP}" # myapp.service.consul
    tags = [ "web" ]      # web.myapp.service.consul
    port = "web"

    check {
      type     = "http"
      path     = "/health.txt"
      interval = "5s"
      timeout  = "2s"
    }
  }
myapp.job
$ nomad plan myapp.job
$ nomad run -check-index 12515398 myapp.job
v1.1.2 v2.0.0
QTY: 0 QTY: 20
myapp-green.job
job "myapp-green" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 19

    task "api" {
      driver = "docker"

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp-v123.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
      ...
myapp-blue.job
job "myapp-blue" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 1

    task "api" {
      driver = "docker"

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp-v124.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
      ...
myapp-blue.job
$ nomad plan myapp-blue.job
+ Job: "myapp-blue"
+ Task Group: "myapp" (1 create)
  + Task: "api" (forces create)

Scheduler dry-run:
- All tasks successfully allocated.
100% Green 95% Green 70% Green 90% Blue 100% Blue
green: myapp-v123 blue: myapp-v124
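The traffic shift above is driven by nothing more exotic than the `count` fields in the two job files. A toy Python sketch of stepping a fixed-size fleet from green to blue (the function name and step percentages are illustrative, not a Nomad API):

```python
def counts_for_step(total, blue_percent):
    """Split a fixed fleet size between the green and blue jobs for one
    step of a blue/green rollout."""
    blue = round(total * blue_percent / 100)
    return {"green": total - blue, "blue": blue}

# Walking a 20-instance fleet from 100% green to 100% blue; the 5% step
# matches the count = 19 / count = 1 pair in the job files above.
for pct in (0, 5, 30, 90, 100):
    print(pct, counts_for_step(20, pct))
```

Each step is just an edit to the two `count` values followed by `nomad plan` and `nomad run`, so every intermediate state is reviewable before it happens.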
v1.1.2 v2.0.0
QTY: 0 QTY: 20
myapp-green.job
job "myapp-green" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 0

    task "api" {
      driver = "docker"

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp-v123.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
      ...
myapp-blue.job
job "myapp-blue" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 20

    task "api" {
      driver = "docker"

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp-v124.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
      ...
myapp-blue.job
job "myapp-blue" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  update {
    # Stagger updates every 120 seconds
    stagger = "120s"

    # Update a single task at a time
    max_parallel = 1
  }
  ...
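With `stagger` and `max_parallel` declared, the worst-case rollout time is easy to bound before running anything. A toy Python sketch of that arithmetic (an illustrative helper, not part of Nomad):

```python
import math

def rollout_minutes(instances, stagger_seconds, max_parallel):
    """Worst-case duration of a staggered rolling update: one batch of
    max_parallel tasks per stagger interval."""
    batches = math.ceil(instances / max_parallel)
    return batches * stagger_seconds / 60

# 20 instances, one at a time, every 120s: 40 minutes end to end.
print(rollout_minutes(20, 120, 1))  # 40.0
```

Raising `max_parallel` trades rollout speed against blast radius; the same formula shows 20 instances at `max_parallel = 5` finishing in 8 minutes.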
myapp-blue.job
$ nomad status myapp-blue
ID          = myapp-blue
Name        = myapp-blue
Type        = service
Priority    = 50
Datacenters = asia-east1
Status      = running
Periodic    = false

Summary
Task Group  Queued  Starting  Running  Failed  Complete  Lost
myapp       0       0         1       0       0         0

Allocations
ID        Eval ID   Node ID   Task Group  Desired  Status   Created At
24cfd201  81efc2fa  8d0331e9  myapp       run      running  11/11/16 21:03:19 AEDT
myapp-blue.job
$ nomad alloc-status --verbose a7365fe4
ID              = a7365fe4-cb28-a6e9-f3d4-f99e49c89776
Eval ID         = c3c9a1db-dbeb-8afa-0a83-4f1b8b5a03f5
Name            = myapp-blue.myapp[0]
Node ID         = 1f029d38-8d4b-a552-261f-e457b60f9b4b
Job ID          = myapp-blue
Client Status   = running
Created At      = 11/11/16 22:04:53 AEDT
Evaluated Nodes = 1
Filtered Nodes  = 0
Exhausted Nodes = 0
Allocation Time = 1.085001ms
Failures        = 0

==> Task Resources
Task: "api"
CPU  Memory MB  Disk MB  IOPS  Addresses
500  256        300      0     db: 127.0.0.1:38537

Task: "web"
CPU  Memory MB  Disk MB  IOPS  Addresses
Terminal
$ vault read postgresql/creds/readonly
Key              Value
lease_id         postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
lease_duration   1h0m0s
lease_renewable  true
password         f8a93086-b11d-10cd-8795-f537a10de712
username         token-9e57c18f-ac99-8e29-48f2-3fb09066d2b4
Terminal
$ env VAULT_TOKEN=.... vault read postgresql/creds/readonly
Key              Value
lease_id         postgresql/creds/readonly/5fec46f2-ab40-d9b8-61a2-887c7946eeb6
lease_duration   1h0m0s
lease_renewable  true
password         f8a93086-b11d-10cd-8795-f537a10de712
username         token-9e57c18f-ac99-8e29-48f2-3fb09066d2b4
myapp.job
job "myapp" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 1

    task "api" {
      driver = "docker"

      env {
        VAULT_TOKEN = "7ea47d76-a653-4d43-9507-dbeed3b3747f"
      }

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
myapp.job
job "myapp" {
  region      = "apac"
  datacenters = ["asia-east1", "asia-northeast1"]
  type        = "service"

  group "myapp" {
    count = 1

    task "api" {
      driver = "docker"

      vault {
        policies      = ["myapp", "api"]
        change_mode   = "signal"
        change_signal = "SIGUSR1"
      }

      artifact {
        source = "https://s3.amazonaws.com/myorg/myapp.tar.gz"
        options {
          archive = "tar.gz"
        }
      }
Containerized: Docker, Windows Server Containers, Rkt
Virtualized: Qemu / KVM, Hyper-V, Xen
Standalone: Java Jar, Static Binaries, C#
Key Value Store
HTTP API
Host & Service Level Health Checks
Datacenter Aware
Service Discovery: HTTP + DNS
[Diagram: Consul architecture. Within a datacenter, three SERVERs replicate state and answer client RPCs, while CLIENTs and SERVERs share membership via LAN gossip; server sets in different datacenters federate over WAN gossip.]
HEALTH CHECKING SERVICE polling DB 1, DB 2 ... DB N:
"Are you healthy?" "What about you?" "Yessir!" "Nah"
1,000'S OF REQUESTS
CONSUL with DB 1, DB 2 ... DB N:
"My status has changed"
10'S OF REQUESTS
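The difference between the two slides is polling cost versus event-driven cost. A toy Python sketch of the request arithmetic (the workload numbers are illustrative assumptions, not measurements):

```python
def polling_requests(clients, dbs, polls_per_minute):
    """Every client health-checks every database on its own timer."""
    return clients * dbs * polls_per_minute

def event_driven_requests(dbs, status_changes_per_minute):
    """Each node reports only when its status changes; the health-check
    result fans out through a shared catalog instead of per-client probes."""
    return dbs * status_changes_per_minute

# 100 clients x 10 DBs x 1 poll/min versus 10 DBs reporting changes.
print(polling_requests(100, 10, 1))    # 1000
print(event_driven_requests(10, 1))    # 10
```

Polling scales with clients times services; edge-triggered reporting scales with services times change rate, which is why the slide drops from thousands of requests to tens.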
v1.1.2
QTY: 0 QTY: 20
v2.0.0
Terminal
$ terraform plan -var-file=yow2016.tfvars -out=yow2016.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution plan.

Path: yow2016.tfplan

+ consul_key_prefix.myservice_config
    datacenter:         "<computed>"
    path_prefix:        "myservice/mycomponent/"
    subkeys.%:          "3"
    subkeys.appParam1:  "val1"
    subkeys.appParam2:  "var2"
    subkeys.dbHostname: "my-db.service.consul"

Plan: 1 to add, 0 to change, 0 to destroy.
Terminal
$ cat myservice-consul-kv-config.tf
variable "path_prefix" {
  default = "myservice/mycomponent"
}

resource "consul_key_prefix" "myservice_config" {
  path_prefix = "${var.path_prefix}/"

  subkeys = {
    "appParam1"  = "val1"
    "appParam2"  = "var2"
    "dbHostname" = "my-db.service.consul"
  }
}
Terminal
$ git diff myservice-consul-kv-config.tf
diff --git a/myservice-consul-kv-config.tf b/myservice-consul-kv-config.tf
index 76533d8..990d595 100644
--- a/myservice-consul-kv-config.tf
+++ b/myservice-consul-kv-config.tf
@@ -5,5 +5,7 @@ resource "consul_key_prefix" "myservice_config" {
     "appParam1"  = "val1"
     "appParam2"  = "var2"
     "dbHostname" = "my-db.service.consul"
+    "dbHostnameFollower" = "slave.my-db.query.consul"
+    "dbHostnameLeader"   = "master.my-db.query.consul"
   }
 }
Terminal
$ terraform plan -var-file=yow2016.tfvars -out=yow2016.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

consul_key_prefix.myservice_config: Refreshing state... (ID: myservice/mycomponent/)

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution plan.

Path: yow2016.tfplan

~ consul_key_prefix.myservice_config
    subkeys.%:                  "3" => "5"
    subkeys.dbHostnameFollower: "" => "slave.my-db.query.consul"
    subkeys.dbHostnameLeader:   "" => "master.my-db.query.consul"

Plan: 0 to add, 1 to change, 0 to destroy.
Terminal
$ terraform apply yow2016.tfplan
consul_key_prefix.myservice_config: Modifying...
  subkeys.%:                  "3" => "5"
  subkeys.dbHostnameFollower: "" => "slave.my-db.query.consul"
  subkeys.dbHostnameLeader:   "" => "master.my-db.query.consul"
consul_key_prefix.myservice_config: Modifications complete

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: terraform.tfstate
Terminal
$ terraform fmt
myservice-consul-kv-config.tf
$ cat myservice-consul-kv-config.tf
resource "consul_key_prefix" "myservice_config" {
  path_prefix = "${var.path_prefix}/"

  subkeys = {
    "appParam1"          = "val1"
    "appParam2"          = "var2"
    "dbHostname"         = "my-db.service.consul"
    "dbHostnameFollower" = "slave.my-db.query.consul"
    "dbHostnameLeader"   = "master.my-db.query.consul"
  }
Terminal
$ cat myservice-consul-kv-config.tf
variable "path_prefix" {
  default = "myservice/mycomponent"
}

variable "service_db_name" {}

resource "consul_key_prefix" "myservice_config" {
  path_prefix = "${var.path_prefix}/"

  subkeys = {
    "appParam1"          = "val1"
    "appParam2"          = "var2"
    "dbHostname"         = "${var.service_db_name}.service.consul"
    "dbHostnameFollower" = "slave.${var.service_db_name}.query.consul"
    "dbHostnameLeader"   = "master.${var.service_db_name}.query.consul"
  }
}
Terminal
$ cat yow2016.tfvars
"address"         = "127.0.0.1:8500"
"datacenter"      = "yow2016"
"token"           = ""
"service_db_name" = "my-db"
Terminal
$ terraform plan -var-file=yow2016.tfvars -out=yow2016.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

consul_key_prefix.myservice_config: Refreshing state... (ID: myservice/mycomponent/)

No changes. Infrastructure is up-to-date. This means that Terraform
could not detect any differences between your configuration and
the real physical resources that exist. As a result, Terraform
doesn't need to do anything.
Terminal
$ cat myservice-consul-kv-config.tf
variable "db_leader_tag" {
  default = "leader"
}

variable "db_follower_tag" {
  default = "follower"
}

# snip

resource "consul_key_prefix" "myservice_config" {
  path_prefix = "${var.path_prefix}/"

  subkeys = {
    "appParam1"          = "val1"
    "appParam2"          = "var2"
    "dbHostname"         = "${var.service_db_name}.service.consul"
    "dbHostnameFollower" = "${var.db_follower_tag}.${var.service_db_name}.query.consul"
    "dbHostnameLeader"   = "${var.db_leader_tag}.${var.service_db_name}.query.consul"
  }
}
Terminal
% git diff
diff --git a/myservice-consul-kv-config.tf b/myservice-consul-kv-config.tf
index 76590ef..e22eff0 100644
--- a/myservice-consul-kv-config.tf
+++ b/myservice-consul-kv-config.tf
@@ -1,9 +1,9 @@
 variable "db_leader_tag" {
-  default = "leader"
+  default = "rw"
 }

 variable "db_follower_tag" {
-  default = "follower"
+  default = "ro"
 }

 variable "path_prefix" {
Terminal
$ terraform plan -var-file=yow2016.tfvars -out=yow2016.tfplan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

consul_key_prefix.myservice_config: Refreshing state... (ID: myservice/mycomponent/)

Your plan was also saved to the path below. Call the "apply" subcommand
with this plan file and Terraform will exactly execute this execution plan.

Path: yow2016.tfplan

~ consul_key_prefix.myservice_config
    subkeys.dbHostnameFollower: "follower.my-db.query.consul" => "ro.my-db.query.consul"
    subkeys.dbHostnameLeader:   "leader.my-db.query.consul" => "rw.my-db.query.consul"

Plan: 0 to add, 1 to change, 0 to destroy.

$ git reset --hard
HEAD is now at 2ab88f9 Revise terminology from master/slave to leader/follower
- 1. Codify Everything
- 2. Pre-Plan outcomes at build-time
- 3. Create reproducible artifacts
- 4. Idempotent APIs and Tooling
- 5. Developer-Centric Operations
- 6. Make small, well-understood changes
- 7. Start where it makes sense for your organization
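Tenet 4 (idempotent APIs and tooling) is the property that makes the plan/apply workflow above safe to re-run. A toy Python sketch of diff-then-apply (the function names and dict-based state are illustrative, not any real tool's API):

```python
def plan(current, desired):
    """Compute the minimal change set, in the spirit of `terraform plan`."""
    return {k: v for k, v in desired.items() if current.get(k) != v}

def apply_changes(current, desired):
    """Apply only the computed diff; re-applying the same desired state
    is a no-op, which is what makes the operation idempotent."""
    changes = plan(current, desired)
    current.update(changes)
    return changes

state = {}
desired = {"appParam1": "val1", "dbHostname": "my-db.service.consul"}
print(apply_changes(state, desired))  # first run: both keys change
print(apply_changes(state, desired))  # second run: {} (no-op)
```

Because the change set is derived from the difference between current and desired state, the tool can be run repeatedly, by hand or by automation, without compounding effects.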
CONTACT INFO
THANK YOU! Questions?
@SeanChittenden
sean@hashicorp.com
INCREMENTALISM LIFE CYCLE CODIFY EXAMPLE 1 TENETS EXAMPLE 2

IMBUED TRUST
SMALL SUCCESS: VAULT
WIDE EYES: VAULT
HA BACKEND: VAULT
ROBUST VAULT!: VAULT
HA CONSUL + HA VAULT: VAULT
SECRETS AT LAST: SECRETS, VAULT
AUTOMATION: NOMAD, SECRETS, VAULT
FULL STACK VALUE: BUSINESS OBJECTIVE, MYAPP, NOMAD, SECRETS, VAULT
RISK MITIGATED: BUSINESS OBJECTIVE, MYAPP, NOMAD, SECRETS, VAULT
SECRETS CONSUL CLUSTER / APPLICATION CONSUL CLUSTER
SELF-HEALING SYSTEMS
TRIAGE DIAGNOSE TREAT PREVENT
SELF-ASSEMBLING SYSTEMS
FOUNDATION: "Easy"
PLATFORM: Hard
APP: Tough
ERROR BUDGET
Big Budget: Less Important
Smaller Budget: More Important
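An availability SLO implies the budget directly: budget = 1 - SLO. A toy Python sketch of the downtime arithmetic (the function name and the 30-day month are illustrative assumptions):

```python
def monthly_error_budget_minutes(slo_percent, days=30):
    """Allowed downtime per month implied by an availability SLO."""
    budget_fraction = 1 - slo_percent / 100
    return budget_fraction * days * 24 * 60

# A less important service at 99% gets a big budget;
# a critical one at 99.99% gets a small one.
print(round(monthly_error_budget_minutes(99.0), 1))   # 432.0
print(round(monthly_error_budget_minutes(99.99), 1))  # 4.3
```

The budget is what you are allowed to spend on risky changes; the more important the service, the smaller the budget, and the more incremental each change has to be.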
PLAN FOR KNOWN FAILURE
KNOWN KNOWNS
KNOWN UNKNOWNS
UNKNOWN UNKNOWNS
RISK MANAGEMENT
Insiders, OpenSSL, Application Vulnerabilities
AUTOMATION
GOOD, BAD, UGLY
EMBRACE AUTOMATION
Creative, Industrious, Lazy, Mental Drift
THINGS WE EMBRACE
- Self-Healing
- Self-Assembly
- Error Budgeting
- Failure Planning
- Risk Management
- Automation