Octavia: OpenStack Load Balancing New Features Deep Dive
OpenStack Summit - Denver, May 2019
Adam Harwell - Train PTL - Verizon Media
Carlos Goncalves - Red Hat
Michael Johnson - Red Hat

What is Octavia?
Network Load Balancing as a Service for OpenStack. Octavia provides access to network load balancer services, in a technology-agnostic manner, for OpenStack. It delivers a highly available load balancer that scales with your compute environment. Octavia became an official OpenStack project during the Ocata series, and has appeared among the projects users were most "looking forward to using" in previous OpenStack user surveys.
Backup members, sometimes called "sorry servers", are pool members that are only used when all of the non-backup members of a pool are down. Instead of receiving an HTTP 503 error because no member servers are available in the pool, users are served content from the backup member servers. These servers typically host static content such as a "Site is down for maintenance" page, and they may not even be running in the same cloud.
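In the Octavia v2 API, a backup member is an ordinary pool member created with the backup attribute set. A minimal sketch of the request body; the name, address, and port are illustrative placeholders:

```python
# Request body for POST /v2/lbaas/pools/{pool_id}/members.
# The name, address, and port are illustrative placeholders.
backup_member = {
    "member": {
        "name": "sorry-server-1",
        "address": "192.0.2.50",  # may even live outside this cloud
        "protocol_port": 80,
        "backup": True,           # served only when all non-backup members are down
    }
}
```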
User-configurable timeouts were a highly requested feature. In Rocky we added four listener timeout settings, all in milliseconds: timeout_client_data, timeout_member_connect, timeout_member_data, and timeout_tcp_inspect.
Usage examples: long-lived connections, performance optimization.
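A sketch of a listener create body using the Rocky timeout fields; the values shown are illustrative for a long-lived-connection workload, not recommendations:

```python
# Request body for POST /v2/lbaas/listeners. All timeouts are in
# milliseconds; the values and the loadbalancer_id are placeholders.
listener = {
    "listener": {
        "name": "long-lived-connections",
        "protocol": "HTTP",
        "protocol_port": 80,
        "loadbalancer_id": "LB_UUID",
        "timeout_client_data": 3600000,  # frontend client inactivity timeout
        "timeout_member_connect": 5000,  # backend member connect timeout
        "timeout_member_data": 3600000,  # backend member inactivity timeout
        "timeout_tcp_inspect": 0,        # wait time for TCP content inspection
    }
}
```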
Provider drivers allow users to select an alternate backend load balancing engine. Octavia comes with a reference driver, the amphora driver, but operators can load additional drivers or even replace the reference driver.
Service (DDS).
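Provider drivers are enabled by the operator in octavia.conf; a sketch, assuming an OVN provider driver is installed (the driver names and descriptions here are illustrative and vary by deployment):

```ini
[api_settings]
# Map of "name: description" entries that users may select with --provider.
enabled_provider_drivers = amphora: The Octavia Amphora driver,
                           ovn: The OVN provider driver
default_provider_driver = amphora
```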
With the new UDP protocol support, a UDP-CONNECT health monitor checks members by sending a UDP datagram: if the load balancer receives an "ICMP unreachable" response, the member is considered down. If no "ICMP unreachable" is received, the member is considered up.
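A sketch of a UDP-CONNECT health monitor create body; the pool_id and intervals are illustrative placeholders:

```python
# Request body for POST /v2/lbaas/healthmonitors. The pool_id and
# the intervals (in seconds) are illustrative placeholders.
monitor = {
    "healthmonitor": {
        "name": "udp-check",
        "pool_id": "POOL_UUID",
        "type": "UDP-CONNECT",  # down on "ICMP unreachable", up otherwise
        "delay": 5,             # seconds between health checks
        "timeout": 3,           # seconds to wait for a reply
        "max_retries": 2,       # successful checks before a member is up
    }
}
```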
Flavors allow administrators to define “flavors” of load balancers that users can select from at load balancer creation. Each provider driver exposes a set of “capabilities” that administrators can configure in flavors.
Administrators can build flavor profiles with the desired provider capabilities settings. By default, flavor profiles are only visible to administrators. Usage example: abstract users from providers, offer different SLAs
Finally, the administrator creates the user-visible flavor that references the flavor profile.
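The two admin steps above can be sketched as Octavia v2 API bodies; loadbalancer_topology is a capability of the amphora driver, and the names and UUID are placeholders:

```python
import json

# Step 1: POST /v2/lbaas/flavorprofiles - bind provider capability settings.
flavorprofile = {
    "flavorprofile": {
        "name": "active-standby",
        "provider_name": "amphora",
        # flavor_data is a JSON string of provider capability settings
        "flavor_data": json.dumps({"loadbalancer_topology": "ACTIVE_STANDBY"}),
    }
}

# Step 2: POST /v2/lbaas/flavors - the user-visible flavor.
flavor = {
    "flavor": {
        "name": "High Availability",
        "description": "Active/standby amphora pair",
        "flavor_profile_id": "FLAVORPROFILE_UUID",  # placeholder
        "enabled": True,
    }
}
```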
TERMINATED_HTTPS listeners can now be configured for TLS client authentication. When an HTTPS connection is requested on the VIP, the load balancer can request a client certificate and validate it against a Certificate Authority (CA) certificate and Certificate Revocation List (CRL). When TLS client authentication is enabled, the load balancer can now insert new headers into the HTTP flow, such as X-SSL-Client-Verify, X-SSL-Client-Has-Cert, X-SSL-Client-DN, X-SSL-Client-CN, X-SSL-Issuer, X-SSL-Client-SHA1, X-SSL-Client-Not-Before, and X-SSL-Client-Not-After.
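A sketch of the listener settings involved; the Barbican secret references and UUID are placeholders:

```python
# TERMINATED_HTTPS listener body with TLS client authentication
# (POST /v2/lbaas/listeners). Container refs are placeholder secret URLs.
listener = {
    "listener": {
        "name": "client-auth",
        "protocol": "TERMINATED_HTTPS",
        "protocol_port": 443,
        "loadbalancer_id": "LB_UUID",
        "default_tls_container_ref": "https://barbican/v1/secrets/SERVER_CERT",
        "client_authentication": "MANDATORY",  # or NONE / OPTIONAL
        "client_ca_tls_container_ref": "https://barbican/v1/secrets/CLIENT_CA",
        "client_crl_container_ref": "https://barbican/v1/secrets/CLIENT_CRL",
        "insert_headers": {
            "X-SSL-Client-Verify": "true",  # pass the verification result to members
            "X-SSL-Client-DN": "true",      # pass the client certificate DN
        },
    }
}
```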
We have also added new L7 rule types for TLS client authentication: SSL_CONN_HAS_CERT, SSL_VERIFY_RESULT, and SSL_DN_FIELD.
Pools can now be configured to establish TLS connections to member servers. The TLS certificate presented by the member server can optionally be validated against a Certificate Authority (CA) and Certificate Revocation List (CRL). Users can also, optionally, provide a certificate that the load balancer will present to the member servers for TLS client authentication. All of the TLS certificates and CRLs are stored in a Castellan-compatible key store such as OpenStack Barbican.
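A sketch of a pool create body for backend re-encryption; all of the container references are placeholder Barbican secret URLs:

```python
# Request body for POST /v2/lbaas/pools with TLS to the member servers.
pool = {
    "pool": {
        "name": "re-encrypt",
        "protocol": "HTTP",
        "lb_algorithm": "ROUND_ROBIN",
        "listener_id": "LISTENER_UUID",
        "tls_enabled": True,  # establish TLS connections to the members
        "ca_tls_container_ref": "https://barbican/v1/secrets/MEMBER_CA",    # optional validation
        "crl_container_ref": "https://barbican/v1/secrets/MEMBER_CRL",      # optional revocation check
        "tls_container_ref": "https://barbican/v1/secrets/LB_CLIENT_CERT",  # optional client cert
    }
}
```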
Object tags are arbitrary strings that can be associated with the load balancer objects. These tags can then be used to filter results returned by the API. Octavia supports the following query filter types: tags, tags-any, not-tags, and not-tags-any.
For example, if you would like to get the list of load balancers with both the "red" and "blue" tags, you would request:
GET /v2/lbaas/loadbalancers?tags=red,blue
To get a list of load balancers that have the "red" or "blue" tag, you would request:
GET /v2/lbaas/loadbalancers?tags-any=red,blue
Usage examples: find all resources with a certain tag and run actions on them, track resources created by Heat.
The new L7 policy action REDIRECT_PREFIX allows users to redirect requests to an alternate protocol and/or host while keeping the URL path intact. For example, you might want to redirect users to a specific secure webserver:
http://www.octavia.cloud/octavia/latest
can be redirected to:
https://docs.openstack.org/octavia/latest
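A redirect like the one above can be sketched as an L7 policy body; the listener_id is a placeholder:

```python
# Request body for POST /v2/lbaas/l7policies. REDIRECT_PREFIX keeps the
# request path and swaps in the given scheme://host prefix.
l7policy = {
    "l7policy": {
        "name": "docs-redirect",
        "listener_id": "LISTENER_UUID",
        "action": "REDIRECT_PREFIX",
        "redirect_prefix": "https://docs.openstack.org",
    }
}
```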