CloudKitty Hands-on
1 / 56
2 / 56
Let’s meet your hosts!
Luka Peschke (Objectif Libre) Cloud Consultant / CloudKitty PTL Flavien Hardy (Objectif Libre) Cloud Consultant / CloudKitty contributor Christophe Sauthier (Objectif Libre) CEO of Objectif Libre / CloudKitty co-father
3 / 56
4 / 56
Today’s tools
5 / 56
Today’s tools
OpenStack measurement project
Ceilometer (part of the Telemetry project) collects the usage of all resources in an OpenStack cloud. It gathers metrics such as CPU and RAM usage, the amount of volume storage used…
6 / 56
Today’s tools → Ceilometer
Architecture
Ceilometer is composed of several parts. The main ones are:
- ceilometer-agent-notification (controller): reads AMQP messages from other components.
- ceilometer-agent-central (controller): polls some metrics directly.
- ceilometer-agent-compute (compute node): polls information related to instances.
7 / 56
Today’s tools
Timeseries database
Gnocchi was initially created as a part of the Telemetry project to address Ceilometer's storage and performance issues. It has been an independent project since March 2017. It stores and aggregates measures for metrics.
Gnocchi has a notion of resources. Each resource can have several associated metrics (for example, an instance resource has cpu, vcpus and memory metrics associated). Metadata is linked to resources, not to metrics.
Ceilometer publishes measures to Gnocchi (but it also supports other databases).
8 / 56
Today’s tools → Gnocchi
Architecture
Gnocchi is composed of the following parts:
- An HTTP REST API: used to push and retrieve data.
- A processing daemon (gnocchi-metricd): performs aggregation, metric cleanup…
- A statsd-compatible daemon (optional): receives data via TCP rather than the API.
9 / 56
Today’s tools
Rating component for OpenStack and co
CloudKitty was initially created to rate Ceilometer metrics. Today, CloudKitty can be used with Gnocchi, Monasca and Prometheus. Since the Rocky release, it can also be used outside of an OpenStack context.
10 / 56
Today’s tools → CloudKitty
Architecture
[Architecture diagram showing CloudKitty's components: API, Orchestrator, Processor, Fetcher, Collector, Rating module(s) and Storage, with a legend distinguishing modular components from fixed components]
11 / 56
Today's tools → CloudKitty
CloudKitty works the following way:
1. The fetcher retrieves the scopes from which information should be gathered (in our case, these will be projects for OpenStack and job IDs for Prometheus).
2. The collector collects measures from somewhere (Gnocchi and Prometheus in our case) for the given scopes.
3. The collected data is passed to CloudKitty's rating module(s); several rating modules can be used simultaneously. The modules apply user-defined rating rules to the data. We will use the HashMap module for today's workshop.
4. The rated data is pushed to CloudKitty's storage backend (InfluxDB in our case).
12 / 56
13 / 56
Installing our components
Slides: https://olib.re/denver-ck-handson (do not open them in your browser, or you'll experience copy/paste issues!)
Pick an IP: https://olib.re/denver-ck-handson-ip
14 / 56
Installing our components
Start by SSHing into your instance. The user is centos and the password is d3nv3r. Once you're connected, identify yourself:
$ source ~/admin.sh
15 / 56
Installing our components
Once you've populated the admin tenant, do the same for summit. When you're done, identify as an admin again:
$ for i in {1..3}; do
done
$ source ~/summit.sh
$ for i in {1..3}; do
done
$ source ~/admin.sh
16 / 56
Installing our components
First, let’s install the RDO Stein repositories.
$ sudo yum remove -y centos-release-ceph-luminous
$ sudo yum install -y centos-release-openstack-stein
17 / 56
Installing our components
Daemons:
$ sudo yum -y install openstack-cloudkitty-{api,processor}
Client:
$ sudo yum -y install python-cloudkittyclient
Dashboard:
$ sudo yum -y install openstack-cloudkitty-ui
$ sudo systemctl restart httpd
Note: restarting httpd can take a minute: on restart, the service collects and compresses static assets for Horizon dashboards.
18 / 56
Installing our components
Adding CloudKitty to Keystone
$ openstack service create --name cloudkitty rating
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | ef63cce9085443d5b95d9558b6176f90 |
| name    | cloudkitty                       |
| type    | rating                           |
+---------+----------------------------------+
19 / 56
Installing our components
Creating the cloudkitty user
$ openstack user create --project service --password password cloudkitty
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 8eba65aaa579413fb1dd7ff252caa0a4 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | faa6b264724946c7bec4fc64376687a4 |
| name                | cloudkitty                       |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --user cloudkitty --project service admin
20 / 56
Installing our components
Creating CloudKitty’s endpoints
$ for i in public internal admin; do
done
21 / 56
Installing our components
Creating CloudKitty’s database
$ mysql -uroot -popenstack << EOF
CREATE DATABASE cloudkitty;
GRANT ALL PRIVILEGES ON cloudkitty.* TO 'cloudkitty'@'localhost' \
  IDENTIFIED BY 'cloudkittydbpassword';
GRANT ALL PRIVILEGES ON cloudkitty.* TO 'cloudkitty'@'%' \
  IDENTIFIED BY 'cloudkittydbpassword';
EOF
22 / 56
Installing our components
Note: please edit these files before you copy them: change period = 3600 to period = 300.
Most options in these files (keystone_authtoken, transport_url, database…) are classical OpenStack options, but some are specific to CloudKitty. CloudKitty supports several storage drivers (SQLAlchemy in v1, InfluxDB in v2). For this session, we will use the InfluxDB storage driver.
$ sudo cp ~/handson_files/cloudkitty.conf ~/handson_files/cloudkitty-prometheus.conf /etc/cloudkitty/

[storage]
backend = influxdb
version = 2
23 / 56
Installing our components
Configuring CloudKitty (gnocchi)
For the processor collecting information from Gnocchi, we'll use the gnocchi fetcher. This fetcher discovers scopes from unique attributes on a given resource type. By default, it returns all unique project_id attributes from the generic resource type. The fetcher section contains general fetcher options (which fetcher to use), and fetcher_gnocchi contains options specific to the gnocchi fetcher. We use the gnocchi collector, so we specify authentication options in the collector_gnocchi section:
[fetcher]
backend = gnocchi

[fetcher_gnocchi]
auth_section = authinfos

[collect]
collector = gnocchi
period = 3600
metrics_conf = /etc/cloudkitty/metrics-gnocchi.yml
# DO NOT USE IN PRODUCTION
wait_periods = 0

[collector_gnocchi]
auth_section = authinfos
24 / 56
Installing our components
Configuring CloudKitty (prometheus)
For Prometheus, we will use the source fetcher, which reads a list of scopes to process from the configuration file:
[fetcher]
backend = source

[fetcher_source]
# Our sources are job IDs
sources = monitoring-1, monitoring-2

[collect]
collector = prometheus
period = 3600
metrics_conf = /etc/cloudkitty/metrics-prometheus.yml
# DO NOT USE IN PRODUCTION
wait_periods = 0
# The key at which the scope can be found in prometheus
scope_key = job
25 / 56
Installing our components
Copy the metric collection configuration files to /etc/cloudkitty:
$ sudo cp ~/handson_files/metrics-gnocchi.yml ~/handson_files/metrics-prometheus.yml /etc/cloudkitty/
26 / 56
Installing our components
Configuring CloudKitty’s metric collection (gnocchi)
Each metric has two lists of attributes: groupby, by which the measures will be grouped on collection and which will be indexed in the storage backend; and metadata, attributes you may want to base rating rules on. It is possible to specify an alt_name for a metric: the name to use when defining rating rules. Each collector requires some extra_args. For Gnocchi, we specify the aggregation method to use, along with the resource type associated with the given metric.
# metrics-gnocchi.yml
cpu:
  unit: instance
  alt_name: instance
  groupby:
  metadata:
  mutate: NUMBOOL
  extra_args:
    aggregation_method: mean
    resource_type: instance
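For comparison, an entry for another metric could look like the following. This is a hypothetical sketch: the metric name, groupby attributes, metadata and resource type below are illustrative, not taken from the workshop files.

```yaml
# hypothetical additional entry in metrics-gnocchi.yml
volume.size:
  unit: GiB
  groupby:
    - id
    - project_id
  metadata:
    - volume_type
  extra_args:
    aggregation_method: max
    resource_type: volume
```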
27 / 56
Installing our components
Configuring CloudKitty’s metric collection (prometheus)
The prometheus collector also requires an aggregation_method. Here, we want to convert bytes to mebibytes (MiB) on collection, so we multiply the collected qty by 1/(1024 * 1024).
# metrics-prometheus.yml
container_memory_usage_bytes:
  alt_name: container_memory
  unit: MiB
  factor: 1/1048576
  groupby:
  metadata:
  extra_args:
    aggregation_method: max
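As a sanity check on the factor above, you can reproduce the bytes-to-MiB conversion with plain shell arithmetic (the qty below is a made-up example, not a value from the workshop environment):

```shell
# Dividing by 1024 * 1024 is the same as multiplying by 1/1048576.
bytes=3221225472   # hypothetical qty collected from prometheus (3 GiB)
mib=$(( bytes / (1024 * 1024) ))
echo "${mib} MiB"  # prints "3072 MiB"
```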
28 / 56
Installing our components
Initializing CloudKitty’s storage
$ sudo -u cloudkitty cloudkitty-storage-init $ sudo -u cloudkitty cloudkitty-dbsync upgrade
29 / 56
Installing our components
Setup CloudKitty’s API
Copy CloudKitty's WSGI config file:
$ sudo mkdir -p /var/www/cloudkitty/
$ sudo cp /usr/lib/python2.7/site-packages/cloudkitty/api/app.wsgi /var/www/cloudkitty/app.wsgi
$ sudo chown -R cloudkitty:cloudkitty /var/www/cloudkitty/
$ sudo cp ~/handson_files/cloudkitty-api.conf /etc/httpd/conf.d/cloudkitty-api.conf
Restart httpd:
$ sudo systemctl restart httpd
Check that CloudKitty's API is up:
$ cloudkitty module list
+-----------+---------+----------+
| Module    | Enabled | Priority |
+-----------+---------+----------+
| noop      | True    | 1        |
| hashmap   | False   | 1        |
| pyscripts | False   | 1        |
+-----------+---------+----------+
30 / 56
31 / 56
Pricing policy
We will use CloudKitty’s Hashmap module to define our rating rules. Let’s create the rating rules for prometheus metrics with the CLI:
$ cloudkitty hashmap service create container_memory  # alt_name of the container_memory_usage_bytes metric
$ cloudkitty hashmap mapping create -s <CONTAINER_MEMORY_SERVICE_ID> 0.01
$ cloudkitty hashmap service create container_filesystem  # alt_name of the container_fs_usage_bytes metric
$ cloudkitty hashmap mapping create -s <CONTAINER_FILESYSTEM_SERVICE_ID> 0.2
32 / 56
Pricing policy
Log into horizon at http://YOUR_IP with admin / adminpass.
33 / 56
Pricing policy
Before its rules are taken into account, the HashMap module needs to be enabled (Admin -> Rating -> Rating Modules).
34 / 56
Pricing policy
Go to the Hashmap module’s configuration panel (Admin -> Rating -> Hashmap) and create a service called instance. Once it is created, click on it.
35 / 56
Pricing policy
Create a flavor_id field in the instance service. This field will be used to match flavors of running instances. Once it is created, click on it.
36 / 56
Pricing policy
In order to charge running instances based on their flavor, we need to get the IDs of the flavors we want to use.
$ openstack flavor list
+----+----------+-----+------+-----------+-------+-----------+
| ID | Name     | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+----------+-----+------+-----------+-------+-----------+
| 42 | m1.nano  |  64 |    1 |         0 |     1 | True      |
| 84 | m1.small | 512 |    1 |         0 |     1 | True      |
+----+----------+-----+------+-----------+-------+-----------+
37 / 56
Pricing policy
Create a mapping that matches the m1.nano flavor. The specified cost is per instance and collect period (3600 seconds). Note that we don’t specify a tenant: this means that the mapping will be applied to all tenants. Once you’re done, create a second mapping with a different price for the m1.small flavor.
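To get a feel for the numbers, here is a rough estimate of what a per-period mapping amounts to over a month. The 0.02 cost is a made-up example, not a value from this workshop:

```shell
# Hypothetical mapping cost of 0.02 per instance per collect period (3600 s).
# With one period per hour, a 30-day month contains 24 * 30 = 720 periods.
periods=$(( 24 * 30 ))
awk -v cost=0.02 -v n="$periods" 'BEGIN { printf "%.2f\n", cost * n }'  # prints 14.40
```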
38 / 56
Pricing policy
First of all, get the IDs of the existing images:
$ openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 84643723-448a-4d66-9964-6039be65e89d | cirros  | active |
| c8633f9c-77c6-40a9-a8fd-54b0e1915ac7 | windows | active |
+--------------------------------------+---------+--------+
39 / 56
Pricing policy
Go back to the fields section of the instance service, and create an image_ref field. Once it is created, click on it.
40 / 56
Pricing policy
You can create flat mappings based on image_ref the same way as you did for flavor_id. The specified cost will be charged per instance per collect period. First, create a mapping for the cirros image, using its id:
41 / 56
Pricing policy
Create the following mapping (use the ID of the windows image) in the image_ref field. Note that this mapping is a rate: we want to charge every instance running windows 15% extra, so we set the cost to 1.15. This means that the cost of every instance running windows will be multiplied by 1.15. (You can also apply discounts by using a rate lower than 1.)
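A quick illustration of how a rate mapping combines with flat costs (the 0.10 flat cost below is a made-up value, not from the workshop):

```shell
# Suppose an instance's flat mappings add up to 0.10 for one collect period.
# The windows rate mapping multiplies that cost by 1.15:
awk -v flat=0.10 -v rate=1.15 'BEGIN { printf "%.3f\n", flat * rate }'  # prints 0.115
```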
42 / 56
Pricing policy
We will now create the required rules to charge glance image creation. Create an image.size service, like you did for instance.
43 / 56
Pricing policy
We will now create a service-mapping for the image.size service. The cost of this mapping will be 1.0 per MiB (per collect period). Note that we only charge the summit project. Go to the Service Mappings tab in the image.size service.
44 / 56
Pricing policy
The cloudkitty-processor service uses the default cloudkitty.conf file. We'll run another processor collecting Prometheus metrics using tmux. By default, the processor starts rating at the beginning of the month, so you'll have to wait a few minutes for the processor to catch up with the current day.
$ sudo systemctl start cloudkitty-processor
$ tmux
# in tmux
$ sudo -u cloudkitty cloudkitty-processor --config-file /etc/cloudkitty/cloudkitty-prometheus.conf
# Press ctrl+b then d to detach from tmux
45 / 56
Pricing policy
You should now have rating information. You can look at Project -> Rating -> Rating/Reporting. Given that only the current hour has been rated and that we don't have much rating information, the charts may look a bit weird. Don't worry, they are shiny on a regular cloud!
46 / 56
Pricing policy
Go to Project -> Instance -> Instances and click to start an instance. Select the m1.nano flavor and fill out the necessary fields. Once you're done, go to the Price
47 / 56
Grafana is available at http://YOUR_IP:3000/. User and password are admin.
48 / 56
Creating some grafana dashboards
Once you're logged into Grafana, click on "New Dashboard":
49 / 56
Creating some grafana dashboards
Let’s start by creating a piechart showing the cost of the different metrics rated by cloudkitty, with no filters. You’ll have to install a plugin for this:
$ sudo grafana-cli plugins install grafana-piechart-panel $ sudo systemctl restart grafana-server
50 / 56
Creating some grafana dashboards
Click on the add panel button (in the white square) and then on piechart:
51 / 56
Creating some grafana dashboards → Create the piechart panel
Once the panel has been created, click on its title and on edit:
52 / 56
Creating some grafana dashboards → Create the piechart panel
Do the following steps:
53 / 56
Creating some grafana dashboards
We will create a chart showing the price of each metric. Click on add panel then on chart. Repeat steps 1 to 4 of the previous slide. Then, edit the groupby line to select fill(linear).
54 / 56
Creating some grafana dashboards
Creating new panels
Create new panels matching your needs: price per time and per project, for a given project…
Exporting your panel
If you wish to keep your panel, it can be exported as a JSON file to be re-imported later. Click on the share dashboard button next to the save icon. Then, go to the Export tab and click on Save to file.
55 / 56
Or later: cloudkitty@objectif-libre.com. We'll be happy to send you the latest release of these slides!
56 / 56