SLIDE 1

CloudKitty Hands-on

1 / 56

SLIDE 2

Let’s meet your hosts!

2 / 56

SLIDE 3

Let’s meet your hosts!

Today’s speakers

Luka Peschke (Objectif Libre) Cloud Consultant / CloudKitty PTL Flavien Hardy (Objectif Libre) Cloud Consultant / CloudKitty contributor Christophe Sauthier (Objectif Libre) CEO of Objectif Libre / CloudKitty co-father

3 / 56

SLIDE 4

Today’s tools

4 / 56

SLIDE 5

Today’s tools

5 / 56

SLIDE 6

Today’s tools

Ceilometer

OpenStack measurement project

Ceilometer (part of the Telemetry project) collects the usage of all resources in an OpenStack cloud. It stores metrics such as CPU and RAM usage, the amount of volume storage used…

6 / 56

SLIDE 7

Today’s tools → Ceilometer

Architecture

Ceilometer is composed of several parts. The main ones are:

  • ceilometer-agent-notification (controller): reads AMQP messages from other components.
  • ceilometer-agent-central (controller): polls some metrics directly.
  • ceilometer-agent-compute (compute node): polls information related to instances.

7 / 56

SLIDE 8

Today’s tools

Gnocchi

Timeseries database

Gnocchi was initially created as part of the Telemetry project to address Ceilometer’s storage and performance issues. It has been an independent project since March 2017. It stores and aggregates measures for metrics.

Gnocchi has a notion of resources. Each resource can have several associated metrics (for example, an instance resource has cpu, vcpus and memory metrics). Metadata is attached to resources, not to metrics.

Ceilometer publishes measures to Gnocchi (but it also supports other databases).
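As a mental model, a Gnocchi resource with its metrics can be sketched as plain data. This is an illustration only — the values and structure below are made up, not Gnocchi’s actual API or schema:

```python
# A hypothetical instance resource: metrics hang off the resource,
# and metadata is attached to the resource itself, never to a metric.
resource = {
    "type": "instance",
    "id": "instance-0001",  # hypothetical resource id
    "metadata": {"flavor_name": "m1.nano", "vcpus": 1},
    "metrics": {
        # each metric stores timestamped measures
        "cpu": [("2019-04-29T10:00:00", 12.5)],
        "memory": [("2019-04-29T10:00:00", 64.0)],
    },
}

print(sorted(resource["metrics"]))  # → ['cpu', 'memory']
```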

8 / 56

SLIDE 9

Today’s tools → Gnocchi

Architecture

Gnocchi is composed of the following parts:

  • An HTTP REST API: used to push and retrieve data.
  • A processing daemon (gnocchi-metricd): performs aggregation, metric cleanup…
  • A statsd-compatible daemon (optional): receives data via TCP rather than the API.

9 / 56

SLIDE 10

Today’s tools

CloudKitty

Rating component for OpenStack and co

CloudKitty was initially created to allow rating of Ceilometer metrics. Today, CloudKitty can be used with Gnocchi, Monasca and Prometheus. Since the Rocky release, it can also be used outside of an OpenStack context.

10 / 56

SLIDE 11

Today’s tools → CloudKitty

Architecture

[Architecture diagram — components: API, Storage, Processor (Orchestrator, Fetcher, Collector, Rating module(s)); legend distinguishes modular from fixed components]

11 / 56

SLIDE 12

Today’s tools → CloudKitty

CloudKitty works the following way:

  • The fetcher retrieves scopes from which information should be gathered (in our case, these will be projects for OpenStack and job IDs for Prometheus).
  • The collector collects measures from a given source (Gnocchi and Prometheus in our case) for those scopes.
  • The collected data is passed to CloudKitty’s rating module(s) (several rating modules can be used simultaneously). The modules apply user-defined rating rules to the data. We will use the HashMap module for today’s workshop.
  • The rated data is pushed to CloudKitty’s storage backend (InfluxDB in our case).
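The steps above can be sketched as a toy loop. This illustrates the data flow only — the function names and prices are made up, not CloudKitty’s internal API:

```python
# Toy version of CloudKitty's processing loop (hypothetical names throughout).

def fetch_scopes():
    # Fetcher: returns the scopes to rate (projects, prometheus job IDs...).
    return ["project-a", "project-b"]

def collect(scope):
    # Collector: returns measures for one scope and one collect period.
    return [{"metric": "instance", "qty": 2, "flavor_id": "42"}]

def rate(frames, price_per_unit):
    # Rating module: attaches a price to each collected frame.
    for frame in frames:
        frame["price"] = frame["qty"] * price_per_unit
    return frames

backend = {}  # stand-in for the storage backend (InfluxDB in this workshop)
for scope in fetch_scopes():
    backend.setdefault(scope, []).extend(rate(collect(scope), price_per_unit=0.05))

print(backend["project-a"][0]["price"])  # → 0.1
```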

12 / 56

SLIDE 13

Installing our components

13 / 56

SLIDE 14

Installing our components

Get your browser

Slides: https://olib.re/denver-ck-handson
Pick an IP: https://olib.re/denver-ck-handson-ip

Do not open the slides in your browser’s built-in viewer or you’ll experience copy/paste issues!

14 / 56

SLIDE 15

Installing our components

SSH into your instance

Start by SSHing into your instance. The user is centos and the password is d3nv3r. Once you’re connected, identify yourself:

$ source ~/admin.sh

15 / 56

SLIDE 16

Installing our components

Create some data

Create some resources in the admin tenant. Once you’ve populated it, do the same for summit. When you’re done, identify as an admin again:

$ for i in {1..3}; do
    openstack server create --image cirros --flavor m1.nano instance${i}
    openstack server create --image windows --flavor m1.nano instance-win${i}
    openstack image create --file ~/handson_files/cirros.img image${i}
  done
$ source ~/summit.sh
$ for i in {1..3}; do
    openstack server create --image cirros --flavor m1.nano instance${i}
    openstack server create --image windows --flavor m1.nano instance-win${i}
    openstack image create --file ~/handson_files/cirros.img image${i}
  done
$ source ~/admin.sh

16 / 56

SLIDE 17

Installing our components

Getting the Stein RDO repositories

First, let’s install the RDO Stein repositories.

$ sudo yum remove -y centos-release-ceph-luminous
$ sudo yum install -y centos-release-openstack-stein

17 / 56

SLIDE 18

Installing our components

Installing CloudKitty, its Horizon dashboard & client

Daemons:

$ sudo yum -y install openstack-cloudkitty-{api,processor}

Client:

$ sudo yum -y install python-cloudkittyclient

Dashboard:

$ sudo yum -y install openstack-cloudkitty-ui
$ sudo systemctl restart httpd

Note: restarting httpd can take a minute: on restart, the service collects and compresses static assets for horizon dashboards.

18 / 56

SLIDE 19

Installing our components

Adding CloudKitty to Keystone

$ openstack service create --name cloudkitty rating
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | ef63cce9085443d5b95d9558b6176f90 |
| name    | cloudkitty                       |
| type    | rating                           |
+---------+----------------------------------+

19 / 56

SLIDE 20

Installing our components

Creating the cloudkitty user

$ openstack user create --project service --password password cloudkitty
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| default_project_id  | 8eba65aaa579413fb1dd7ff252caa0a4 |
| domain_id           | default                          |
| enabled             | True                             |
| id                  | faa6b264724946c7bec4fc64376687a4 |
| name                | cloudkitty                       |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
$ openstack role add --user cloudkitty --project service admin

20 / 56

SLIDE 21

Installing our components

Creating CloudKitty’s endpoints

$ for i in public internal admin; do

  • penstack endpoint create --region RegionOne rating $i http://127.0.0.1:8889/

done

21 / 56

SLIDE 22

Installing our components

Creating CloudKitty’s database

$ mysql -uroot -popenstack << EOF
CREATE DATABASE cloudkitty;
GRANT ALL PRIVILEGES ON cloudkitty.* TO 'cloudkitty'@'localhost' \
  IDENTIFIED BY 'cloudkittydbpassword';
GRANT ALL PRIVILEGES ON cloudkitty.* TO 'cloudkitty'@'%' \
  IDENTIFIED BY 'cloudkittydbpassword';
EOF

22 / 56

SLIDE 23

Installing our components

Configuring CloudKitty

Note: please edit these files before you copy them: change period = 3600 to period = 300. Most options in these files (keystone_authtoken, transport_url, database…) are classical OpenStack options. However, some of them are specific to CloudKitty. CloudKitty supports several storage drivers (SQLAlchemy in v1, InfluxDB in v2). For this session, we will use the InfluxDB storage driver.

$ sudo cp ~/handson_files/cloudkitty.conf ~/handson_files/cloudkitty-prometheus.conf /etc/cloudkitty/

[storage]
backend=influxdb
version=2

23 / 56

SLIDE 24

Installing our components

Configuring CloudKitty (gnocchi)

For the processor collecting information from gnocchi, we’ll use the gnocchi fetcher. This fetcher discovers scopes from unique attributes on a given resource type. By default, it returns all unique project_id attributes from the generic resource type. The fetcher section contains general fetcher options (which fetcher to use), and fetcher_gnocchi contains options specific to the gnocchi fetcher. We use the gnocchi collector, so we specify authentication options in the collector_gnocchi section.

[fetcher]
backend = gnocchi

[fetcher_gnocchi]
auth_section=authinfos

[collect]
collector = gnocchi
period = 3600
metrics_conf = /etc/cloudkitty/metrics-gnocchi.yml
# DO NOT USE IN PRODUCTION
wait_periods = 0

[collector_gnocchi]
auth_section=authinfos

24 / 56

SLIDE 25

Installing our components

Configuring CloudKitty (prometheus)

For prometheus, we will use the source fetcher, which reads a list of scopes to process from the configuration file:

[fetcher]
backend = source

[fetcher_source]
# Our sources are job IDs
sources = monitoring-1, monitoring-2

[collect]
collector = prometheus
period = 3600
metrics_conf = /etc/cloudkitty/metrics-prometheus.yml
# DO NOT USE IN PRODUCTION
wait_periods = 0
# The key at which the scope can be found in prometheus
scope_key = job

25 / 56

SLIDE 26

Installing our components

Copy the metric collection configuration files to /etc/cloudkitty:

$ sudo cp ~/handson_files/metrics-gnocchi.yml ~/handson_files/metrics-prometheus.yml /etc/cloudkitty/

26 / 56

SLIDE 27

Installing our components

Configuring CloudKitty’s metric collection (gnocchi)

Each metric has two lists of attributes: groupby, by which the metrics will be grouped on collection and which will be indexed in the storage backend; and metadata, attributes on which you may want to base rating rules. It is possible to specify an alt_name for metrics, which is the name to use when defining rating rules. Each collector requires some extra_args. For gnocchi, we specify the aggregation method to use, along with the resource type associated with the given metric.

# metrics-gnocchi.yml
cpu:
  unit: instance
  alt_name: instance
  groupby:
    - id
    - user_id
    - project_id
  metadata:
    - flavor_name
    - flavor_id
    - vcpus
  mutate: NUMBOOL
  extra_args:
    aggregation_method: mean
    resource_type: instance

27 / 56

SLIDE 28

Installing our components

Configuring CloudKitty’s metric collection (prometheus)

The prometheus collector also requires an aggregation_method. Here, we want to convert bytes to Mebibytes on collection, so we multiply the collected qty by 1/(1024 * 1024).

# metrics-prometheus.yml
container_memory_usage_bytes:
  alt_name: container_memory
  unit: MiB
  factor: 1/1048576
  groupby:
    - job
    - id
  metadata:
    - name
    - image
  extra_args:
    aggregation_method: max
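A quick sanity check of that factor in plain Python (the container size below is made up):

```python
# CloudKitty multiplies the collected qty by the configured factor,
# turning bytes into MiB: 1/1048576 == 1/(1024 * 1024).
factor = 1 / (1024 * 1024)

qty_bytes = 536870912  # hypothetical container using 512 MiB
qty_mib = qty_bytes * factor
print(qty_mib)  # → 512.0
```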

28 / 56

SLIDE 29

Installing our components

Initializing CloudKitty’s storage

$ sudo -u cloudkitty cloudkitty-storage-init
$ sudo -u cloudkitty cloudkitty-dbsync upgrade

29 / 56

SLIDE 30

Installing our components

Setup CloudKitty’s API

Copy CloudKitty’s WSGI config file:

$ sudo mkdir -p /var/www/cloudkitty/
$ sudo cp /usr/lib/python2.7/site-packages/cloudkitty/api/app.wsgi /var/www/cloudkitty/app.wsgi
$ sudo chown -R cloudkitty:cloudkitty /var/www/cloudkitty/
$ sudo cp ~/handson_files/cloudkitty-api.conf /etc/httpd/conf.d/cloudkitty-api.conf

Restart httpd:

$ sudo systemctl restart httpd

Check that CloudKitty’s API is up:

$ cloudkitty module list
+-----------+---------+----------+
| Module    | Enabled | Priority |
+-----------+---------+----------+
| noop      | True    | 1        |
| hashmap   | False   | 1        |
| pyscripts | False   | 1        |
+-----------+---------+----------+

30 / 56

SLIDE 31

Pricing policy

31 / 56

SLIDE 32

Pricing policy

With the CLI

We will use CloudKitty’s Hashmap module to define our rating rules. Let’s create the rating rules for prometheus metrics with the CLI:

$ cloudkitty hashmap service create container_memory  # alt_name of the container_memory_usage_bytes metric
$ cloudkitty hashmap mapping create -s <CONTAINER_MEMORY_SERVICE_ID> 0.01
$ cloudkitty hashmap service create container_filesystem  # alt_name of the container_fs_usage_bytes metric
$ cloudkitty hashmap mapping create -s <CONTAINER_FILESYSTEM_SERVICE_ID> 0.2

32 / 56

SLIDE 33

Pricing policy

Use horizon to define rating rules

Log into horizon at http://YOUR_IP with admin / adminpass.

33 / 56

SLIDE 34

Pricing policy

Enable the Hashmap module

In order to be taken into account, the Hashmap module needs to be enabled: (Admin -> Rating -> Rating Modules)

34 / 56

SLIDE 35

Pricing policy

Create the instance service

Go to the Hashmap module’s configuration panel (Admin -> Rating -> Hashmap) and create a service called instance. Once it is created, click on it.

35 / 56

SLIDE 36

Pricing policy

Create a flavor_id field

Create a flavor_id field in the instance service. This field will be used to match flavors of running instances. Once it is created, click on it.

36 / 56

SLIDE 37

Pricing policy

Add Some Field Mappings [1/2]

In order to charge running instances based on their flavor, we need to get the IDs of the flavors we want to use.

$ openstack flavor list
+----+----------+-----+------+-----------+-------+-----------+
| ID | Name     | RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+----------+-----+------+-----------+-------+-----------+
| 42 | m1.nano  |  64 |    1 |         0 |     1 | True      |
| 84 | m1.small | 512 |    1 |         0 |     1 | True      |
+----+----------+-----+------+-----------+-------+-----------+

37 / 56

SLIDE 38

Pricing policy

Add Some Field Mappings [2/2]

Create a mapping that matches the m1.nano flavor. The specified cost is per instance and collect period (3600 seconds). Note that we don’t specify a tenant: this means that the mapping will be applied to all tenants. Once you’re done, create a second mapping with a different price for the m1.small flavor.
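To get a feel for what a flat mapping amounts to over time, here is a back-of-the-envelope calculation (the price is a made-up example, not a recommendation):

```python
# A flat mapping charges its cost once per matched instance per collect period.
period_seconds = 3600    # the collect period mentioned on this slide
cost_per_period = 0.05   # hypothetical price for one m1.nano instance

periods_per_month = 30 * 24 * 3600 // period_seconds  # 720 periods in 30 days
monthly_cost = periods_per_month * cost_per_period
print(monthly_cost)  # → 36.0
```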

38 / 56

SLIDE 39

Pricing policy

Apply a per-image price [1/3]

First of all, get the ids of the existing images:

$ openstack image list
+--------------------------------------+---------+--------+
| ID                                   | Name    | Status |
+--------------------------------------+---------+--------+
| 84643723-448a-4d66-9964-6039be65e89d | cirros  | active |
| c8633f9c-77c6-40a9-a8fd-54b0e1915ac7 | windows | active |
+--------------------------------------+---------+--------+

39 / 56

SLIDE 40

Pricing policy

Apply a per-image price [2/3]

Go back to the fields section of the instance service, and create an image_ref field. Once it is created, click on it.

40 / 56

SLIDE 41

Pricing policy

Apply a per-image price [3/3]

You can create flat mappings based on image_ref the same way as you did for flavor_id. The specified cost will be charged per instance per collect period. First, create a mapping for the cirros image, using its id:

41 / 56

SLIDE 42

Pricing policy

Charge extra for instances running windows

Create the following mapping (use the id of the windows image) in the image_ref field. Note that this mapping is a rate: we want to charge every instance running windows 15% extra, so we set the cost to 1.15. This means that the cost of every instance running windows will be multiplied by 1.15. (You can also apply discounts by using a cost lower than 1.)
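The difference between flat and rate mappings can be checked with simple arithmetic. This mirrors the HashMap logic rather than calling it, and the flat price below is hypothetical (1.15 is the rate from this slide):

```python
# Flat mappings add a fixed cost; rate mappings multiply the accumulated cost.
flavor_flat = 0.05   # hypothetical flat mapping on the m1.nano flavor_id
windows_rate = 1.15  # the 15% surcharge defined on the windows image_ref

cost_linux = flavor_flat                   # no rate mapping matches
cost_windows = flavor_flat * windows_rate  # the rate multiplies the cost

print(round(cost_windows, 4))  # → 0.0575
```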

42 / 56

SLIDE 43

Pricing policy

Create the image.size service

We will now create the required rules to charge glance image creation. Create an image.size service, like you did for instance.

43 / 56

SLIDE 44

Pricing policy

Create a service-mapping for image

We will now create a service-mapping for the image.size service. The cost of this mapping will be 1.0 per MiB (per collect period). Note that we only charge the summit project. Go to the Service Mappings tab in the image.size service.
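As a rough illustration of what this service mapping produces (the image size is made up; 300 s is the period value configured earlier):

```python
# The image.size service mapping charges per MiB and per collect period.
cost_per_mib = 1.0      # the cost defined on this slide
image_size_mib = 10.0   # hypothetical image size

cost_per_period = image_size_mib * cost_per_mib
periods_per_day = 86400 // 300  # 288 collect periods per day at period = 300
cost_per_day = cost_per_period * periods_per_day
print(cost_per_day)  # → 2880.0
```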

44 / 56

SLIDE 45

Pricing policy

Start CloudKitty’s processor

The cloudkitty-processor service uses the default cloudkitty.conf file. We’ll run another processor collecting prometheus metrics using tmux. By default, the processor starts at the beginning of the month, so you’ll have to wait a few minutes for the processor to catch up with the current day.

$ sudo systemctl start cloudkitty-processor
$ tmux
# in tmux
$ sudo -u cloudkitty cloudkitty-processor --config-file /etc/cloudkitty/cloudkitty-prometheus.conf
# Press ctrl+b then d to detach from tmux

45 / 56

SLIDE 46

Pricing policy

Rating information

You should now have rating information. You can look at Project -> Rating -> Rating/Reporting. Given that only the current hour has been rated and that we don’t have much rating information, the charts may look a bit weird. Don’t worry, they are shiny on a regular cloud!

46 / 56

SLIDE 47

Pricing policy

Predictive Pricing

Go to Project -> Instance -> Instances and click on start an instance. Select the m1.nano flavor and fill out the necessary fields. Once you’re done, go to the Price tab. You should have something like this:

47 / 56

SLIDE 48

Creating some grafana dashboards

Grafana is available at http://YOUR_IP:3000/. User and password are admin.

48 / 56

SLIDE 49

Creating some grafana dashboards

Create a dashboard for cloudkitty

Once you’re logged into Grafana, click on "New Dashboard":

49 / 56

SLIDE 50

Creating some grafana dashboards

Rating OpenStack

Let’s start by creating a piechart showing the cost of the different metrics rated by cloudkitty, with no filters. You’ll have to install a plugin for this:

$ sudo grafana-cli plugins install grafana-piechart-panel
$ sudo systemctl restart grafana-server

50 / 56

SLIDE 51

Creating some grafana dashboards

Create the piechart panel

Click on the add panel button (in the white square) and then on piechart:

51 / 56

SLIDE 52

Creating some grafana dashboards → Create the piechart panel

Once the panel has been created, click on its title and on edit:

52 / 56

SLIDE 53

Creating some grafana dashboards → Create the piechart panel

Do the following steps:

1. Select the InfluxDB datasource.
2. Edit the query to select from the dataframes measurement.
3. Select the price field.
4. Remove mean() and add sum() (click on + -> aggregations -> sum).
5. Group by type.
6. (Optional) Play with aesthetics in the options tab.
7. Go back to the dashboard by clicking on the blue arrow.

53 / 56

SLIDE 54

Creating some grafana dashboards

Create a chart panel

We will create a chart showing the price of each metric. Click on add panel then on chart. Repeat steps 1 to 4 of the previous slide. Then, edit the groupby line to select fill(linear).

54 / 56

SLIDE 55

Creating some grafana dashboards

Next steps

Creating new panels: create new panels matching your needs — price per time and per project, for a given project…

Exporting your panel: if you wish to keep your panel, it can be exported as a JSON file to be re-imported later. Click on the share dashboard button next to the save icon. Then, go to the export tab and click on Save to file.

55 / 56

SLIDE 56

It is time for questions!

Or later: cloudkitty@objectif-libre.com. We’ll be happy to send you the latest release of these slides!

56 / 56