Deploy Tenant Telemetry

Once you have performed the steps described in Prepare the cluster deployment model, deploy Tenant Telemetry on an existing MCP cluster as described below.

To deploy Tenant Telemetry on an existing MCP cluster:

  1. Log in to the Salt Master node.

  2. Depending on the aggregation metrics storage type, select one of the following options:

    • For Ceph, deploy the newly created users and pools:

      salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" saltutil.refresh_pillar
      salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" state.sls ceph.mon
      salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" mine.update
      salt -C "I@ceph:mon" state.sls 'ceph.mon'
      salt -C "I@ceph:setup" state.sls ceph.setup
      salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" state.sls ceph.setup.keyring
      
    • For the file back end with GlusterFS, deploy the Gnocchi GlusterFS configuration:

      salt -C "I@glusterfs:server" saltutil.refresh_pillar
      salt -C "I@glusterfs:server" state.sls glusterfs
      
  3. Run the following commands to generate definitions under /srv/salt/reclass/nodes/_generated:

    salt-call saltutil.refresh_pillar
    salt-call state.sls reclass.storage
    
  4. Verify that the following files were created:

    ls -1 /srv/salt/reclass/nodes/_generated | grep mdb
    mdb01.domain.name
    mdb02.domain.name
    mdb03.domain.name
    
  5. Create the mdb VMs:

    salt -C 'I@salt:control' saltutil.refresh_pillar
    salt -C 'I@salt:control' state.sls salt.control
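
    If needed, you can also check the VMs on the hypervisor side before the minions register; this assumes the kvm nodes run the VMs through libvirt:

      salt -C 'I@salt:control' cmd.run 'virsh list --all | grep mdb'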
    
  6. Verify that the mdb nodes were successfully registered on the Salt Master node:

    salt-key -L | grep mdb
    mdb01.domain.name
    mdb02.domain.name
    mdb03.domain.name
    
  7. Create endpoints:

    1. Create additional endpoints for Panko and Gnocchi and update the existing Ceilometer and Aodh endpoints, if any:

      salt -C 'I@keystone:client' saltutil.refresh_pillar
      salt -C 'I@keystone:client' state.sls keystone.client
      
    2. Verify the created endpoints:

      salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service ceilometer'
      salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service aodh'
      salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service panko'
      salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service gnocchi'
      
    3. Optional. Install the Panko client if you have defined it in the cluster model:

      salt -C 'I@keystone:server' saltutil.refresh_pillar
      salt -C 'I@keystone:server' state.sls linux.system.package
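
      To confirm that the client package landed, you can check for it on the target nodes; grepping dpkg output is a generic check that does not assume an exact package name:

        salt -C 'I@keystone:server' cmd.run 'dpkg -l | grep panko'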
      
  8. Create databases:

    1. Create databases for Panko and Gnocchi:

      salt -C 'I@galera:master or I@galera:slave' saltutil.refresh_pillar
      salt -C 'I@galera:master' state.sls galera
      salt -C 'I@galera:slave' state.sls galera
      
    2. Verify that the databases were successfully created:

      salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "show databases;"'
      salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "select User from mysql.user;"'
      
  9. Update the NGINX configuration on the prx nodes:

    salt prx\* saltutil.refresh_pillar
    salt prx\* state.sls nginx
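
    To make sure the rendered configuration is valid, you can run an NGINX syntax check on the prx nodes:

      salt prx\* cmd.run 'nginx -t'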
    
  10. Disable the Ceilometer and Aodh services deployed on the ctl nodes:

    for service in aodh-evaluator aodh-listener aodh-notifier \
      ceilometer-agent-central ceilometer-agent-notification \
      ceilometer-collector
    do
      salt ctl\* service.stop $service
      salt ctl\* service.disable $service
    done
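
    You can verify that the services are indeed stopped; Salt's service.status returns False for a stopped service:

      salt ctl\* service.status ceilometer-agent-central
      salt ctl\* service.status aodh-evaluator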
    
  11. Provision the mdb nodes:

    1. Apply the basic states for the mdb nodes:

      salt mdb\* saltutil.refresh_pillar
      salt mdb\* saltutil.sync_all
      salt mdb\* state.sls linux.system
      salt-call state.sls salt.minion.ca
      salt mdb\* state.sls linux,ntp,openssh,salt.minion
      salt mdb\* system.reboot --async
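
      Since the reboot is asynchronous, wait until the nodes come back and respond before proceeding:

        salt mdb\* test.ping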
      
    2. Install basic services on the mdb nodes:

      salt mdb01\* state.sls keepalived
      salt mdb\* state.sls keepalived
      salt mdb\* state.sls haproxy
      salt mdb\* state.sls memcached
      salt mdb\* state.sls apache
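
      A quick way to confirm that the basic services are running:

        salt mdb\* service.status keepalived
        salt mdb\* service.status haproxy
        salt mdb\* service.status memcached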
      
    3. Install packages depending on the aggregation metrics storage:

      • For Ceph:

        salt mdb\* state.sls ceph.common,ceph.setup.keyring
        
      • For the file back end with GlusterFS:

        salt mdb\* state.sls glusterfs
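
      In either case, you can sanity-check connectivity to the storage from the mdb nodes. These checks are a sketch; the Ceph keyring permissions and the GlusterFS mount point depend on your model:

        salt mdb\* cmd.run 'ceph -s'                # Ceph: cluster reachable with the deployed keyring
        salt mdb\* cmd.run 'mount | grep gluster'   # GlusterFS: volume mounted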
        
    4. Install the Redis, Gnocchi, Panko, Ceilometer, and Aodh services on mdb nodes:

      salt -C 'I@redis:cluster:role:master' state.sls redis
      salt -C 'I@redis:server' state.sls redis
      salt -C 'I@gnocchi:server:role:primary' state.sls gnocchi
      salt -C 'I@gnocchi:server' state.sls gnocchi
      salt -C 'I@gnocchi:client' state.sls gnocchi.client -b 1
      salt -C 'I@panko:server:role:primary' state.sls panko
      salt -C 'I@panko:server' state.sls panko
      salt -C 'I@ceilometer:server:role:primary' state.sls ceilometer
      salt -C 'I@ceilometer:server' state.sls ceilometer
      salt -C 'I@aodh:server:role:primary' state.sls aodh
      salt -C 'I@aodh:server' state.sls aodh
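
      To confirm that the telemetry services started, you can query a few of them; the service names below are the standard upstream ones and may differ in your packaging:

        salt -C 'I@redis:server' cmd.run 'redis-cli ping'
        salt mdb\* service.status gnocchi-metricd
        salt mdb\* service.status aodh-evaluator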
      
    5. Update the cluster nodes:

      1. Verify that the mdb nodes were added to /etc/hosts on every node:

        salt '*' saltutil.refresh_pillar
        salt '*' state.sls linux.network.host
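
        You can then confirm that the records are present, for example:

          salt '*' cmd.run 'grep mdb /etc/hosts'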
        
      2. For Ceph, run:

        salt -C 'I@ceph:common and not mon*' state.sls ceph.setup.keyring
        
    6. Verify that the Ceilometer agent is deployed and up to date:

      salt -C 'I@ceilometer:agent' state.sls salt.minion
      salt -C 'I@ceilometer:agent' state.sls ceilometer
      
    7. Apply the configuration for Nova messaging notifications on the OpenStack controller nodes:

      salt -C 'I@nova:controller' state.sls nova.controller -b 1
      
    8. Update the StackLight LMA configuration:

      salt mdb\* state.sls telegraf
      salt mdb\* state.sls fluentd
      salt '*' state.sls salt.minion.grains
      salt '*' saltutil.refresh_modules
      salt '*' mine.update
      salt -C 'I@docker:swarm and I@prometheus:server' state.sls prometheus
      salt -C 'I@sphinx:server' state.sls sphinx
      
  12. Verify Tenant Telemetry:

    Note

    Metrics will be collected for the newly created resources. Therefore, launch an instance or create a volume before executing the commands below.

    1. Verify that metrics are available:

      salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack metric list --limit 50'
      
    2. If you have installed the Panko client on the ctl nodes, verify that events are available:

      salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack event list --limit 20'
      
    3. Verify that the Aodh endpoint is available:

      salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack --debug alarm list'
      

      The output will not contain any alarms because none have been created yet.
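
      If you want a non-empty check, you can create and then remove a throwaway event alarm; this is a minimal sketch using the aodh CLI, and the exact syntax may vary with the client version:

        salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack alarm create --name test-alarm --type event'
        salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack alarm list'
        salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack alarm delete test-alarm'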

    4. For Ceph, verify that metrics are saved to the Ceph pool (telemetry_pool for the cloud):

      salt cmn01\* cmd.run 'rados df'
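
      To narrow the output down to the telemetry pool, assuming the pool name defined in your model contains "telemetry":

        salt cmn01\* cmd.run 'rados df | grep telemetry'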