Once you have performed the steps described in Prepare the cluster deployment model, deploy Tenant Telemetry on an existing MCP cluster as described below.
To deploy Tenant Telemetry on an existing MCP cluster:
Log in to the Salt Master node.
Depending on the aggregation metrics storage type, select one of the following options:
For Ceph, deploy the newly created users and pools:
salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" saltutil.refresh_pillar
salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" state.sls ceph.mon
salt -C "I@ceph:mon:keyring:mon or I@ceph:common:keyring:admin" mine.update
salt -C "I@ceph:mon" state.sls 'ceph.mon'
salt -C "I@ceph:setup" state.sls ceph.setup
salt -C "I@ceph:osd or I@ceph:osd or I@ceph:radosgw" state.sls ceph.setup.keyring
For the file backend with GlusterFS, deploy the Gnocchi GlusterFS configuration:
salt -C "I@glusterfs:server" saltutil.refresh_pillar
salt -C "I@glusterfs:server" state.sls glusterfs
Run the following commands to generate the node definitions under /srv/salt/reclass/nodes/_generated:
salt-call saltutil.refresh_pillar
salt-call state.sls reclass.storage
Verify that the following files were created:
ls -1 /srv/salt/reclass/nodes/_generated | grep mdb
mdb01.domain.name
mdb02.domain.name
mdb03.domain.name
Create the mdb VMs:
salt -C 'I@salt:control' saltutil.refresh_pillar
salt -C 'I@salt:control' state.sls salt.control
Verify that the mdb nodes were successfully registered on the Salt Master node:
salt-key -L | grep mdb
mdb01.domain.name
mdb02.domain.name
mdb03.domain.name
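As an additional sanity check, not part of the original procedure, verify that the new minions respond before applying further states:
salt mdb\* test.ping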
Create endpoints:
Create additional endpoints for Panko and Gnocchi and update the existing Ceilometer and Aodh endpoints, if any:
salt -C 'I@keystone:client' saltutil.refresh_pillar
salt -C 'I@keystone:client' state.sls keystone.client
Verify the created endpoints:
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service ceilometer'
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service aodh'
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service panko'
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack endpoint list --service gnocchi'
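If any of the endpoint lists is empty, you can additionally confirm that the corresponding services are registered in Keystone. This is an optional check:
salt -C 'I@keystone:client' cmd.run '. /root/keystonercv3 ; openstack service list | grep -Ei "gnocchi|panko|ceilometer|aodh"'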
Optional. Install the Panko client if you have defined it in the cluster model:
salt -C 'I@keystone:server' saltutil.refresh_pillar
salt -C 'I@keystone:server' state.sls linux.system.package
Create databases:
Create databases for Panko and Gnocchi:
salt -C 'I@galera:master or I@galera:slave' saltutil.refresh_pillar
salt -C 'I@galera:master' state.sls galera
salt -C 'I@galera:slave' state.sls galera
Verify that the databases were successfully created:
salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "show databases;"'
salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "select User from mysql.user;"'
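In the output of the first command, the panko and gnocchi databases should be listed. To narrow the output, you can, for example, filter it as follows:
salt -C 'I@galera:master' cmd.run 'mysql --defaults-extra-file=/etc/mysql/debian.cnf -e "show databases;" | grep -E "panko|gnocchi"'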
Update the NGINX configuration on the prx nodes:
salt prx\* saltutil.refresh_pillar
salt prx\* state.sls nginx
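Optionally, verify that the updated NGINX configuration is syntactically valid on the prx nodes. This extra check is not required by the procedure:
salt prx\* cmd.run 'nginx -t'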
Disable the Ceilometer and Aodh services deployed on the ctl nodes:
for service in aodh-evaluator aodh-listener aodh-notifier \
ceilometer-agent-central ceilometer-agent-notification \
ceilometer-collector
do
salt ctl\* service.stop $service
salt ctl\* service.disable $service
done
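To confirm that the services were actually stopped, you can query their status, for example, for one of the services:
salt ctl\* service.status aodh-evaluator
The command returns False for each ctl node when the service is stopped.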
Provision the mdb nodes:
Apply the basic states for the mdb nodes:
salt mdb\* saltutil.refresh_pillar
salt mdb\* saltutil.sync_all
salt mdb\* state.sls linux.system
salt-call state.sls salt.minion.ca
salt mdb\* state.sls linux,ntp,openssh,salt.minion
salt mdb\* system.reboot --async
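Because the reboot is asynchronous, wait until the mdb nodes are reachable again before proceeding. For example, you can poll them as follows (an optional check):
salt mdb\* status.uptime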
Install basic services on the mdb nodes:
salt mdb01\* state.sls keepalived
salt mdb\* state.sls keepalived
salt mdb\* state.sls haproxy
salt mdb\* state.sls memcached
salt mdb\* state.sls apache
Install packages depending on the aggregation metrics storage:
For Ceph:
salt mdb\* state.sls ceph.common,ceph.setup.keyring
For the file backend with GlusterFS:
salt mdb\* state.sls glusterfs
Install the Redis, Gnocchi, Panko, Ceilometer, and Aodh services on the mdb nodes:
salt -C 'I@redis:cluster:role:master' state.sls redis
salt -C 'I@redis:server' state.sls redis
salt -C 'I@gnocchi:server:role:primary' state.sls gnocchi
salt -C 'I@gnocchi:server' state.sls gnocchi
salt -C 'I@gnocchi:client' state.sls gnocchi.client -b 1
salt -C 'I@panko:server:role:primary' state.sls panko
salt -C 'I@panko:server' state.sls panko
salt -C 'I@ceilometer:server:role:primary' state.sls ceilometer
salt -C 'I@ceilometer:server' state.sls ceilometer
salt -C 'I@aodh:server:role:primary' state.sls aodh
salt -C 'I@aodh:server' state.sls aodh
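Once the states are applied, you can optionally confirm that the telemetry services are running on the mdb nodes. The exact unit names vary between releases, so the filter below is illustrative:
salt mdb\* cmd.run 'systemctl list-units --type=service --state=running | grep -E "gnocchi|panko|ceilometer|aodh|redis"'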
Update the cluster nodes:
Verify that the mdb nodes were added to /etc/hosts on every node:
salt '*' saltutil.refresh_pillar
salt '*' state.sls linux.network.host
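You can additionally grep the hosts file to confirm that the records are present. This is an optional check:
salt '*' cmd.run 'grep mdb /etc/hosts'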
For Ceph, run:
salt -C 'I@ceph:common and not mon*' state.sls ceph.setup.keyring
Verify that the Ceilometer agent is deployed and up to date:
salt -C 'I@ceilometer:agent' state.sls salt.minion
salt -C 'I@ceilometer:agent' state.sls ceilometer
Apply the configuration for Nova messaging notifications on the OpenStack controller nodes:
salt -C 'I@nova:controller' state.sls nova.controller -b 1
Update the StackLight LMA configuration:
salt mdb\* state.sls telegraf
salt mdb\* state.sls fluentd
salt '*' state.sls salt.minion.grains
salt '*' saltutil.refresh_modules
salt '*' mine.update
salt -C 'I@docker:swarm and I@prometheus:server' state.sls prometheus
salt -C 'I@sphinx:server' state.sls sphinx
Verify Tenant Telemetry:
Note
Metrics will be collected only for newly created resources. Therefore, launch an instance or create a volume before running the commands below.
Verify that metrics are available:
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack metric list --limit 50'
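To inspect the measures of a particular metric from the list, you can, for example, run the following command, substituting <metric_id> with an ID from the output above:
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack metric measures show <metric_id>'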
If you have installed the Panko client on the ctl nodes, verify that events are available:
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack event list --limit 20'
Verify that the Aodh endpoint is available:
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack --debug alarm list'
The output will not contain any alarms because no alarms have been created yet.
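If you want to see a non-empty alarm list, you can create a simple test alarm first. The alarm name below is illustrative, and you can delete the alarm after the check:
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack alarm create --type event --name test-alarm'
salt ctl01\* cmd.run '. /root/keystonercv3 ; openstack alarm list'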
For Ceph, verify that metrics are saved to the Ceph pool (telemetry_pool for the cloud):
salt cmn01\* cmd.run 'rados df'
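In the rados df output, the pool used by Tenant Telemetry should show a nonzero and growing number of objects and used space as new measures arrive. Assuming the pool is named telemetry_pool, you can filter the output as follows:
salt cmn01\* cmd.run 'rados df | grep telemetry_pool'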