Note
Before proceeding with the manual steps below, verify that you have performed the steps described in Apply maintenance updates.
Pike, Queens
Fixed the issue with the Cinder Grafana dashboard displaying no data for the OpenStack Pike or Queens environments. The issue affected the OpenStack environments deployed with TLS on the internal endpoints.
To display metric graphs for Cinder in the Grafana web UI:
Open your Git project repository with the Reclass model on the cluster level.
In classes/cluster/<cluster_name>/openstack/control.yml, remove the osapi and host parameters in the cinder:controller block. For example:
cinder:
  controller:
    enabled: true
    osapi:
      host: 127.0.0.1
As a result, the cinder:controller block should look as follows:
cinder:
  controller:
    enabled: true
Log in to the Salt Master node.
Refresh the pillars:
salt "*" saltutil.refresh_pillar
salt "*" state.sls salt.minion.grains
salt "*" mine.update
Apply the telegraf and cinder states on all OpenStack controller nodes:
salt -C "I@cinder:controller" state.sls telegraf,cinder
In classes/cluster/<cluster_name>/openstack/control.yml, add the following configuration after the line apache_nova_placement_api_address: ${_param:cluster_local_address}:
apache_cinder_api_address: ${_param:cluster_local_address}
Refresh the pillars and apply the apache state:
salt -C "I@cinder:controller" saltutil.pillar_refresh
salt -C "I@cinder:controller" state.sls apache
Queens
Fixed the issue that caused the gnocchi.client.resources.v1 state failure on the OpenStack Queens environments with SSL and Barbican. The resolution includes fixes for the Barbican certificate alternative names and for the certificate alternative names of the FQDN endpoints.
To resolve the gnocchi.client.resources.v1 state failure:
Log in to the Salt Master node.
Apply the Salt formula patch 36685 to your Reclass model.
Refresh the pillars:
salt "*" saltutil.refresh_pillar
salt "*" state.sls salt.minion.grains
salt "*" mine.update
Apply salt.minion.cert and restart apache2:
salt -C 'I@barbican:server' state.apply salt:minion:cert
salt -C 'I@barbican:server' cmd.run 'systemctl restart apache2' -b 1
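To verify that the regenerated certificate contains the expected alternative names, you can inspect it with openssl. The example below assumes the default Barbican API port 9311; replace 127.0.0.1 with the address of your internal Barbican endpoint:

salt -C 'I@barbican:server' cmd.run "echo | openssl s_client -connect 127.0.0.1:9311 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'"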
Apply the Salt formula patch 36686 to your Cookiecutter templates.
Refresh the pillars:
salt "*" saltutil.refresh_pillar
salt "*" state.sls salt.minion.grains
salt "*" mine.update
Apply salt.minion.cert and restart apache2:
salt -C 'I@gnocchi:server' state.apply salt:minion:cert
salt -C 'I@gnocchi:server' cmd.run 'systemctl restart apache2' -b 1
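To confirm that the failure no longer occurs, you can re-apply the previously failing state. The target below assumes that the Gnocchi client role is assigned to the nodes matching I@gnocchi:client in your model:

salt -C 'I@gnocchi:client' state.sls gnocchi.client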
Pike, Queens
Fixed the issue with insufficient OVS timeouts causing instance traffic losses. Now, if you receive OVS timeout errors in the neutron-openvswitch-agent logs, such as ofctl request <...> timed out: Timeout: 10 seconds or Commands [<ovsdbap...>] exceeded timeout 10 seconds, you can configure the OVS timeout parameters as required depending on the number of OVS ports on the gtw nodes in your cloud. For example, if you have more than 1000 ports per gtw node, Mirantis recommends changing the OVS timeouts as described below. The same procedure can also be applied to the compute nodes if required.
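To check whether a node is affected, you can, for example, search the agent logs for the timeout errors. The log path below assumes the default packaged location:

grep -E "ofctl request .* timed out|exceeded timeout" /var/log/neutron/neutron-openvswitch-agent.log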
To increase OVS timeouts on the gateway nodes:
Log in to the Salt Master node.
Open /srv/salt/reclass/classes/cluster/<cluster_name>/openstack/gateway.yml for editing.
Add the following snippet to the parameters section of the file with the required values:
neutron:
  gateway:
    of_connect_timeout: 60
    of_request_timeout: 30
    ovs_vsctl_timeout: 30  # Pike
    ovsdb_timeout: 30      # Queens and beyond
Apply the following state:
salt -C 'I@neutron:gateway' state.sls neutron
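To verify that the new values reached the agent configuration, you can, for example, grep the OVS agent configuration file; the path below assumes the standard ML2/OVS layout:

salt -C 'I@neutron:gateway' cmd.run "grep -E 'of_connect_timeout|of_request_timeout|ovsdb_timeout|ovs_vsctl_timeout' /etc/neutron/plugins/ml2/openvswitch_agent.ini"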
Verify whether the Open vSwitch logs contain the Datapath Invalid and no response to inactivity probe errors:
In the neutron-openvswitch-agent logs, for example:
ERROR ... ofctl request <...> error Datapath Invalid 64183592930369: \
InvalidDatapath: Datapath Invalid 64183592930369
In openvswitch/ovs-vswitchd.log, for example:
ERR|br-tun<->tcp:127.0.0.1:6633: no response to inactivity probe \
after 5 seconds, disconnecting
If the logs contain such errors, increase inactivity probes for the OVS bridge controllers:
Log in to any gtw node.
Run the following commands:
ovs-vsctl set controller br-int inactivity_probe=60000
ovs-vsctl set controller br-tun inactivity_probe=60000
ovs-vsctl set controller br-floating inactivity_probe=60000
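You can then confirm the new value for each bridge controller, for example:

ovs-vsctl get controller br-int inactivity_probe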
To increase OVS timeouts on the compute nodes:
Log in to the Salt Master node.
Open /srv/salt/reclass/classes/cluster/<cluster_name>/openstack/compute.yml for editing.
Add the following snippet to the parameters section of the file with the required values:
neutron:
  compute:
    of_connect_timeout: 60
    of_request_timeout: 30
    ovs_vsctl_timeout: 30  # Pike
    ovsdb_timeout: 30      # Queens and beyond
Apply the following state:
salt -C 'I@neutron:compute' state.sls neutron
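On clouds with many compute nodes, you may prefer to apply the state in batches so that the OVS agents are not all restarted at the same time, for example, 10 nodes at a time:

salt -C 'I@neutron:compute' -b 10 state.sls neutron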
Verify whether the Open vSwitch logs contain the Datapath Invalid and no response to inactivity probe errors:
In the neutron-openvswitch-agent logs, for example:
ERROR ... ofctl request <...> error Datapath Invalid 64183592930369: \
InvalidDatapath: Datapath Invalid 64183592930369
In openvswitch/ovs-vswitchd.log, for example:
ERR|br-tun<->tcp:127.0.0.1:6633: no response to inactivity probe \
after 5 seconds, disconnecting
If the logs contain such errors, increase inactivity probes for the OVS bridge controllers:
Log in to the target cmp node.
Run the following commands:
ovs-vsctl set controller br-int inactivity_probe=60000
ovs-vsctl set controller br-tun inactivity_probe=60000
ovs-vsctl set controller br-floating inactivity_probe=60000