Note
Before proceeding with the manual steps below, verify that you have performed the steps described in Apply maintenance updates.
Pike, Queens
Fixed the issue that caused a MySQL server node failure after the node desynchronized itself from the Galera cluster.
To apply the issue resolution:
Log in to the Salt Master node.
Restart the MySQL service on every database server node, one by one.
For example:
salt 'dbs03*' cmd.run 'systemctl restart mysql'
Verify that every node loaded the updated Galera provider.
For example:
salt 'dbs*' mysql.status | grep -A1 wsrep_provider_version
Example of system response:
wsrep_provider_version:
3.20(r7e383f7)
--
wsrep_provider_version:
3.20(r7e383f7)
--
wsrep_provider_version:
3.20(r7e383f7)
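The consistency check above can also be scripted. The sketch below is a hypothetical example: the here-document stands in for saved output of the salt command, and the script simply counts the distinct provider versions reported.

```shell
# Hypothetical sketch: verify that every node reports the same Galera
# provider version. The sample text stands in for saved output of
# "salt 'dbs*' mysql.status | grep -A1 wsrep_provider_version".
sample="wsrep_provider_version:
3.20(r7e383f7)
--
wsrep_provider_version:
3.20(r7e383f7)
--
wsrep_provider_version:
3.20(r7e383f7)"
# Keep only the version lines, deduplicate, and count the distinct values.
distinct=$(printf '%s\n' "$sample" \
  | grep -v -e '^--$' -e 'wsrep_provider_version' \
  | sort -u | wc -l | tr -d ' ')
if [ "$distinct" -eq 1 ]; then
  echo "all nodes report the same provider version"
else
  echo "provider version mismatch across nodes"
fi
```

A mismatch in this check would indicate that one of the nodes did not pick up the updated Galera provider after the restart.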
Queens
Implemented the ability to set the ionice level for the ephemeral LVM volume shred operation in nova-compute to prevent excessive disk consumption. Setting the ionice level as described below makes sense if:
nova:compute:lvm:ephemeral is set to True
nova:compute:lvm:volume_clear is set to zero or shred
To apply the issue resolution:
Log in to the Salt Master node.
In classes/cluster/<cluster_name>/openstack/compute.yml, set the level for volume_clear_ionice_level as required:
nova:
compute:
lvm:
volume_clear_ionice_level: <level>
Possible <level> values are as follows:
idle - to use the idle scheduling class. This option impacts system performance the least, with the downside of an increased time for a volume clearance.
0 to 7 - to use the best-effort scheduling class and set the priority level to the specified number.
Apply the changes:
salt -C 'I@nova:compute' state.sls nova.compute
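The levels map to the standard ionice scheduling classes from util-linux. As a local illustration only (not part of the procedure), the same classes can be exercised with a harmless command:

```shell
# Illustration only: class 3 is "idle", class 2 with priorities 0-7 is
# "best-effort" - the same semantics nova-compute applies to the shred run.
ionice -c 3 true \
  && ionice -c 2 -n 7 true \
  && echo "ionice classes available"
```

This is why the idle class has the least impact on workloads: I/O from the shred operation only proceeds when no other process is issuing disk requests.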
Pike, Queens
Disabled the Telemetry notification queues in RabbitMQ for the OpenStack clusters with StackLight enabled and Telemetry disabled.
To apply the issue resolution:
Log in to the Salt Master node.
In classes/cluster/<cluster_name>/openstack/init.yml, remove the notifications variable from the openstack_notification_topics parameter, leaving only the ${_param:stacklight_notification_topic} variable:
openstack_notification_topics: "${_param:stacklight_notification_topic}"
Apply the changes:
salt "ctl*" state.sls keystone,glance,heat
salt -C "ctl* or cmp*" state.sls nova,neutron,cinder -b 20
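To verify that the Telemetry queues are gone, you can list the remaining notification queues in RabbitMQ. Queue naming is deployment-specific, so treat this as a sketch:

```shell
# Sketch: after the states are applied, only the StackLight notification
# queues should remain; leftover 'notifications.*' queues would indicate
# that a service is still publishing to the removed topic.
salt -C 'I@rabbitmq:server' cmd.run 'rabbitmqctl list_queues name | grep notifications'
```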
Pike, Queens
Added support for the Ceph back end snapshotting mechanism to the Nova VM live snapshotting feature on the OpenStack environments with Ceph back end used for Nova.
To apply the issue resolution:
Log in to the Salt Master node.
In classes/cluster/<cluster_name>/openstack/control.yml of your Reclass model, add the following parameter:
glance:
server:
show_multiple_locations: True
In classes/cluster/<cluster_name>/openstack/compute/init.yml, add the following parameter:
nova:
compute:
workaround:
disable_libvirt_livesnapshot: False
Apply the changes:
salt -C 'I@glance:server' state.sls glance.server
salt -C 'I@nova:compute' state.sls nova.compute
Log in to the cmn01 node.
Define the rbd permission for the pools where images and VMs are stored:
ceph-authtool /etc/ceph/ceph.client.nova.keyring -n client.nova \
--cap osd 'profile rbd pool=vms, profile rbd pool=images' \
--cap mon 'allow r, allow command \"osd blacklist\"'
Substitute the vms and images values with the corresponding pool names for Nova and Glance.
Apply the changes for Ceph:
ceph auth import -i /etc/ceph/ceph.client.nova.keyring
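To confirm that the imported capabilities took effect, the client.nova entry can be printed back. The pool names below repeat the example values from the step above:

```shell
# Print the registered caps for client.nova; the 'caps osd' line should list
# 'profile rbd' for both the vms and images pools, and the 'caps mon' line
# should include the 'osd blacklist' command permission.
ceph auth get client.nova
```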
Pike, Queens
Fixed the issue with reaching the maximum limit of the fs.inotify.max_user_instances parameter value that prevented an OpenStack compute node from being configured as a DHCP node. The fix increases the default value to 4096, with the possibility to modify it as required.
To apply the issue resolution:
Log in to the Salt Master node.
In classes/system/neutron/compute/cluster.yml of your Reclass model, verify that the following snippet exists:
linux:
system:
kernel:
sysctl:
fs.inotify.max_user_instances: 4096
Apply the changes to the OpenStack compute nodes hosting DHCP:
salt 'cmp<node_number*>' state.apply linux.system.kernel
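On any single node, the effective limit can be read back directly from procfs. On the compute nodes hosting DHCP it should report 4096 after the state is applied; on other hosts the value depends on local configuration, so no specific number is assumed here:

```shell
# Read the live inotify instance limit from procfs (Linux only); this is the
# same value the linux.system.kernel state manages via sysctl.
cat /proc/sys/fs/inotify/max_user_instances
```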
Pike
Fixed the issue with neutron-server failing to reconnect to MySQL after a crash of a MySQL server.
To apply the issue resolution:
Log in to the Salt Master node.
Apply the neutron state on the OpenStack controller nodes:
salt -C 'I@neutron:server' state.sls neutron
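After the state run, a quick way to confirm the service came back cleanly is to check its status on the controllers. The unit name below is the stock one and may differ in your deployment:

```shell
# Sketch: confirm neutron-server restarted cleanly on every controller node.
salt -C 'I@neutron:server' cmd.run 'systemctl is-active neutron-server'
```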