Note
Before proceeding with the manual steps below, verify that you have performed the steps described in Apply maintenance updates.
Pike, Queens
Fixed the issue where systemd did not restart the apache2 daemon after its unexpected exit. To apply the fix, Apache must be upgraded. The resolution applies automatically when you select the OS_UPGRADE or OS_DIST_UPGRADE check boxes when running the Deploy - upgrade control VMs Jenkins pipeline.
To verify that the fix has been applied correctly:
After the Apache2 packages update, verify that the apache2 service is running:
systemctl status apache2
Verify that the process ID is not changing:
pgrep apache2
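As an illustrative sketch only, one way to confirm that the process IDs stay stable over time is to snapshot them and compare after a delay:

pgrep apache2 > /tmp/apache2.pids
sleep 60
pgrep apache2 | diff /tmp/apache2.pids -    # empty output means the PIDs did not change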
Pike
Implemented the ability to set the ionice level for the ephemeral LVM volume shred operation in nova-compute to prevent excessive disk consumption. Setting the ionice level as described below makes sense only if the following pillar values are set (illustrated right after this list):

nova:compute:lvm:ephemeral is set to True
nova:compute:lvm:volume_clear is set to zero or shred
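For reference, a minimal pillar sketch with these prerequisite values, assuming your cluster model uses the same nova:compute:lvm layout as in the step below:

nova:
  compute:
    lvm:
      ephemeral: True
      volume_clear: shred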
To apply the issue resolution:
Log in to the Salt Master node.
In classes/cluster/<cluster_name>/openstack/compute.yml, set volume_clear_ionice_level as required:
nova:
  compute:
    lvm:
      volume_clear_ionice_level: <level>
Possible <level> values are as follows:

idle - to use the idle scheduling class. This option impacts system performance the least with a downside of increased time for a volume clearance.
0 to 7 - to use the best-effort scheduling class and set the priority level to the specified number.

Apply the changes:
salt -C 'I@nova:compute' state.sls nova.compute
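Optionally, you can verify that the new value is visible in the compute nodes' pillar data. This is an illustrative check, assuming the pillar path shown above:

salt -C 'I@nova:compute' pillar.get nova:compute:lvm:volume_clear_ionice_level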
Fixed the issue that caused a failure during the creation of a large Heat stack. The issue was caused by the 60-second HAProxy timeout. The default timeout value is now 2 minutes.
To apply the issue resolution, apply the haproxy state on the OpenStack controller nodes.
If you have changed the default timeout value on your deployment before the update, it will remain unchanged.
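For example, the state can be applied with a command similar to the one at the end of this procedure; the 'ctl*' target glob is an assumption, adjust it to your controller node naming:

salt 'ctl*' state.sls haproxy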
To tune the timeout parameter depending on the needs of an already deployed environment:
Log in to the Salt Master node.
In /srv/salt/reclass/nodes/_generated/ctl01.<cluster_name>.local.yml, set the timeout parameter as required. For example:
parameters:
  haproxy:
    proxy:
      listen:
        heat_api:
          timeout:
            client: '90s'
            server: '3m'
Apply the change:
salt -C 'I@haproxy:proxy:listen:heat_api' state.sls haproxy
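As an optional, illustrative check that the new timeouts are present in the rendered HAProxy configuration (assuming the default configuration path /etc/haproxy/haproxy.cfg):

salt -C 'I@haproxy:proxy:listen:heat_api' cmd.run 'grep -A 6 "listen heat_api" /etc/haproxy/haproxy.cfg'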