The pre-upgrade stage includes the activities that do not affect the workability of the currently running OpenStack version, as well as the creation of backups.
To prepare your OpenStack deployment for the upgrade:
Perform the steps described in Configure load balancing for Horizon.
On one of the controller nodes, perform the online database migrations for the following services:
Note
The database migrations can be time-consuming and create a high load on CPU and RAM. We recommend that you perform the migrations in batches; see the batching sketch after the commands below.
Nova:
nova-manage db online_data_migrations
Cinder:
cinder-manage db online_data_migrations
Ironic:
ironic-dbsync online_data_migrations
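To run the migrations in batches, limit each invocation and repeat it until no migrations remain. The following is a minimal sketch for Nova that assumes the --max-count option of nova-manage; the batch size of 50 is an arbitrary example, and the other services accept similar but differently spelled batch-limiting options.
# Sketch: run the Nova migrations in batches of 50 and repeat until
# nova-manage exits 0, which indicates that no migrations remain.
# Verify the exit-code semantics for your release first: a persistently
# non-zero exit code may also indicate a failed migration.
until nova-manage db online_data_migrations --max-count 50; do
    sleep 5
done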
Prepare the target nodes for the upgrade:
Log in to the Salt Master node.
Verify that the OpenStack cloud configuration file is present on the controller nodes:
salt -C 'I@keystone:client:os_client_config' state.apply keystone.client.os_client_config
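As a quick spot check, you can display the rendered configuration on the matching nodes. The /etc/openstack/clouds.yml path below is an assumption and may differ in your deployment:
salt -C 'I@keystone:client:os_client_config' cmd.run 'cat /etc/openstack/clouds.yml'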
Get the list of all upgradable OpenStack components:
salt-call config.get orchestration:upgrade:applications --out=json
Example of system response:
{
    "<model_of_salt_master_name>": {
        "nova": {
            "priority": 1100
        },
        "heat": {
            "priority": 1250
        },
        "keystone": {
            "priority": 1000
        },
        "horizon": {
            "priority": 1800
        },
        "cinder": {
            "priority": 1200
        },
        "glance": {
            "priority": 1050
        },
        "neutron": {
            "priority": 1150
        },
        "designate": {
            "priority": 1300
        }
    }
}
Rank the components from the output by priority. For example:
keystone
glance
nova
neutron
cinder
heat
designate
horizon
Get the list of all target nodes:
salt-key | grep $cluster_domain | \
grep -v $salt_master_hostname | tr '\n' ' '
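Example of system response (node names are illustrative):
ctl01.example.local ctl02.example.local ctl03.example.local cmp001.example.local cmp002.example.local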
The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.
The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master.
For each target node, get the list of the installed applications:
salt <node_name> pillar.items __reclass__:applications --out=json
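Example of system response (the applications list is illustrative and differs per node):
{
    "ctl01.example.local": {
        "__reclass__:applications": [
            "keystone",
            "glance",
            "nova",
            "neutron",
            "cinder",
            "heat",
            "horizon"
        ]
    }
}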
Verify that the outdated version of the nova-osapi_compute service is not running:
salt -C 'I@galera:master' mysql.query nova 'select services.id, services.host, \
services.binary, services.version from services where services.version < 15'
If the system output contains the nova-osapi_compute service, delete it by running the following commands from any OpenStack controller node:
source keystonercv3
openstack compute service delete <nova-osapi_compute_service_id>
Match the lists of upgradable OpenStack components with the lists of installed applications for each target node.
Apply the following states to each target node for each installed application in strict order of priority:
Warning
During the upgrade, the applications running on the target nodes use the KeystoneRC metadata. To guarantee that the KeystoneRC metadata is exported to mine, verify that you apply the keystone.upgrade.pre formula to the keystone:client:enabled node:
salt -C 'I@keystone:client:enabled' state.sls keystone.upgrade.pre
salt <node_name> state.apply <component_name>.upgrade.pre
salt <node_name> state.apply <component_name>.upgrade.verify
For example, for Nova installed on the cmp01 compute node, run:
salt cmp01 state.apply nova.upgrade.pre
salt cmp01 state.apply nova.upgrade.verify
On clouds of medium and large sizes, you may want to automate step 3 to prepare the target nodes for the upgrade. Use the following script as an example of possible automation.
#!/bin/bash
# List of formulas that implement the upgrade API, sorted by priority
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
    jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | \
    sort_by(.priority) | map(.key | [(.)]) | add' | \
    sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
    # List of applications on the given node
    node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
        jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
    for component in $all_formulas; do
        if [[ " ${node_applications[*]} " == *"$component"* ]]; then
            salt $node state.apply $component.upgrade.pre
            salt $node state.apply $component.upgrade.verify
        fi
    done
done
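The script invokes both salt-call and salt, so run it from the Salt Master node.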
Add the testing workloads to each compute host, then monitor and verify the kvm, ctl, cmp, and other nodes.
Back up the OpenStack databases as described in Back up and restore a MySQL database.
Adjust the cluster model:
Include the upgrade pipeline job to DriveTrain:
Add the following lines to classes/cluster/<cluster_name>/cicd/control/leader.yml:
Caution
If your MCP OpenStack deployment includes the OpenContrail component, do not specify the system.jenkins.client.job.deploy.update.upgrade_ovs_gateway class.
classes:
- system.jenkins.client.job.deploy.update.upgrade
- system.jenkins.client.job.deploy.update.upgrade_ovs_gateway
- system.jenkins.client.job.deploy.update.upgrade_compute
Apply the jenkins.client state on the Jenkins nodes:
salt -C 'I@jenkins:client' state.sls jenkins.client
Set the parameters in classes/cluster/<cluster_name>/infra/init.yml as follows:
parameters:
  _param:
    openstack_version: pike
    openstack_old_version: ocata
    openstack_upgrade_enabled: true
(Optional) The upgrade pillars of all supported OpenStack applications are already included on the Reclass system level. In case of a non-standard setup, verify the list of the OpenStack applications on each node and add the upgrade pillars for the OpenStack applications that do not contain them. For example:
<app>:
  upgrade:
    enabled: ${_param:openstack_upgrade_enabled}
    old_release: ${_param:openstack_old_version}
    new_release: ${_param:openstack_version}
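An illustrative substitution of the template for Nova (the actual placement in your cluster model may differ):
nova:
  upgrade:
    enabled: ${_param:openstack_upgrade_enabled}
    old_release: ${_param:openstack_old_version}
    new_release: ${_param:openstack_version}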
Note
On clouds of medium and large sizes, you may want to automate this step. To obtain the list of the OpenStack applications running on a node, use the following script.
#!/bin/bash
# List of formulas that implement the upgrade API, sorted by priority
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
    jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
    sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
    # List of applications on the given node
    node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
        jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
    node_openstack_app=" "
    for component in $all_formulas; do
        if [[ " ${node_applications[*]} " == *"$component"* ]]; then
            node_openstack_app="$node_openstack_app $component"
        fi
    done
    echo "$node : $node_openstack_app"
done
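Example of the script output (node and application names are illustrative):
ctl01.example.local :  keystone glance nova neutron cinder heat horizon
cmp001.example.local :  nova neutron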
Refresh pillars:
salt '*' saltutil.refresh_pillar
Prepare the target nodes for the upgrade:
Get the list of all upgradable OpenStack components:
salt-call config.get orchestration:upgrade:applications --out=json
Rank the components from the output by priority.
Get the list of all target nodes:
salt-key | grep $cluster_domain | \
grep -v $salt_master_hostname | tr '\n' ' '
The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.
The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master.
For each target node, get the list of installed applications:
salt <node_name> pillar.items __reclass__:applications --out=json
Match the lists of upgradable OpenStack components with the lists of installed applications for each target node.
Apply the following states to each target node for each installed application in strict order of priority:
salt <node_name> state.apply <component_name>.upgrade.pre
Note
On clouds of medium and large sizes, you may want to automate this step. Use the following script as an example of possible automation.
#!/bin/bash
# List of formulas that implement the upgrade API
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
    jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
    sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
    # List of applications on the given node
    node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
        jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
    for component in $all_formulas; do
        if [[ " ${node_applications[*]} " == *"$component"* ]]; then
            salt $node state.apply $component.upgrade.pre
        fi
    done
done
Apply the linux.system.repo state on the target nodes.
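For example:
salt <node_name> state.apply linux.system.repo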
Proceed to Upgrade the OpenStack control plane.