The pre-update stage includes the activities that do not affect the workability of the currently running OpenStack version, as well as the creation of backups.
To prepare your OpenStack deployment for the update:
(Optional) On one of the controller nodes, perform the online database migrations for the following services:
Note
The database migrations can be time-consuming and create a high load on the CPU and RAM. We recommend that you perform the migrations in batches.
Nova:
nova-manage db online_data_migrations
Cinder:
cinder-manage db online_data_migrations
Ironic:
ironic-dbsync online_data_migrations
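The note above recommends running the migrations in batches. The sketch below shows one possible batching loop; it relies on the --max-count option and the convention that nova-manage and cinder-manage exit with status 1 while rows remain and 0 once all migrations are complete. The run_batched helper name is illustrative, not part of any of these tools.

```shell
#!/bin/bash
# run_batched CMD...: repeat CMD until it reports completion.
# Assumption: the command exits 0 when no migrations remain,
# 1 when a batch was processed and more rows remain, and any
# other status on a hard error.
run_batched() {
  while true; do
    "$@"
    rc=$?
    if [ "$rc" -eq 0 ]; then
      return 0          # all migrations complete
    elif [ "$rc" -ne 1 ]; then
      return "$rc"      # hard error, stop retrying
    fi                  # rc == 1: more batches remain, loop again
  done
}

# Example (on a controller node):
#   run_batched nova-manage db online_data_migrations --max-count 50
```

Running in batches of 50 keeps each transaction short, which limits the load on the database between batches.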
Prepare the target nodes for the update:
Log in to the Salt Master node.
Get the list of all updatable OpenStack components:
salt-call config.get orchestration:upgrade:applications --out=json
Example of system response:
{
"<model_of_salt_master_name>": {
"nova": {
"priority": 1100
},
"heat": {
"priority": 1250
},
"keystone": {
"priority": 1000
},
"horizon": {
"priority": 1800
},
"cinder": {
"priority": 1200
},
"glance": {
"priority": 1050
},
"neutron": {
"priority": 1150
},
"designate": {
"priority": 1300
}
}
}
Rank the components from the output by priority. For example:
keystone
glance
nova
neutron
cinder
heat
designate
horizon
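The ranking above can also be produced directly from the JSON output with jq (assuming jq is installed, as the automation scripts below already do). The sketch applies the filter to a saved copy of the example response; the minion ID cfg01.example.local is a hypothetical placeholder.

```shell
#!/bin/bash
# Sort the updatable components by their priority value and print
# the component names, lowest priority first.
cat > /tmp/upgrade_apps.json <<'EOF'
{
  "cfg01.example.local": {
    "nova": {"priority": 1100},
    "heat": {"priority": 1250},
    "keystone": {"priority": 1000},
    "horizon": {"priority": 1800},
    "cinder": {"priority": 1200},
    "glance": {"priority": 1050},
    "neutron": {"priority": 1150},
    "designate": {"priority": 1300}
  }
}
EOF
jq -r '.[] | to_entries | sort_by(.value.priority) | .[].key' /tmp/upgrade_apps.json
```

On a live deployment, pipe the output of salt-call config.get orchestration:upgrade:applications --out=json into the same jq filter instead of the saved file.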
Get the list of all target nodes:
salt-key | grep $cluster_domain | \
grep -v $salt_master_hostname | tr '\n' ' '
The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.
The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master.
For each target node, get the list of installed applications:
salt <node_name> pillar.items __reclass__:applications --out=json
Match the lists of updatable OpenStack components with the lists of installed applications for each target node.
During the update, the applications running on the target nodes use the KeystoneRC metadata. To guarantee that the KeystoneRC metadata is exported to mine, verify that you apply the keystone.upgrade.pre formula to the keystone:client:enabled node:
salt -C 'I@keystone:client:enabled' state.sls keystone.upgrade.pre
Apply the following states to each target node for each installed application in the strict order of priority:
salt <node_name> state.apply <component_name>.upgrade.pre
salt <node_name> state.apply <component_name>.upgrade.verify
For example, for Nova installed on the cmp01 compute node, run:
salt cmp01 state.apply nova.upgrade.pre
salt cmp01 state.apply nova.upgrade.verify
Note
On clouds of medium and large sizes, you may want to automate this step. Use the following script as an example of possible automation.
#!/bin/bash
# List of formulas that implement the upgrade API, sorted by priority
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
#List of applications on the given node
node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
for component in $all_formulas ; do
if [[ " ${node_applications[*]} " == *"$component"* ]]; then
salt $node state.apply $component.upgrade.pre
salt $node state.apply $component.upgrade.verify
fi
done
done
Add the testing workloads to each compute host, then monitor and verify the workability of the kvm, ctl, cmp, and other nodes.
(Optional) Back up the OpenStack databases as described in Back up and restore a MySQL database.
Adjust the cluster model:
Include the upgrade pipeline job to DriveTrain:
Add the following lines to cluster/cicd/control/leader.yml:
Caution
If your MCP OpenStack deployment includes the
OpenContrail component, do not specify the
system.jenkins.client.job.deploy.update.upgrade_ovs_gateway
class.
classes:
- system.jenkins.client.job.deploy.update.upgrade
- system.jenkins.client.job.deploy.update.upgrade_ovs_gateway
- system.jenkins.client.job.deploy.update.upgrade_compute
Apply the jenkins.client state on the Jenkins nodes:
salt -C 'I@jenkins:client' state.sls jenkins.client
Set the parameters in classes/cluster/<cluster_name>/infra/init.yml as follows:
parameters:
_param:
openstack_upgrade_enabled: true
(Optional) The upgrade pillars of all supported OpenStack applications are already included on the Reclass system level. In case of a non-standard setup, check the list of the OpenStack applications on each node and add the upgrade pillars for the OpenStack applications that do not contain them. For example:
<app>:
upgrade:
enabled: ${_param:openstack_upgrade_enabled}
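For example, assuming a node runs the Cinder application and its class lacks the upgrade pillar, the addition to the cluster model could look as follows (cinder is used only as an illustration; substitute the actual application name):

```yaml
cinder:
  upgrade:
    enabled: ${_param:openstack_upgrade_enabled}
```

Referencing the openstack_upgrade_enabled parameter rather than hard-coding true lets you toggle the upgrade mode for all applications from one place in infra/init.yml.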
Note
On clouds of medium and large sizes, you may want to automate this step. To obtain the list of the OpenStack applications running on a node, use the following script.
#!/bin/bash
# List of formulas that implement the upgrade API, sorted by priority
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
#List of applications on the given node
node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
node_openstack_app=" "
for component in $all_formulas ; do
if [[ " ${node_applications[*]} " == *"$component"* ]]; then
node_openstack_app="$node_openstack_app $component"
fi
done
echo "$node : $node_openstack_app"
done
Refresh pillars:
salt '*' saltutil.refresh_pillar
Prepare the target nodes for the update:
Get the list of all updatable OpenStack components:
salt-call config.get orchestration:upgrade:applications --out=json
Rank the components from the output by priority.
Get the list of all target nodes:
salt-key | grep $cluster_domain | \
grep -v $salt_master_hostname | tr '\n' ' '
The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.
The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master.
For each target node, get the list of installed applications:
salt <node_name> pillar.items __reclass__:applications --out=json
Match the lists of updatable OpenStack components with the lists of installed applications for each target node.
Apply the following states to each target node for each installed application in strict order of priority:
salt <node_name> state.apply <component_name>.upgrade.pre
Note
On clouds of medium and large sizes, you may want to automate this step. Use the following script as an example of possible automation.
#!/bin/bash
# List of formulas that implement the upgrade API
all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
# List of nodes in the cloud
list_nodes=$(salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' ')
for node in $list_nodes; do
#List of applications on the given node
node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
for component in $all_formulas ; do
if [[ " ${node_applications[*]} " == *"$component"* ]]; then
salt $node state.apply $component.upgrade.pre
fi
done
done
Apply the linux.system.repo state on the target nodes:
salt <node_name> state.apply linux.system.repo
Proceed to Update the OpenStack control plane.