This section describes how to upgrade the MCP release version of your deployment from Build ID 2018.11.0 to 2019.2.0.
To upgrade to MCP release version 2019.2.0:
Verify that you have completed the steps described in Prerequisites.
Log in to the Salt Master node.
From the /srv/salt/reclass/classes/cluster directory of your Reclass model, verify that the correct Salt formulas and OpenContrail repositories are enabled in your deployment:
Note

Starting from the 2019.2.0 MCP version, the Salt formulas and OpenContrail repositories are moved from http://apt.mirantis.com to http://mirror.mirantis.com.
grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.salt'
If matches are found, replace the classes accordingly on the cluster Reclass level:

Replace | With
---|---
system.linux.system.repo.mcp.salt | system.linux.system.repo.mcp.apt_mirantis.salt-formulas
system.linux.system.repo.mcp.updates | system.linux.system.repo.mcp.apt_mirantis.update
system.linux.system.repo.mcp.contrail | system.linux.system.repo.mcp.apt_mirantis.contrail
system.linux.system.repo.mcp.extra | system.linux.system.repo.mcp.apt_mirantis.extra
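For example, the first replacement can be scripted. The following is an illustrative sketch that assumes GNU sed and that each class occupies its own line in the .yml files; review the resulting diff before committing:

cd /srv/salt/reclass/classes/cluster
# Rewrite the old Salt formulas repository class in every file that
# references it; the trailing anchor ($) avoids touching longer class names
grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.salt' . | \
  xargs sed -i 's/system\.linux\.system\.repo\.mcp\.salt$/system.linux.system.repo.mcp.apt_mirantis.salt-formulas/'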
Depending on your cluster configuration, add the update repositories for the required components. The list of update repositories includes cassandra, ceph, contrail, docker, elastic, extra, kubernetes_extra, openstack, percona, salt-formulas, saltstack, and ubuntu.
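To decide which update repositories you need, you can first check which of these components your model already references. The following loop is an illustrative sketch to run from the /srv/salt/reclass/classes/cluster directory; the exact class name per component is an assumption and may differ:

for c in cassandra ceph contrail docker elastic extra kubernetes_extra \
         openstack percona salt-formulas saltstack ubuntu; do
  # Print each component whose apt_mirantis repository class appears in the model
  grep -rq --exclude-dir=aptly "repo.mcp.apt_mirantis.${c}" . && echo "${c}: present"
done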
For example, to add the update repository for OpenStack:
Change the directory to /srv/salt/reclass/classes/cluster.
Verify whether the OpenStack component is present in the model:
grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.apt_mirantis.openstack'
If matches are found, add the update repository to your Reclass model by editing the files that contain these matches:
classes:
- system.linux.system.repo.mcp.apt_mirantis.openstack
- system.linux.system.repo.mcp.apt_mirantis.update.openstack
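After the change is applied and pillars are refreshed, you can optionally confirm that the repository definitions render on the target nodes. A minimal sketch, assuming the standard pillar layout of the linux formula:

# Refresh pillars, then inspect the rendered repository definitions
salt -C 'I@linux:system:repo' saltutil.refresh_pillar
salt -C 'I@linux:system:repo' pillar.get linux:system:repo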
Open the project Git repository with your Reclass model on the cluster level.
In /infra/backup/client_mysql.yml, verify that the following parameters are defined:

parameters:
  xtrabackup:
    client:
      cron: false
In /infra/backup/server.yml, verify that the following parameters are defined:

parameters:
  xtrabackup:
    server:
      cron: false
  # if ceph is enabled
  ceph:
    backup:
      cron: false
If any physical node in your cluster has LVM physical volumes configured, for example, for the root partition, define these volumes in your Reclass model. You can verify where LVM is configured using the pvdisplay or lvm pvs command. For example, run the following command from the Salt Master node:

salt '*' cmd.run 'lvm pvs'
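The output (the values below are illustrative only) lists each physical volume and its volume group, which map directly onto the pillar data in the example that follows:

PV         VG     Fmt  Attr PSize   PFree
/dev/sda1  vgroot lvm2 a--  100.00g 10.00g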
For example, if one of your physical nodes has /dev/sda1 as a physical volume of the volume group vgroot with a logical volume lvroot (/dev/vgroot/lvroot) mounted as /, add the following pillar data for this node:
parameters:
  linux:
    storage:
      lvm:
        vgroot:
          enabled: true
          devices: /dev/sda1
For example, if all your compute nodes have an LVM physical volume configured, add the above pillar data to /openstack/compute/init.yml.
Warning
You must add the above pillar data for every node that has LVM configured, covering all of its volume groups. Otherwise, the LVM configuration will be updated improperly during the upgrade and the node will be unable to boot from the logical volume.
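To cross-check the model against the actual node state before the upgrade, you can compare the pillar data with the physical volumes that LVM reports. An illustrative check from the Salt Master node:

# Show the LVM pillar data defined in the model...
salt -C 'I@linux:storage:lvm' pillar.get linux:storage:lvm
# ...and the physical volumes actually present on the matching nodes
salt -C 'I@linux:storage:lvm' cmd.run 'lvm pvs'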
If OpenContrail 3.2 is used, verify that the following configurations are present in your Reclass model:
In the /infra/backup/client_zookeeper.yml and /infra/backup/server.yml files:

parameters:
  zookeeper:
    backup:
      cron: false
In the /infra/backup/client_cassandra.yml and /infra/backup/server.yml files:

parameters:
  cassandra:
    backup:
      cron: false
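To quickly verify all of the backup cron settings above, you can grep the corresponding files from the cluster level of your model (the paths are taken from the file names above):

grep -n 'cron:' infra/backup/client_mysql.yml infra/backup/client_zookeeper.yml infra/backup/client_cassandra.yml infra/backup/server.yml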
Caution
The OpenContrail 4.x update is covered in a separate procedure. For details, see Update the OpenContrail 4.x nodes.
If OpenStack Telemetry is used, switch Redis to use password authentication:
Warning
During this procedure, a short Tenant Telemetry downtime occurs.
In /infra/secrets.yml, add a password for Redis:

parameters:
  _param:
    openstack_telemetry_redis_password_generated: <very_strong_password>
Warning
Since the key feature of Redis is high performance, an attacker can try many passwords per second. Therefore, create a very strong password to prevent an information leak.
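For example, you can generate a long random password with OpenSSL (one possible approach):

# Generate a 32-byte random password, base64-encoded
openssl rand -base64 32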
In /openstack/init.yml, add the following parameter:

parameters:
  _param:
    openstack_telemetry_redis_password: ${_param:openstack_telemetry_redis_password_generated}
In /openstack/telemetry.yml, update the following definitions:

Update the openstack_telemetry_redis_url parameter value. For example:

parameters:
  _param:
    openstack_telemetry_redis_url: redis://openstack:${_param:openstack_telemetry_redis_password}@${_param:redis_sentinel_node01_address}:26379?sentinel=master_1&sentinel_fallback=${_param:redis_sentinel_node02_address}:26379&sentinel_fallback=${_param:redis_sentinel_node03_address}:26379
Add the password parameter to the following section:

redis:
  cluster:
    ...
    password: ${_param:openstack_telemetry_redis_password}
    ...
Refresh pillars:
salt 'mdb*' saltutil.refresh_pillar
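Optionally, verify that the new parameter is rendered for the telemetry nodes. Note that this illustrative check prints the secret to the console:

salt 'mdb*' pillar.get redis:cluster:password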
Apply the changes:
For the Redis cluster:
Warning
After applying the Redis states, the Tenant Telemetry services will not be able to connect to Redis.
salt -C 'I@redis:cluster:role:master' state.sls redis
salt -C 'I@redis:server' state.sls redis
For the Tenant Telemetry components:
salt -C 'I@gnocchi:server' state.sls gnocchi.server
salt -C 'I@ceilometer:server' state.sls ceilometer.server
salt -C 'I@aodh:server' state.sls aodh.server
Note
Once you apply the Salt states above, Tenant Telemetry will become fully operational again.
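To confirm that Redis now enforces authentication, you can run an unauthenticated ping against the master node; with a password set, Redis is expected to reject it with a NOAUTH error (illustrative check):

salt -C 'I@redis:cluster:role:master' cmd.run 'redis-cli ping'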
In /cicd/control/leader.yml, verify that the following class is present:
classes:
- system.jenkins.client.job.deploy.update
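For example, you can verify this from the cluster level of your model (the path follows the file layout used above):

grep -n 'system.jenkins.client.job.deploy.update' cicd/control/leader.yml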
Commit the changes to your local repository.
Log in to the Jenkins web UI.
Verify that you do not have any unapproved scripts in Jenkins (Manage Jenkins > In-process Script Approval).
Run the Deploy - upgrade MCP DriveTrain pipeline in the Jenkins web UI specifying the following parameters as required:
Parameter | Description
---|---
SALT_MASTER_URL | The Salt Master API URL string.
SALT_MASTER_CREDENTIALS | The Jenkins credentials to access the Salt Master API.
BATCH_SIZE (added since the 2019.2.6 update) | The batch size for Salt commands targeted at a large number of nodes. Set to an absolute number of nodes (integer) or a percentage, for example, 20 or 20%. For details, see MCP Deployment Guide: Configure Salt Master threads and batching.
OS_DIST_UPGRADE (for updates starting from 2019.2.2 to 2019.2.8 or later) | Applicable only for maintenance updates starting from 2019.2.2 or later to 2019.2.8 and later. Set to true to upgrade the system packages, including the kernel, using apt-get dist-upgrade. Note: Prior to the maintenance update 2019.2.8, specify this parameter manually in the job settings.
OS_UPGRADE (for updates starting from 2019.2.2 to 2019.2.8 or later) | Applicable only for maintenance updates starting from 2019.2.2 or later to 2019.2.8 and later. Set to true to upgrade all installed applications using apt-get upgrade. Note: Prior to the maintenance update 2019.2.8, specify this parameter manually in the job settings.
APPLY_MODEL_WORKAROUNDS (for maintenance updates to 2019.2.8 or later) | Applicable only for maintenance updates to 2019.2.8 and later. Recommended. Select to apply the cluster model workarounds automatically unless you manually added some patches to the model before the update.
UPGRADE_SALTSTACK | Upgrade the SaltStack packages (salt-master, salt-api, salt-common) on the Salt Master node and the salt-minion package on all nodes.
UPDATE_CLUSTER_MODEL | Automatically apply the cluster model changes required for the target MCP version.
UPDATE_PIPELINES | Automatically update the pipeline-library and mk-pipelines repositories from the upstream or local mirror into Gerrit.
UPDATE_LOCAL_REPOS (deprecated since MCP 2019.2.0) | Update the local repositories on the Aptly node if applicable.
GIT_REFSPEC | The Git version of the Reclass system to use (branch or tag). Must match TARGET_MCP_VERSION.
MK_PIPELINES_REFSPEC | The Git version of mk/mk-pipelines to use (branch or tag). Must match TARGET_MCP_VERSION.
PIPELINE_TIMEOUT | The time for the Jenkins job to complete, set to 12 hours by default. If the time is exceeded, the Jenkins job is aborted.
TARGET_MCP_VERSION | The target version of MCP that will correspond to the mcp_version parameter of the cluster model, for example, 2019.2.0.
The pipeline workflow:
Optional. To upgrade system packages on the Salt Master node, select from the following options:
If you are upgrading DriveTrain to the latest major version, run the following commands one by one from the cfg01 node:
apt-get update
apt-get dist-upgrade
If you are applying maintenance updates to a major MCP version, run the following commands one by one from the cfg01 node:
apt-get update
apt-get upgrade
Proceed to Upgrade GlusterFS.