Upgrade to MCP release version 2019.2.0

This section describes how to upgrade the MCP release version of your deployment from Build ID 2018.11.0 to 2019.2.0.

To upgrade to MCP release version 2019.2.0:

  1. Verify that you have completed the steps described in Prerequisites.

  2. Log in to the Salt Master node.

  3. From the /srv/salt/reclass/classes/cluster directory of your Reclass model, verify that the correct Salt formulas and OpenContrail repositories are enabled in your deployment:

    Note

    Starting from the MCP 2019.2.0 version, the Salt formulas and OpenContrail repositories have been moved from http://apt.mirantis.com to http://mirror.mirantis.com.

    grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.salt'
    

    If matches are found, make the following replacements on the cluster level of the Reclass model (see the sketch after this list):

    • Replace system.linux.system.repo.mcp.salt with system.linux.system.repo.mcp.apt_mirantis.salt-formulas
    • Replace system.linux.system.repo.mcp.updates with system.linux.system.repo.mcp.apt_mirantis.update
    • Replace system.linux.system.repo.mcp.contrail with system.linux.system.repo.mcp.apt_mirantis.contrail
    • Replace system.linux.system.repo.mcp.extra with system.linux.system.repo.mcp.apt_mirantis.extra
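
    If you prefer to apply the replacements in bulk, a sed-based sketch such as the following can help. The grep patterns treat dots as any character and may over-match, so review git diff before committing:

    cd /srv/salt/reclass/classes/cluster
    grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.salt' . | \
      xargs --no-run-if-empty sed -i 's/system\.linux\.system\.repo\.mcp\.salt\b/system.linux.system.repo.mcp.apt_mirantis.salt-formulas/g'
    grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.updates' . | \
      xargs --no-run-if-empty sed -i 's/system\.linux\.system\.repo\.mcp\.updates\b/system.linux.system.repo.mcp.apt_mirantis.update/g'
    grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.contrail' . | \
      xargs --no-run-if-empty sed -i 's/system\.linux\.system\.repo\.mcp\.contrail\b/system.linux.system.repo.mcp.apt_mirantis.contrail/g'
    grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.extra' . | \
      xargs --no-run-if-empty sed -i 's/system\.linux\.system\.repo\.mcp\.extra\b/system.linux.system.repo.mcp.apt_mirantis.extra/g'
    # Review the result before committing:
    git diff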
  4. Depending on your cluster configuration, add the update repositories for the required components. The list of update repositories includes cassandra, ceph, contrail, docker, elastic, extra, kubernetes_extra, openstack, percona, salt-formulas, saltstack, and ubuntu.

    For example, to add the update repository for OpenStack:

    1. Change the directory to /srv/salt/reclass/classes/cluster.

    2. Verify whether the OpenStack component is present in the model:

      grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.apt_mirantis.openstack'
      
    3. If matches are found, include the update repository in your Reclass model by editing the files that include these matches:

      classes:
      - system.linux.system.repo.mcp.apt_mirantis.openstack
      - system.linux.system.repo.mcp.apt_mirantis.update.openstack
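
      A quick way to spot files that still miss the update repository class is a helper loop like the following (a sketch; substitute openstack with any other component from the list above):

      cd /srv/salt/reclass/classes/cluster
      # Print files that include the component repository class but not its update repository class yet:
      for f in $(grep -r --exclude-dir=aptly -l 'system.linux.system.repo.mcp.apt_mirantis.openstack' .); do
        grep -q 'system.linux.system.repo.mcp.apt_mirantis.update.openstack' "$f" || echo "add the update repository class to: $f"
      done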
      
  5. Open the project Git repository with your Reclass model on the cluster level.

  6. In /infra/backup/client_mysql.yml, verify that the following parameters are defined:

    parameters:
      xtrabackup:
        client:
          cron: false
    
  7. In /infra/backup/server.yml, verify that the following parameters are defined:

    parameters:
      xtrabackup:
        server:
          cron: false
      # if ceph is enabled
      ceph:
        backup:
          cron: false
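
    After you define these parameters, you can spot-check that the cron flags are rendered as expected, for example (a sketch; the pillar paths follow the definitions above):

    salt -C 'I@xtrabackup:client or I@xtrabackup:server' saltutil.refresh_pillar
    salt -C 'I@xtrabackup:client' pillar.get xtrabackup:client:cron
    salt -C 'I@xtrabackup:server' pillar.get xtrabackup:server:cron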
    
  8. If any physical node in your cluster has LVM physical volumes configured, for example, for the root partition, define these volumes in your Reclass model.

    You can verify where LVM is configured using the pvdisplay or lvm pvs command. For example, run the following command from the Salt Master node:

    salt '*' cmd.run 'lvm pvs'
    

    For example, if one of your physical nodes has /dev/sda1 for a physical volume of a volume group vgroot and a logical volume lvroot (/dev/vgroot/lvroot) mounted as /, add the following pillar data for this node:

    parameters:
      linux:
        storage:
          lvm:
            vgroot:
              enabled: true
              devices: /dev/sda1
    

    For example, if all your compute nodes have an LVM physical volume configured, add the above pillar data to /openstack/compute/init.yml.

    Warning

    You must add the above pillar data for all nodes with all LVM volume groups configured. Otherwise, the LVM configuration will be updated improperly during the upgrade and a node will be unable to boot from the logical volume.
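
    For example, to map each physical volume to its volume group across all nodes and then confirm that the new pillar data is rendered, you can run the following from the Salt Master node (a sketch):

    # List physical volumes together with their volume groups on every node:
    salt '*' cmd.run 'lvm pvs --noheadings -o pv_name,vg_name'
    # After updating the model, refresh pillars and check the rendered LVM configuration:
    salt '*' saltutil.refresh_pillar
    salt -C 'I@linux:storage:lvm' pillar.get linux:storage:lvm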

  9. If OpenContrail 3.2 is used, verify that the following configurations are present in your Reclass model:

    • In the /infra/backup/client_zookeeper.yml and /infra/backup/server.yml files:

      parameters:
        zookeeper:
          backup:
            cron: false
      
    • In the /infra/backup/client_cassandra.yml and /infra/backup/server.yml files:

      parameters:
        cassandra:
          backup:
            cron: false
      

    Caution

    The OpenContrail 4.x update is covered in a separate procedure. For details, see: Update the OpenContrail 4.x nodes.
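
    For OpenContrail 3.2 clusters, you can verify that the pillar data above is rendered as expected, for example (a sketch; the pillar paths follow the definitions above):

      salt '*' saltutil.refresh_pillar
      salt -C 'I@zookeeper:backup:client or I@zookeeper:backup:server' pillar.get zookeeper:backup:cron
      salt -C 'I@cassandra:backup:client or I@cassandra:backup:server' pillar.get cassandra:backup:cron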

  10. If OpenStack Telemetry is used, switch Redis to use password authentication:

    Warning

    During this procedure, a short Tenant Telemetry downtime occurs.

    1. In /infra/secrets.yml, add a password for Redis:

      parameters:
        _param:
          openstack_telemetry_redis_password_generated: <very_strong_password>
      
      • The password can contain uppercase and lowercase letters of the Latin alphabet (A-Z, a-z) and digits (0-9).
      • The recommended password length is 32 characters.

      Warning

      Since the key feature of Redis is high performance, an attacker can try many passwords per second. Therefore, create a very strong password to prevent an information leak.
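
      For example, you can generate a 32-character alphanumeric password as follows (a sketch; assumes the openssl and coreutils tools are available, which is normally the case on the Salt Master node):

      openssl rand -base64 48 | tr -dc 'A-Za-z0-9' | head -c 32; echo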

    2. In /openstack/init.yml, add the following parameter:

      parameters:
        _param:
          openstack_telemetry_redis_password: ${_param:openstack_telemetry_redis_password_generated}
      
    3. In /openstack/telemetry.yml, update the following definitions:

      • Update the openstack_telemetry_redis_url parameter value. For example:

        parameters:
          _param:
            openstack_telemetry_redis_url: redis://openstack:${_param:openstack_telemetry_redis_password}@${_param:redis_sentinel_node01_address}:26379?sentinel=master_1&sentinel_fallback=${_param:redis_sentinel_node02_address}:26379&sentinel_fallback=${_param:redis_sentinel_node03_address}:26379
        
      • Add the password parameter to the following section:

        redis:
          cluster:
            ...
            password: ${_param:openstack_telemetry_redis_password}
            ...
        
    4. Refresh pillars:

      salt 'mdb*' saltutil.pillar_refresh
      
    5. Apply the changes:

      • For the Redis cluster:

        Warning

        After you apply the Redis states, the Tenant Telemetry services will not be able to connect to Redis until you apply the states for the Tenant Telemetry components below.

        salt -C 'I@redis:cluster:role:master' state.sls redis
        salt -C 'I@redis:server' state.sls redis
        
      • For the Tenant Telemetry components:

        salt -C 'I@gnocchi:server' state.sls gnocchi.server
        salt -C 'I@ceilometer:server' state.sls ceilometer.server
        salt -C 'I@aodh:server' state.sls aodh.server
        

      Note

      Once you apply the Salt states above, Tenant Telemetry will become fully operational again.
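
      Optionally, spot-check that Redis now requires authentication (a sketch; assumes redis-cli is installed on the Redis nodes and that Redis accepts connections with the default redis-cli host and port options, otherwise adjust -h and -p):

      salt -C 'I@redis:cluster:role:master' cmd.run 'redis-cli ping'
      # Expected: a NOAUTH error, because no password was supplied.
      salt -C 'I@redis:cluster:role:master' cmd.run 'redis-cli -a <very_strong_password> ping'
      # Expected: PONG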

  11. In /cicd/control/leader.yml, verify that the following class is present:

    classes:
    - system.jenkins.client.job.deploy.update
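
    You can quickly confirm that the class is present, for example (the path contains your cluster name, so adjust it accordingly):

    grep -n 'system.jenkins.client.job.deploy.update' /srv/salt/reclass/classes/cluster/*/cicd/control/leader.yml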
    
  12. Commit the changes to your local repository.
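
    For example (a sketch; the repository layout and your Git workflow, such as pushing the changes to Gerrit for review, may differ):

    cd /srv/salt/reclass/classes/cluster
    git add -A
    git commit -m "Prepare cluster model for upgrade to 2019.2.0"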

  13. Log in to the Jenkins web UI.

  14. Verify that you do not have any unapproved scripts in Jenkins:

    1. Navigate to Manage Jenkins > In-process script approval.
    2. Approve pending scripts if any.
  15. Run the Deploy - upgrade MCP DriveTrain pipeline in the Jenkins web UI specifying the following parameters as required:

    Deploy - upgrade MCP DriveTrain pipeline parameters:

    SALT_MASTER_URL
      The Salt Master API URL string.

    SALT_MASTER_CREDENTIALS
      The Jenkins credentials to access the Salt Master API.

    BATCH_SIZE
      Added since the 2019.2.6 maintenance update. The batch size for Salt commands targeted at a large number of nodes. Set to an absolute number of nodes (integer) or a percentage, for example, 20 or 20%. For details, see MCP Deployment Guide: Configure Salt Master threads and batching.

    OS_DIST_UPGRADE
      Applicable only for maintenance updates starting from 2019.2.2 or later to 2019.2.8 and later. Set to true to upgrade the system packages, including the kernel, using apt-get dist-upgrade on the cid* nodes.

      Note

      Prior to the 2019.2.8 maintenance update, specify this parameter manually in the DRIVE_TRAIN_PARAMS field. For later versions, the parameter is predefined and set to false by default.

    OS_UPGRADE
      Applicable only for maintenance updates starting from 2019.2.2 or later to 2019.2.8 and later. Set to true to upgrade all installed applications using apt-get upgrade on the cid* nodes.

      Note

      Prior to the 2019.2.8 maintenance update, specify this parameter manually in the DRIVE_TRAIN_PARAMS field. For later versions, the parameter is predefined and set to false by default.

    APPLY_MODEL_WORKAROUNDS
      Applicable only for maintenance updates to 2019.2.8 and later. Recommended. Select to apply the cluster model workarounds automatically, unless you manually added some patches to the model before the update.

    UPGRADE_SALTSTACK
      Upgrade the SaltStack packages (salt-master, salt-api, salt-common) on the Salt Master node and the salt-minion package on all nodes.

    UPDATE_CLUSTER_MODEL
      Automatically apply the cluster model changes required for the target MCP version:

      1. Replace the mcp_version parameter value with TARGET_MCP_VERSION on the cluster level of the Reclass model.
      2. Apply other backward-incompatible cluster model updates.

    UPDATE_PIPELINES
      Automatically update the pipeline-library and mk-pipelines repositories from the upstream or local mirror into Gerrit.

    UPDATE_LOCAL_REPOS
      Deprecated since MCP 2019.2.0. Update the local repositories on the Aptly node, if applicable.

    GIT_REFSPEC
      The Git version of the Reclass system to use (branch or tag). Must match TARGET_MCP_VERSION.

    MK_PIPELINES_REFSPEC
      The Git version of mk/mk-pipelines to use (branch or tag). Must match TARGET_MCP_VERSION.

    PIPELINE_TIMEOUT
      The time for the Jenkins job to complete, set to 12 hours by default. If this time is exceeded, the Jenkins job is aborted.

    TARGET_MCP_VERSION
      The target MCP version that will correspond to the mcp_version parameter value in your Reclass model. Select from the following options:

      • If you upgrade DriveTrain to the latest major version, set the corresponding latest available Build ID.
      • To apply maintenance updates to a major MCP release version, select from the following options:
        • Specify your current MCP version. For example, if your current MCP version is 2019.2.0, set 2019.2.0.
        • Starting from 2019.2.8, alternatively, set the TARGET_MCP_VERSION, MK_PIPELINES_REFSPEC, and GIT_REFSPEC parameters to the target maintenance update version. For example, to update from 2019.2.7 to 2019.2.8, specify 2019.2.8.

    The pipeline workflow:

    1. All required updates and fixes for upgrade are applied.
    2. Packages for Salt formulas and Reclass are updated, and the cluster model is updated.
    3. The following DriveTrain components are updated:
      • Local repositories if needed
      • Salt Master and minions
      • Jenkins
      • Docker
  16. Optional. To upgrade system packages on the Salt Master node, select from the following options:

    • If you are upgrading DriveTrain to the latest major version, run the following commands one by one from the cfg01 node:

      apt-get update
      apt-get dist-upgrade
      
    • If you are applying maintenance updates to a major MCP version, run the following commands one by one from the cfg01 node:

      apt-get update
      apt-get upgrade
      
  17. Proceed to Upgrade GlusterFS.