Perform the pre-upgrade activities

The pre-upgrade stage includes the activities that do not affect the operation of the currently running OpenStack version, as well as the creation of backups.

To prepare your OpenStack deployment for the upgrade:

  1. Perform the steps described in Configure load balancing for Horizon.

  2. On one of the controller nodes, perform the online database migrations for the following services:

    Note

    The database migrations can be time-consuming and create a high load on CPU and RAM. We recommend performing the migrations in batches.

    • Nova:

      nova-manage db online_data_migrations
      
    • Cinder:

      cinder-manage db online_data_migrations
      
    • Ironic:

      ironic-dbsync online_data_migrations
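
    To run the migrations in batches, as the note above recommends, you can loop over a bounded batch size. This is a sketch only: it assumes the --max-count option (available for nova-manage; the other services use a similar but not always identically spelled option, so check each command's --help) and exit codes where 1 means a batch completed and more rows may remain, and 0 means no further migrations are possible.

```shell
# Hedged sketch: run online data migrations in bounded batches.
# Assumes the wrapped command exits 1 while migratable rows remain
# and 0 once no further migrations are possible.
run_in_batches() {
  while "$@" --max-count 50; rc=$?; [ "$rc" -eq 1 ]; do
    : # a batch completed and more rows may remain; run another one
  done
  return "$rc"
}

# Example invocation on a controller node:
# run_in_batches nova-manage db online_data_migrations
```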
      
  3. Prepare the target nodes for the upgrade.

    Note

    On medium and large clouds, you may want to automate the steps below that prepare the target nodes for the upgrade. Use the following script as an example of possible automation.

    #!/bin/bash
    #List of formulas that implement the upgrade API, sorted by priority
    all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
    jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
    sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
    #List of nodes in cloud
    list_nodes=`salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' '`
    for node in $list_nodes; do
      #List of applications on the given node
      node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
      jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
      for component in $all_formulas ; do
        if [[ " ${node_applications[*]} " == *"$component"* ]]; then
          salt $node state.apply $component.upgrade.pre
          salt $node state.apply $component.upgrade.verify
        fi
      done
    done
    
    1. Log in to the Salt Master node.

    2. Verify that the OpenStack cloud configuration file is present on the controller nodes:

      salt -C 'I@keystone:client:os_client_config' state.apply keystone.client.os_client_config
      
    3. Get the list of all upgradable OpenStack components:

      salt-call config.get orchestration:upgrade:applications --out=json
      

      Example of system response:

      {
          "<model_of_salt_master_name>": {
              "nova": {
                  "priority": 1100
              },
              "heat": {
                  "priority": 1250
              },
              "keystone": {
                  "priority": 1000
              },
              "horizon": {
                  "priority": 1800
              },
              "cinder": {
                  "priority": 1200
              },
              "glance": {
                  "priority": 1050
              },
              "neutron": {
                  "priority": 1150
              },
              "designate": {
                  "priority": 1300
              }
          }
      }
      
    4. Rank the components from the output by priority. For example:

      keystone
      glance
      nova
      neutron
      cinder
      heat
      designate
      horizon
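
      Instead of ranking by hand, you can sort the salt-call output with jq. A sketch against the example response above (the cfg01 key name and the /tmp path are illustrative):

```shell
# Sort components by their priority value; the sample JSON mirrors
# the example system response (the "cfg01" key is hypothetical).
cat > /tmp/apps.json <<'EOF'
{
  "cfg01": {
    "nova": {"priority": 1100},
    "heat": {"priority": 1250},
    "keystone": {"priority": 1000},
    "horizon": {"priority": 1800},
    "cinder": {"priority": 1200},
    "glance": {"priority": 1050},
    "neutron": {"priority": 1150},
    "designate": {"priority": 1300}
  }
}
EOF
jq -r '.[] | to_entries | sort_by(.value.priority) | .[].key' /tmp/apps.json
```

      This prints the components in the same order as the ranked list above, keystone first and horizon last.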
      
    5. Get the list of all target nodes:

      salt-key | grep $cluster_domain | \
      grep -v $salt_master_hostname | tr '\n' ' '
      

      The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.

      The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master
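
      The filter above can be sanity-checked against sample data before running it on the real key list. A sketch with hypothetical hostnames and cluster_domain set to example.local:

```shell
# Simulate the salt-key filtering: keep minions in the cluster
# domain and drop the Salt Master (hostnames are hypothetical).
cluster_domain="example.local"
salt_master_hostname="cfg01"
printf '%s\n' cfg01.example.local ctl01.example.local \
  cmp01.example.local gtw01.example.local |
  grep "$cluster_domain" | grep -v "$salt_master_hostname" | tr '\n' ' '
```

      Only the target nodes remain; the cfg01 Salt Master is excluded.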

    6. For each target node, get the list of the installed applications:

      salt <node_name> pillar.items __reclass__:applications --out=json
      
    7. Match the lists of upgradable OpenStack components with the lists of installed applications for each target node.
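
      The matching can be done the same way the automation script above does it, by intersecting the two lists while preserving priority order. A sketch with illustrative lists:

```shell
# Intersect the ranked upgradable components with one node's
# installed applications; both lists here are hypothetical.
all_formulas="keystone glance nova neutron cinder heat designate horizon"
node_applications="linux ntp salt neutron nova"
for component in $all_formulas; do
  case " $node_applications " in
    *" $component "*) echo "$component" ;;
  esac
done
```

      For such a compute-like node, this prints nova and then neutron, already in upgrade priority order.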

    8. If the public endpoint for the Nova placement API has not been created yet:

      1. Add the following class to the Reclass model in the classes/cluster/<cluster_name>/openstack/proxy.yml file:

        classes:
        
        - system.nginx.server.proxy.openstack.placement
        
      2. Refresh pillars on the proxy nodes:

        salt 'prx*' saltutil.refresh_pillar
        
      3. Apply the nginx state on the proxy nodes:

        salt 'prx*' state.sls nginx
        
    9. Apply the following states to each target node for each installed application in strict order of priority:

      Warning

      During the upgrade, the applications running on the target nodes use the KeystoneRC metadata. To guarantee that the KeystoneRC metadata is exported to the Salt Mine, verify that you apply the keystone.upgrade.pre state to the keystone:client:enabled node:

      salt -C 'I@keystone:client:enabled' state.sls keystone.upgrade.pre
      
      salt <node_name> state.apply <component_name>.upgrade.pre
      salt <node_name> state.apply <component_name>.upgrade.verify
      

      For example, for Nova installed on the cmp01 compute node, run:

      salt cmp01 state.apply nova.upgrade.pre
      salt cmp01 state.apply nova.upgrade.verify
      
  4. Add the testing workloads to each compute host and to monitoring, then verify the following:

    • The cloud services are monitored as expected.

    • There are free resources (disk, RAM, CPU) on the kvm, ctl, cmp, and other nodes.

  5. Back up the OpenStack databases as described in Back up and restore a MySQL database.

  6. If Octavia is enabled, move the Octavia certificates from the gtw01 node to the Salt Master node.

  7. Adjust the cluster model for the upgrade:

    1. Include the upgrade pipeline jobs in DriveTrain:

      1. Add the following lines to classes/cluster/<cluster_name>/cicd/control/leader.yml:

        Caution

        If your MCP OpenStack deployment includes the OpenContrail component, do not specify the system.jenkins.client.job.deploy.update.upgrade_ovs_gateway class.

        classes:
         - system.jenkins.client.job.deploy.update.upgrade
         - system.jenkins.client.job.deploy.update.upgrade_ovs_gateway
         - system.jenkins.client.job.deploy.update.upgrade_compute
        
      2. Apply the jenkins.client state on the Jenkins nodes:

        salt -C 'I@jenkins:client' state.sls jenkins.client
        
    2. Set the parameters in classes/cluster/<cluster_name>/infra/init.yml as follows:

      parameters:
        _param:
          openstack_version: queens
          openstack_old_version: pike
          openstack_upgrade_enabled: true
      
    3. (Optional) To upgrade Gnocchi to the version supported in Queens, modify the classes/cluster/<cluster_name>/openstack/init.yml file:

      1. Define the following parameters:

        parameters:
          _param:
            gnocchi_version: 4.2
            gnocchi_old_version: 4.0
        
      2. Update the Redis server version from 3.0 to 5.0:

        parameters:
          redis:
            server:
              version: 5.0
        
    4. (Optional) The upgrade pillars of all supported OpenStack applications are already included in the system level of Reclass. For a non-standard setup, check the list of OpenStack applications on each node and add the upgrade pillars to the OpenStack applications that do not contain them. For example:

      <app>:
        upgrade:
          enabled: ${_param:openstack_upgrade_enabled}
          old_release: ${_param:openstack_old_version}
          new_release: ${_param:openstack_version}
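
      With the openstack_upgrade_enabled, openstack_version, and openstack_old_version parameters set as in the infra/init.yml example earlier in this section, such a pillar renders as follows (Nova shown purely as an illustration):

```yaml
nova:
  upgrade:
    enabled: true
    old_release: pike
    new_release: queens
```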
      

      Note

      To obtain the list of the OpenStack applications running on a node, use the following script.

      #!/bin/bash
      #List of formulas that implement the upgrade API, sorted by priority
      all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
      jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
      sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
      #List of nodes in cloud
      list_nodes=`salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' '`
      for node in $list_nodes; do
        #List of applications on the given node
        node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
        jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
        node_openstack_app=" "
        for component in $all_formulas ; do
          if [[ " ${node_applications[*]} " == *"$component"* ]]; then
            node_openstack_app="$node_openstack_app $component"
          fi
        done
        echo "$node : $node_openstack_app"
      done
      
    5. Enable the Keystone v3 client configuration for creating the Keystone resources by editing classes/cluster/<cluster_name>/openstack/control/init.yml:

      classes:
      - system.keystone.client.v3
      
    6. Refresh pillars:

      salt '*' saltutil.refresh_pillar
      
    7. Apply the salt.minion state:

      salt '*' state.apply salt.minion
      
  8. Prepare the target nodes for the upgrade:

    1. Get the list of all upgradable OpenStack components:

      salt-call config.get orchestration:upgrade:applications --out=json
      
    2. Rank the components from the output by priority.

    3. Get the list of all target nodes:

      salt-key | grep $cluster_domain | \
      grep -v $salt_master_hostname | tr '\n' ' '
      

      The cluster_domain variable stands for the name of the domain used as part of the cluster FQDN. For details, see MCP Deployment guide: General deployment parameters: Basic deployment parameters.

      The salt_master_hostname variable stands for the hostname of the Salt Master node and is cfg01 by default. For details, see MCP Deployment guide: Infrastructure related parameters: Salt Master.

    4. For each target node, get the list of installed applications:

      salt <node_name> pillar.items __reclass__:applications --out=json
      
    5. Match the lists of upgradable OpenStack components with the lists of installed applications for each target node.

    6. Apply the following states to each target node for each installed application in strict order of priority:

      salt <node_name> state.apply <component_name>.upgrade.pre
      

    Note

    On medium and large clouds, you may want to automate this step. Use the following script as an example of possible automation.

    #!/bin/bash
    #List of formulas that implement the upgrade API, sorted by priority
    all_formulas=$(salt-call config.get orchestration:upgrade:applications --out=json | \
    jq '.[] | . as $in | keys_unsorted | map ({"key": ., "priority": $in[.].priority}) | sort_by(.priority) | map(.key | [(.)]) | add' | \
    sed -e 's/"//g' -e 's/,//g' -e 's/\[//g' -e 's/\]//g')
    #List of nodes in cloud
    list_nodes=`salt -C 'I@__reclass__:applications' test.ping --out=text | cut -d: -f1 | tr '\n' ' '`
    for node in $list_nodes; do
      #List of applications on the given node
      node_applications=$(salt $node pillar.items __reclass__:applications --out=json | \
      jq 'values |.[] | values |.[] | .[]' | tr -d '"' | tr '\n' ' ')
      for component in $all_formulas ; do
        if [[ " ${node_applications[*]} " == *"$component"* ]]; then
          salt $node state.apply $component.upgrade.pre
        fi
      done
    done
    
  9. Apply the linux.system.repo state on the target nodes:

      salt <node_name> state.apply linux.system.repo

  10. Proceed to Upgrade the OpenStack control plane.