Deploy the APT node

MCP enables you to deploy the whole MCP cluster without access to the Internet. On creating the metadata model, along with the configuration drive for the cfg01 VM, you will obtain a preconfigured QCOW2 image that will contain packages, Docker images, operating system images, Git repositories, and other software required specifically for the offline deployment.

This section describes how to deploy the apt01 VM using the prebuilt configuration drive.

Warning

Perform the procedure below only for an offline deployment or when using a local mirror from the prebuilt image.

To deploy the APT node:

  1. Verify that you completed steps described in Prerequisites for MCP DriveTrain deployment.

  2. Log in to the Foundation node.

    Note

    Root privileges are required for the following steps. Execute the commands as the root user or use sudo.

  3. Download the latest version of the prebuilt image for the apt node, http://images.mirantis.com/mcp-offline-image-<BUILD-ID>.qcow2, from http://images.mirantis.com.

  4. In the /var/lib/libvirt/images/ directory, create an apt01/ subdirectory where the offline mirror image will be stored:

    Note

    You can create and use a different subdirectory in /var/lib/libvirt/images/. If that is the case, verify that you specify the correct directory for the VM_*DISK variables described in the next steps.

    mkdir -p /var/lib/libvirt/images/apt01/
    
  5. Save the image on the Foundation node as /var/lib/libvirt/images/apt01/system.qcow2.
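The download and placement steps above can be sketched as a single command that fetches the image directly into the expected location. This is only a sketch; `<BUILD-ID>` is a placeholder that must be replaced with the actual build identifier from http://images.mirantis.com:

```shell
# Sketch only: download the offline image directly into place.
# <BUILD-ID> is a placeholder for the actual build identifier.
IMAGE_DIR="/var/lib/libvirt/images/apt01"
mkdir -p "${IMAGE_DIR}"
wget -O "${IMAGE_DIR}/system.qcow2" \
  "http://images.mirantis.com/mcp-offline-image-<BUILD-ID>.qcow2"
```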

  6. Copy the configuration ISO drive for the APT VM provided with the metadata model for the offline image to, for example, /var/lib/libvirt/images/apt01/.

    Caution

    By default, the prebuilt image does not allow logging in.

    Note

    If you are using an already existing model that does not have configuration drives, or you want to generate updated configuration drives, for example, with an unlocked root login for debugging purposes, proceed with Generate configuration drives manually.

    cp /path/to/prepared-drive/apt01-config.iso /var/lib/libvirt/images/apt01/apt01-config.iso
    
  7. Deploy the APT node:

    1. Download the shell script from GitHub:

      export MCP_VERSION="master"
      wget https://raw.githubusercontent.com/Mirantis/mcp-common-scripts/${MCP_VERSION}/predefine-vm/define-vm.sh
      
    2. Make the script executable and export the required variables:

      chmod +x define-vm.sh
      export VM_NAME="apt01.<CLUSTER_DOMAIN>"
      export VM_SOURCE_DISK="/var/lib/libvirt/images/apt01/system.qcow2"
      export VM_CONFIG_DISK="/var/lib/libvirt/images/apt01/apt01-config.iso"
      

      The CLUSTER_DOMAIN value is the cluster domain name used for the model. See Basic deployment parameters for details.

      Note

      You can also set optional variables that have default values and change them depending on your deployment configuration. These variables include:

      • VM_MGM_BRIDGE_NAME="br-mgm"
      • VM_CTL_BRIDGE_NAME="br-ctl"
      • VM_MEM_KB="12589056"
      • VM_CPUS="4"

      The recommended VM_MEM_KB value for the Salt Master node is 12589056 (or more depending on your cluster size), which is 12 GB of RAM. For large clusters, you should also increase VM_CPUS.

      The recommended VM_MEM_KB value for the local mirror node is 8388608 (or more), which is 8 GB of RAM.

      The br-mgm and br-ctl values are the names of the Linux bridges. See Prerequisites for MCP DriveTrain deployment for details. Custom names can be passed to a VM definition using the VM_MGM_BRIDGE_NAME and VM_CTL_BRIDGE_NAME variables, respectively.
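Taken together, exporting the optional variables for the local mirror node might look as follows. The bridge and CPU values are the documented defaults; the memory value is the 8 GB recommendation for the mirror node:

```shell
# Optional overrides for define-vm.sh; defaults shown, with VM_MEM_KB
# set to the 8 GB recommended for the local mirror node.
export VM_MGM_BRIDGE_NAME="br-mgm"
export VM_CTL_BRIDGE_NAME="br-ctl"
export VM_MEM_KB="8388608"   # 8388608 KB = 8 * 1024 * 1024 KB = 8 GiB
export VM_CPUS="4"
```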

    3. Run the shell script:

      ./define-vm.sh
      
  8. Start the apt01 VM:

    virsh start apt01.<CLUSTER_DOMAIN>
    
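To confirm that the VM started successfully, you can query libvirt. This is a sketch; the domain name is assumed to match the VM_NAME value exported in the previous step:

```shell
# Should report the apt01 domain state as "running".
virsh domstate "apt01.<CLUSTER_DOMAIN>"
virsh list --all | grep "apt01"
```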
  9. For MCP versions prior to the 2019.2.14 maintenance update, perform the following additional steps:

    1. SSH to the apt01 node.

    2. Verify the certificate:

      openssl x509 -checkend 1 -in /var/lib/docker/swarm/certificates/swarm-node.crt
      

      If the certificate has expired, restart Docker Swarm to regenerate it:

      systemctl stop docker || true
      rm -rf /var/lib/docker/swarm/*
      systemctl restart docker
      sleep 5
      docker ps
      docker swarm init --advertise-addr 127.0.0.1
      sleep 5
      cd /etc/docker/compose/docker/
      docker stack deploy --compose-file docker-compose.yml docker
      sleep 5
      cd /etc/docker/compose/aptly/
      docker stack deploy --compose-file docker-compose.yml aptly
      sleep 5
      docker ps
      

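The certificate check above relies on the exit status of openssl x509 -checkend: it returns 0 while the certificate remains valid for the given number of seconds, and non-zero otherwise. The check and conditional restart can therefore be scripted, for example:

```shell
#!/bin/sh
# Sketch: run the restart sequence only when the Swarm certificate expired.
CRT="/var/lib/docker/swarm/certificates/swarm-node.crt"
if openssl x509 -checkend 1 -in "${CRT}" >/dev/null 2>&1; then
  echo "Swarm certificate is still valid; nothing to do"
else
  echo "Swarm certificate expired; regenerating Docker Swarm state"
  # ... run the restart sequence from the step above ...
fi
```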
After completing the steps above, you obtain the apt01 node that contains only the prebuilt content. You can now proceed with Deploy the Salt Master node. Once you deploy the Salt Master node, you will be able to customize the content of the local mirror, as described in Customize the prebuilt mirror node.