Deploy Edge Cloud MVP

This section describes how to deploy an Edge Cloud minimum viable product (MVP) based on the Kubernetes with Calico architecture, together with Virtlet and the CNI Genie plugin that enables support for the Flannel CNI plugin.

For demonstration purposes, you can also download a virtual appliance of MCP Edge. For details, see: MCP Edge.

Warning

Edge Cloud MVP is available as a technical preview. Use this configuration for testing and evaluation purposes only.

To deploy Edge Cloud:

  1. Provision three KVM nodes and three compute nodes based on Ubuntu Xenial.

    Caution

    During provisioning, disable swap on the target nodes, since this feature is not supported for Edge Cloud MVP.
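
    Disabling swap can be sketched as follows. This is a minimal example assuming standard Ubuntu Xenial paths; run it as root on each target node:

    ```shell
    # Turn off all active swap devices immediately.
    swapoff -a
    # Comment out swap entries in /etc/fstab so swap stays disabled
    # after reboot (a backup is written to /etc/fstab.bak).
    sed -i.bak -E 's/^([^#].*[[:space:]]swap[[:space:]])/#\1/' /etc/fstab
    ```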

  2. Create bridges on the first KVM node as described in step 3 of the Prerequisites for MCP DriveTrain deployment procedure.

  3. Set an IP for br-mgm.

  4. Enable DHCP on the first interface of the br-mgm network.
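
    Steps 3-5 can be sketched as an ifupdown configuration fragment. This is a hypothetical example assuming the classic /etc/network/interfaces scheme on Ubuntu Xenial; the bridge name br-mgm comes from step 2, while the address and interface names are placeholders to replace with your own. One reading of step 4 is that the first interface obtains its address over DHCP:

    ```
    # /etc/network/interfaces.d/br-mgm.cfg (illustrative fragment)
    auto ens3
    iface ens3 inet dhcp             # first interface, addressed via DHCP

    auto br-mgm
    iface br-mgm inet static
        address 10.11.0.241          # placeholder management IP
        netmask 255.255.255.0
        bridge_ports ens4            # placeholder bridged physical port
    ```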

  5. Create a deployment metadata model:

    1. Navigate to the Model Designer web UI and click Create Model.

    2. In the Version drop-down menu, select 2018.11.0 and click Continue.

    3. In the General parameters section, set the parameters as required and change the below ones as follows:

      1. In Public host, specify ${_param:kubernetes_proxy_address}.

      2. In Deployment type, select Physical.

      3. In OpenSSH groups, specify lab,k8s_team.

      4. In Platform, select Kubernetes.

      5. Disable OpenContrail, StackLight, Ceph, CICD, and OSS.

      6. Enable Use default network scheme.

      7. Enable Kubernetes Control on KVM.

      8. Specify the deploy and control subnets.

    4. In the Infrastructure parameters section:

      1. Disable MAAS.

      2. In Kubernetes Networking, select the following plugins:

        • Kubernetes network calico enabled

        • Kubernetes network flannel enabled

        • Kubernetes network genie enabled

        • Kubernetes metallb enabled

      3. Set other parameters as required.

    5. In the Product parameters section:

      1. Specify the KVM hostnames and IP addresses. The KVM hosts must have the hostnames kvm01, kvm02, kvm03 due to a limitation in the Jenkins pipeline jobs.

      2. Set the subnets for Calico and Flannel.

      3. In Metallb addresses, specify the MetalLB public address pool.

      4. Select Kubernetes virtlet enabled.

      5. Select Kubernetes containerd enabled.

      6. In Kubernetes compute count, specify 3.

      7. In Kubernetes keepalived vip interface, specify ens3.

      8. In Kubernetes network scheme for master nodes, select Virtual - deploy interface + single control interface.

      9. In Kubernetes network scheme for compute nodes, select the scheme as required.

      10. Specify the names of the Kubernetes network interfaces and addresses.

    6. Generate the model and obtain the ISO configuration drive either from the email received after the model generation or from the Jenkins pipeline job artifacts.

  6. Log in to the KVM node where the Salt Master node is deployed.

  7. Download the ISO configuration drive obtained in step 5 of this procedure.

  8. Create and configure the Salt Master VM. For details, see: Deploy the Salt Master node.

  9. Once the Salt Master node is up and running, configure salt-minion on each kvm and cmp node.

    Warning

    Due to a limitation in the Jenkins deployment pipeline job, the kvm nodes must have the minion IDs kvm01.domain, kvm02.domain, kvm03.domain with a proper domain.
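
    The minion ID requirement above can be met with a configuration fragment like the following. The master address and domain are hypothetical values; replace them with your own:

    ```
    # /etc/salt/minion.d/minion.conf on the first KVM node (illustrative)
    master: 10.11.0.241        # placeholder Salt Master address
    id: kvm01.example.local    # must follow the kvmNN.domain pattern
    ```

    Restart salt-minion after editing the configuration so the new ID takes effect.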

  10. Verify that all nodes are connected to the Salt Master node using the salt-key command.
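
    The key verification can be sketched as follows, assuming a standard Salt installation; run the commands on the Salt Master node:

    ```shell
    # List keys by state: accepted, denied, rejected, and unaccepted.
    salt-key -L
    # Accept any pending minion keys, if required.
    salt-key -A -y
    ```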

  11. Verify that all nodes are up and running:

    salt '*' test.ping
    
  12. In a web browser, open http://<ip address>:8081 to access the Jenkins web UI.

    Note

    The IP address is defined in the classes/cluster/<cluster_name>/cicd/init.yml file of the Reclass model under the cicd_control_address parameter.
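
    The relevant fragment of init.yml looks roughly like this; the address below is a placeholder:

    ```
    # classes/cluster/<cluster_name>/cicd/init.yml (illustrative fragment)
    parameters:
      _param:
        cicd_control_address: 10.11.0.90   # placeholder address
    ```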

  13. Log in to the Jenkins web UI as an admin.

    Note

    To obtain the password for the admin user, run the salt "cid*" pillar.data _param:jenkins_admin_password command from the Salt Master node.

  14. In the Deploy - OpenStack Jenkins pipeline job, set the STACK_INSTALL parameter to core,kvm,k8s.

  15. Click Build.