This section describes how to deploy an Edge Cloud minimum viable product (MVP) based on the Kubernetes with Calico architecture, together with Virtlet and the CNI Genie plugin, which enables support for the Flannel CNI plugin.
For demonstration purposes, you can also download a virtual appliance of MCP Edge. For details, see: MCP Edge.
Warning
Edge Cloud MVP is available as a technical preview. Use such configurations for testing and evaluation purposes only.
To deploy Edge Cloud:
Provision three KVM nodes and three compute nodes based on Ubuntu Xenial.
Caution
During provisioning, disable swap on the target nodes, since this feature is not supported for Edge Cloud MVP.
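For example, on Ubuntu Xenial you can disable swap as follows. This is a minimal sketch; the sed pattern assumes a standard swap entry in /etc/fstab:
# Turn off all active swap devices
swapoff -a
# Comment out swap entries in /etc/fstab so swap stays off after a reboot
sed -i '/\sswap\s/ s/^/#/' /etc/fstab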
Create bridges on the first KVM node as described in step 3 of the Prerequisites for MCP DriveTrain deployment procedure:
Set an IP address for br-mgm.
Enable DHCP on the first interface of the br-mgm network.
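For example, on Ubuntu Xenial the bridge can be defined in /etc/network/interfaces with the bridge-utils package installed. This is a minimal sketch; the physical interface name eno1 and the address are assumptions to adapt to your environment:
# Management bridge with a static IP address
auto br-mgm
iface br-mgm inet static
    address 10.11.0.241
    netmask 255.255.255.0
    bridge_ports eno1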
Create a deployment metadata model:
Navigate to the Model Designer web UI and click Create Model.
In the Version drop-down menu, select 2018.11.0 and click Continue.
In the General parameters section, set the parameters as required and change the below ones as follows:
In Public host, specify ${_param:kubernetes_proxy_address}.
In Deployment type, select Physical.
In OpenSSH groups, specify lab,k8s_team.
In Platform, select Kubernetes.
Disable OpenContrail, StackLight, Ceph, CICD, and OSS.
Enable Use default network scheme.
Enable Kubernetes Control on KVM.
Specify the deploy and control subnets.
In the Infrastructure parameters section:
Disable MAAS.
In Kubernetes Networking, select the following plugins:
Kubernetes network calico enabled
Kubernetes network flannel enabled
Kubernetes network genie enabled
Kubernetes metallb enabled
Set other parameters as required.
In the Product parameters section:
Specify the KVM hostnames and IP addresses. The KVM hosts must have the hostnames kvm01, kvm02, and kvm03 due to a limitation in the Jenkins pipeline jobs.
Set the subnets for Calico and Flannel.
In Metallb addresses, specify the MetalLB public address pool.
Select Kubernetes virtlet enabled.
Select Kubernetes containerd enabled.
In Kubernetes compute count, specify 3.
In Kubernetes keepalived vip interface, specify ens3.
In Kubernetes network scheme for master nodes, select Virtual - deploy interface + single control interface.
In Kubernetes network scheme for compute nodes, select the scheme as required.
Specify the names of the Kubernetes network interfaces and addresses.
Generate the model and obtain the ISO configuration drive either from the email that you receive once the deployment metadata model is generated or from the Jenkins pipeline job artifacts.
Log in to the KVM node where the Salt Master node is deployed.
Download the ISO configuration drive obtained after completing step 5 of this procedure.
Create and configure the Salt Master VM. For details, see: Deploy the Salt Master node.
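As an illustration only, such a VM can be created with virt-install. The name, resources, and paths below are assumptions; the linked procedure remains the authoritative reference:
# Boot the Salt Master VM from a prebuilt disk with the config drive attached
virt-install \
  --name cfg01 \
  --ram 8192 --vcpus 4 \
  --disk path=/var/lib/libvirt/images/cfg01.qcow2,format=qcow2 \
  --disk path=/var/lib/libvirt/images/cfg01-config.iso,device=cdrom \
  --network bridge=br-mgm \
  --os-variant ubuntu16.04 \
  --import --noautoconsole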
Once the Salt Master node is up and running, set the salt-minion configuration on each kvm and cmp node, as shown in the sketch after the warning below.
Warning
Due to a limitation in the Jenkins deployment pipeline job,
the kvm
nodes must have the minion IDs kvm01.domain
,
kvm02.domain
, kvm03.domain
with a proper domain.
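A minimal sketch of /etc/salt/minion.d/minion.conf for the first KVM node follows; the Salt Master address and the example.local domain are assumptions, substitute your own values:
# Point the minion at the Salt Master and set the required minion ID
master: 10.11.0.15
id: kvm01.example.local
Then restart the minion to apply the configuration:
systemctl restart salt-minion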
Verify that all nodes are connected to the Salt Master node using the salt-key command.
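For example, run the following from the Salt Master node; the exact minion IDs listed depend on your domain:
# List all keys; every kvm and cmp minion ID must appear under Accepted Keys
salt-key -L
# Accept any keys that are still pending
salt-key -A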
Verify that all nodes are up and running:
salt '*' test.ping
In a web browser, open http://<ip address>:8081 to access the Jenkins web UI.
Note
The IP address is defined in the classes/cluster/<cluster_name>/cicd/init.yml file of the Reclass model under the cicd_control_address parameter.
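You can also read the value directly from the pillar data on the Salt Master node; a sketch, where the cid01* target follows the cid* naming used elsewhere in this procedure:
salt 'cid01*' pillar.get _param:cicd_control_address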
Log in to the Jenkins web UI as an admin.
Note
To obtain the password for the admin user, run the salt "cid*" pillar.data _param:jenkins_admin_password command from the Salt Master node.
In the Deploy - OpenStack Jenkins pipeline job, set the STACK_INSTALL parameter to core,kvm,k8s.
Click Build.