Deploy a VMware vSphere-based regional cluster
You can deploy an additional regional VMware vSphere-based cluster to create managed clusters of several provider types or with different configurations.
To deploy a vSphere-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:
./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
--target-dir <pathToBootstrapDirectory>
For clusters deployed using the Container Cloud release earlier than 2.11.0 or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Verify access to the target vSphere cluster from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://vsphere.server.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
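If the target vCenter Server uses a self-signed certificate, the check above may fail at the certificate verification stage even though network connectivity works. A variant of the same check that skips certificate verification, using the same placeholder host as above:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl -k https://vsphere.server.com"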
Prepare deployment templates:
Contact your vSphere administrator to obtain the values for the following parameters:
IP address or FQDN of the vCenter Server.
Port of the vCenter Server. For example, port: "8443". Leave empty to use the default port.
vSphere data center name.
Flag that controls validation of the vSphere Server certificate. Must be
vSphere Cluster API provider user name that you added when preparing the deployment user setup and permissions.
vSphere Cluster API provider user password.
vSphere Cloud Provider deployment user name that you added when preparing the deployment user setup and permissions.
vSphere Cloud Provider deployment user password.
vSphere cluster network parameters
Modify the following required network parameters:
IP address from the provided vSphere network for Kubernetes API load balancer (Keepalived VIP).
Name of the vSphere datastore. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider.
Path to a folder where the cluster machines metadata will be stored.
Path to a network for cluster machines.
Path to a resource pool in which VMs will be created.
To obtain the LB_HOST parameter for the selected vSphere network, contact your vSphere administrator who provides you with the IP ranges dedicated to your environment.
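Optionally, before committing to a VIP, you can check that the selected address does not already answer on the network. The address below is only an illustration consistent with the example ranges in this section; a reply means the address is taken, while no reply does not fully guarantee that it is free, for example if ICMP is filtered:
ping -c 3 10.20.0.10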
Modify other parameters if required. For example, add the corresponding values for
For either DHCP or non-DHCP vSphere network:
Determine the vSphere network parameters as described in VMware vSphere network objects and IPAM recommendations.
Provide the following additional parameters for a proper network setup on machines using embedded IP address management (IPAM) in
To obtain IPAM parameters for the selected vSphere network, contact your vSphere administrator who provides you with IP ranges dedicated to your environment only.
Enables IPAM. The recommended value is true for either DHCP or non-DHCP networks.
CIDR of the provided vSphere network. For example,
Gateway of the provided vSphere network.
IP range for the cluster machines. Specify the range within the provided CIDR. For example, 10.20.0.100-10.20.0.200. If the DHCP network is used, this range must not intersect with the DHCP range of the network.
Optional. IP ranges to be excluded from being assigned to the cluster machines. The MetalLB range and SET_LB_HOST should not intersect with the addresses for IPAM. For example,
List of nameservers for the provided vSphere network.
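For orientation only, the following sketch shows one consistent set of example values for these parameters, expressed as shell variables. The variable names are illustrative assumptions, not the actual keys or placeholders of the cluster template; use them only to check that your own values are consistent with each other:
# Example values only; obtain the real ranges from your vSphere administrator.
NETWORK_CIDR="10.20.0.0/16"               # CIDR of the provided vSphere network
NETWORK_GATEWAY="10.20.0.1"               # gateway of the provided vSphere network
MACHINES_RANGE="10.20.0.100-10.20.0.200"  # machine IP range, inside the CIDR, outside any DHCP range
EXCLUDE_RANGES="10.20.0.150-10.20.0.160"  # optional exclusions; keep the LB host and MetalLB range out of IPAM
NAMESERVERS="<nameserver1>,<nameserver2>" # nameservers reachable from the vSphere network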
Configure MetalLB parameters:
Open the required configuration file for editing: templates/vsphere/metallbconfig.yaml.template. For a detailed MetalLBConfig object description, see API Reference: MetalLBConfig resource.
Define SET_VSPHERE_METALLB_RANGE, the MetalLB range of IP addresses to assign to load balancers for Kubernetes Services.
To obtain the VSPHERE_METALLB_RANGE parameter for the selected vSphere network, contact your vSphere administrator who provides you with the IP ranges dedicated to your environment.
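For illustration, the placeholder can be replaced directly in the template with sed. The range below is an example only and must not overlap with the IPAM ranges or the load balancer VIP chosen earlier:
sed -i 's/SET_VSPHERE_METALLB_RANGE/10.20.0.201-10.20.0.210/g' templates/vsphere/metallbconfig.yaml.template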
For RHEL deployments, fill out the RHEL license configuration in templates/vsphere/rhellicenses.yaml.template.
Use one of the following sets of parameters for the RHEL machines subscription:
The user name and password of your RedHat Customer Portal account associated with your RHEL license for Virtual Datacenters.
Optionally, provide the subscription allocation pools to use for the RHEL subscription activation. If not needed, remove the poolIDs field for subscription-manager to automatically select the licenses for machines.
spec:
  username: <username>
  password:
    value: <password>
  poolIDs:
  - <pool1>
  - <pool2>
The activation key and organization ID associated with your RedHat account with RHEL license for Virtual Datacenters. The activation key can be created by the organization administrator on the RedHat Customer Portal.
If you use the RedHat Satellite server for management of your RHEL infrastructure, you can provide a pre-generated activation key from that server. In this case:
Provide the URL to the RedHat Satellite RPM for installation of the CA certificate that belongs to that server.
Configure squid-proxy on the management or regional cluster to allow access to your Satellite server. For details, see Configure squid-proxy.
spec:
  activationKey:
    value: <activation key>
  orgID: "<organization ID>"
  rpmUrl: <rpm url>
For RHEL 8.7 TechPreview, verify mirrors configuration for your activation key. For more details, see RHEL 8 mirrors configuration.
Provide only one set of parameters. Mixing the parameters from different activation methods will cause deployment failure.
For CentOS deployments, in templates/vsphere/rhellicenses.yaml.template, remove all lines under
Configure the NTP server.
Before Container Cloud 2.23.0, optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.
Since Container Cloud 2.23.0, optionally disable NTP, which is enabled by default. This option disables the management of chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.
Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/vsphere/cluster.yaml.template, add the ntp:servers section with the list of required server names:
spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        ntpEnabled: true
        regional:
        - helmReleases:
          - name: <providerName>-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
                    ...
          provider: <providerName>
          ...
To disable NTP:
spec:
  ...
  providerSpec:
    value:
      ...
      ntpEnabled: false
      ...
Prepare the VM template as described in Prepare the virtual machine template.
In templates/vsphere/machines.yaml.template, define the following parameters:
RHEL license name defined in rhellicenses.yaml.template; defaults to kaas-mgmt-rhel-license. Remove or comment out this parameter for CentOS and Ubuntu deployments.
Disk size in GiB for machines that must match the disk size of the VM template. You can leave this parameter commented to use the disk size of the VM template. The minimum requirement is 120 GiB.
Path to the VM template prepared in the previous step.
spec:
  providerSpec:
    value:
      apiVersion: vsphere.cluster.k8s.io/v1alpha1
      kind: VsphereMachineProviderSpec
      rhelLicense: <rhelLicenseName>
      numCPUs: 8
      memoryMiB: 32768
      # diskGiB: 120
      template: <vSphereVMTemplatePath>
Also, modify other parameters if required.
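After you finish editing the vSphere templates, you can quickly check for placeholders that were left unreplaced. This assumes the placeholders follow the SET_ naming pattern used elsewhere in this guide; no output means nothing was missed:
grep -rn "SET_" templates/vsphere/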
Available since Container Cloud 2.24.0 as Technology Preview. Optional. Enable custom host names for cluster machines. When enabled, any machine host name in a particular region matches the related Machine object name. For example, instead of the default kaas-node-<UID>, a machine host name will be master-0. The custom naming format is more convenient and easier to operate with.
To enable the feature on the management or regional cluster and its future managed clusters, add the following environment variable:
Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:firstname.lastname@example.org:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
The following formats of variables are accepted:
http://proxy.example.com:port - for anonymous access.
http://user:email@example.com:port - for restricted access.
Comma-separated list of IP addresses or domain names. Mandatory to add host[:port] of the vCenter Server; see the example after these notes.
Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.
If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.
For MOSK-based deployments, the parameter is generally available since MOSK 22.4.
For implementation details, see Proxy and cache support.
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a VMware vSphere-based cluster.
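For example, extending the variables above so that requests to the vCenter Server bypass the proxy, and then verifying from the bootstrap node that a Mirantis endpoint is reachable through the proxy. The host names and the proxy address are the example values used earlier in this guide; replace them with your own:
export NO_PROXY=172.18.10.0,registry.internal.lan,vsphere.server.com
curl -x http://proxy.example.com:3128 -sSI https://binary.mirantis.com/releases/get_container_cloud.sh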
Export the following parameters:
export KAAS_VSPHERE_ENABLED=true
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
The REGIONAL_CLUSTER_NAME parameter value must contain only lowercase alphanumeric characters, hyphens, or periods.
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.
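For example, where the key name is an illustrative placeholder rather than a value defined by this guide:
export SSH_KEY_NAME=<newRegionalClusterKeyName>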
Run the regional cluster bootstrap script:
When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.
Prepare the bootstrap cluster for the new regional cluster.
Load the updated Container Cloud CRDs for Machines with information about the new regional cluster to the management cluster.
Connect to each machine of the management cluster through SSH.
Wait for the Cluster objects of the new regional cluster to be ready on the management cluster.
Load the following objects to the new regional cluster:
Secret with the management cluster
ClusterRole for the Container Cloud provider.
Forward the bootstrap cluster endpoint to
Wait for all CRDs to be available and verify the objects created using these CRDs.
Pivot the cluster API stack to the regional cluster.
Switch the LCM Agent from the bootstrap cluster to the regional one.
Wait for the Container Cloud components to start on the regional cluster.
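To follow the progress, assuming kubectl is installed on the bootstrap node, point it at the kubeconfig-<regionalClusterName> file obtained earlier and list the pods; the namespace filter is omitted because component namespaces may differ between releases:
export KUBECONFIG=kubeconfig-<regionalClusterName>
kubectl get pods --all-namespaces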
Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run on any master node:
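A minimal option, assuming the standard Docker CLI on the node, is docker info, which prints both the Swarm default address pool and the MCR default address pools:
docker info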
Example of system response:
Server:
 ...
 Swarm:
  ...
  Default Address Pool: 10.0.0.0/16
  SubnetSize: 24
  ...
 Default Address Pools:
   Base: 10.99.0.0/16, Size: 20
 ...
Usually, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks:
To verify the actual networks state and addresses in use, run:
docker network ls
docker network inspect <networkName>
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.