Deploy a VMware vSphere-based regional cluster¶
You can deploy an additional regional VMware vSphere-based cluster to create managed clusters of several provider types or with different configurations.
To deploy a vSphere-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:

  ./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
    --target-dir <pathToBootstrapDirectory>

For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

  wget https://binary.mirantis.com/releases/get_container_cloud.sh
  chmod 0755 get_container_cloud.sh
  ./get_container_cloud.sh
Verify access to the target vSphere cluster from Docker. For example:
  docker run --rm alpine sh -c "apk add --no-cache curl; \
    curl https://vsphere.server.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Prepare deployment templates:
Modify templates/vsphere/vsphere-config.yaml.template:

Note

Contact your vSphere administrator to obtain the parameters below.
vSphere configuration data¶

SET_VSPHERE_SERVER
  IP address or FQDN of the vCenter Server.
SET_VSPHERE_SERVER_PORT
  Port of the vCenter Server. For example, port: "8443". Leave empty to use "443" by default.
SET_VSPHERE_DATACENTER
  vSphere data center name.
SET_VSPHERE_SERVER_INSECURE
  Flag that controls validation of the vSphere Server certificate. Must be true or false.
SET_VSPHERE_CAPI_PROVIDER_USERNAME
  vSphere Cluster API provider user name that you added when preparing the deployment user setup and permissions.
SET_VSPHERE_CAPI_PROVIDER_PASSWORD
  vSphere Cluster API provider user password.
SET_VSPHERE_CLOUD_PROVIDER_USERNAME
  vSphere Cloud Provider deployment user name that you added when preparing the deployment user setup and permissions.
SET_VSPHERE_CLOUD_PROVIDER_PASSWORD
  vSphere Cloud Provider deployment user password.
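For illustration, the parameters above might be filled in as follows. This is a sketch with placeholder values only; the actual layout of vsphere-config.yaml.template may differ, and the real values must come from your vSphere administrator:

```yaml
# Illustrative values only - replace every entry with your own.
# Keys mirror the SET_* placeholders in vsphere-config.yaml.template.
SET_VSPHERE_SERVER: vcenter.example.com
SET_VSPHERE_SERVER_PORT: ""            # empty: the default "443" is used
SET_VSPHERE_DATACENTER: dc1
SET_VSPHERE_SERVER_INSECURE: "false"   # keep certificate validation enabled
SET_VSPHERE_CAPI_PROVIDER_USERNAME: capi-user@vsphere.local
SET_VSPHERE_CAPI_PROVIDER_PASSWORD: <password>
SET_VSPHERE_CLOUD_PROVIDER_USERNAME: cloud-user@vsphere.local
SET_VSPHERE_CLOUD_PROVIDER_PASSWORD: <password>
```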
Modify the templates/vsphere/cluster.yaml.template parameters:

Modify the following required network parameters:

Required parameters¶

SET_LB_HOST
  IP address from the provided vSphere network for the load balancer (Keepalived).
SET_VSPHERE_METALLB_RANGE
  MetalLB range of IP addresses that can be assigned to load balancers for Kubernetes Services.
SET_VSPHERE_DATASTORE
  Name of the vSphere datastore. You can use different datastores for the vSphere Cluster API and the vSphere Cloud Provider.
SET_VSPHERE_MACHINES_FOLDER
  Path to a folder where the cluster machines metadata will be stored.
SET_VSPHERE_NETWORK_PATH
  Path to a network for cluster machines.
SET_VSPHERE_RESOURCE_POOL_PATH
  Path to a resource pool in which VMs will be created.
Note

To obtain the LB_HOST and VSPHERE_METALLB_RANGE parameters for the selected vSphere network, contact your vSphere administrator, who can provide you with IP ranges dedicated to your environment only.

Modify other parameters if required. For example, add the corresponding values for cidrBlocks in the spec::clusterNetwork::services section.

For either a DHCP or non-DHCP vSphere network:
Determine the vSphere network parameters as described in VMware vSphere network objects and IPAM recommendations.
Provide the following additional parameters for a proper network setup on machines using embedded IP address management (IPAM) in templates/vsphere/cluster.yaml.template:

Note

To obtain IPAM parameters for the selected vSphere network, contact your vSphere administrator, who can provide you with IP ranges dedicated to your environment only.
vSphere configuration data¶

ipamEnabled
  Enables IPAM. The recommended value is true for both DHCP and non-DHCP networks.
SET_VSPHERE_NETWORK_CIDR
  CIDR of the provided vSphere network. For example, 10.20.0.0/16.
SET_VSPHERE_NETWORK_GATEWAY
  Gateway of the provided vSphere network.
SET_VSPHERE_CIDR_INCLUDE_RANGES
  IP range for the cluster machines. Specify a range within the provided CIDR. For example, 10.20.0.100-10.20.0.200. If a DHCP network is used, this range must not intersect with the DHCP range of the network.
SET_VSPHERE_CIDR_EXCLUDE_RANGES
  Optional. IP ranges to be excluded from assignment to the cluster machines. The MetalLB range and SET_LB_HOST must not intersect with the addresses for IPAM. For example, 10.20.0.150-10.20.0.170.
SET_VSPHERE_NETWORK_NAMESERVERS
  List of nameservers for the provided vSphere network.
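A consistent set of IPAM values for a hypothetical 10.20.0.0/16 network might look as follows. All addresses are illustrative; the point is that the exclude range and the MetalLB addresses are carved out of the include range so that IPAM never hands them to machines:

```yaml
# Illustrative IPAM values - confirm real ranges with your vSphere administrator.
ipamEnabled: true
SET_VSPHERE_NETWORK_CIDR: 10.20.0.0/16
SET_VSPHERE_NETWORK_GATEWAY: 10.20.0.1
# Machines receive addresses from this range (outside any DHCP range):
SET_VSPHERE_CIDR_INCLUDE_RANGES: 10.20.0.100-10.20.0.200
# Reserved for MetalLB and the load balancer, so excluded from machine IPAM:
SET_VSPHERE_CIDR_EXCLUDE_RANGES: 10.20.0.150-10.20.0.170
SET_VSPHERE_NETWORK_NAMESERVERS: 172.18.176.6
```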
For RHEL deployments, fill out templates/vsphere/rhellicenses.yaml.template using one of the following sets of parameters for the RHEL machines subscription:

The user name and password of your RedHat Customer Portal account associated with your RHEL license for Virtual Datacenters.

Optionally, provide the subscription allocation pools to use for the RHEL subscription activation. If not needed, remove the poolIDs field for subscription-manager to automatically select the licenses for machines.

For example:

  spec:
    username: <username>
    password:
      value: <password>
    poolIDs:
      - <pool1>
      - <pool2>
The activation key and organization ID associated with your RedHat account with RHEL license for Virtual Datacenters. The activation key can be created by the organization administrator on the RedHat Customer Portal.
If you use the RedHat Satellite server for management of your RHEL infrastructure, you can provide a pre-generated activation key from that server. In this case:
Provide the URL to the RedHat Satellite RPM for installation of the CA certificate that belongs to that server.
Configure squid-proxy on the management or regional cluster to allow access to your Satellite server. For details, see Configure squid-proxy.

For example:

  spec:
    activationKey:
      value: <activation key>
    orgID: "<organization ID>"
    rpmUrl: <rpm url>
Caution
For RHEL 8.4 TechPreview, verify mirrors configuration for your activation key. For more details, see RHEL 8 mirrors configuration.
Warning
Provide only one set of parameters. Mixing the parameters from different activation methods will cause deployment failure.
For CentOS deployments, in templates/vsphere/rhellicenses.yaml.template, remove all lines under items:.
Configure the NTP server.

Before Container Cloud 2.23.0, this step is optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, configure the regional NTP server parameters as described below.

Since Container Cloud 2.23.0, you can optionally disable NTP, which is enabled by default. This option disables the management of chrony configuration by Container Cloud so that you can use your own system for chrony management. Otherwise, configure the regional NTP server parameters as described below.

NTP configuration

Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

In templates/vsphere/cluster.yaml.template, add the ntp:servers section with the list of required server names:

  spec:
    ...
    providerSpec:
      value:
        kaas:
          ...
          ntpEnabled: true
          regional:
          - helmReleases:
            - name: <providerName>-provider
              values:
                config:
                  lcm:
                    ...
                    ntp:
                      servers:
                      - 0.pool.ntp.org
                    ...
            provider: <providerName>
          ...
To disable NTP:

  spec:
    ...
    providerSpec:
      value:
        ...
        ntpEnabled: false
        ...
Prepare the VM template as described in Prepare the virtual machine template.
In templates/vsphere/machines.yaml.template, define the following parameters:

rhelLicense
  RHEL license name defined in rhellicenses.yaml.template, defaults to kaas-mgmt-rhel-license. Remove or comment out this parameter for CentOS and Ubuntu deployments.
diskGiB
  Disk size in GiB for machines that must match the disk size of the VM template. You can leave this parameter commented out to use the disk size of the VM template. The minimum requirement is 120 GiB.
template
  Path to the VM template prepared in the previous step.

Sample template:

  spec:
    providerSpec:
      value:
        apiVersion: vsphere.cluster.k8s.io/v1alpha1
        kind: VsphereMachineProviderSpec
        rhelLicense: <rhelLicenseName>
        numCPUs: 8
        memoryMiB: 24576
        # diskGiB: 120
        template: <vSphereVMTemplatePath>
Also, modify other parameters if required.
Optional. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the regional cluster using the proxy:

HTTP_PROXY
HTTPS_PROXY
NO_PROXY
PROXY_CA_CERTIFICATE_PATH

Example snippet:

  export HTTP_PROXY=http://proxy.example.com:3128
  export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
  export NO_PROXY=172.18.10.0,registry.internal.lan
  export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
The following formats of variables are accepted:

Proxy configuration data¶

HTTP_PROXY, HTTPS_PROXY
  http://proxy.example.com:port - for anonymous access.
  http://user:password@proxy.example.com:port - for restricted access.
NO_PROXY
  Comma-separated list of IP addresses or domain names. It is mandatory to add the host[:port] of the vCenter server.
PROXY_CA_CERTIFICATE_PATH
  Optional. Absolute path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.
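Before bootstrapping with a proxy, a quick shell check of the variable formats can catch typos early. This is a sketch with illustrative values; the vCenter host name and port are assumptions to be replaced with your own:

```shell
# Sanity-check proxy variables (illustrative values - use your own).
export HTTP_PROXY=http://proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan,vcenter.example.com:443

# A proxy URL needs a scheme, optional credentials, a host, and a port.
echo "$HTTP_PROXY" | grep -Eq '^https?://([^@]+@)?[^:/]+:[0-9]+$' \
  && echo "HTTP_PROXY format OK"

# NO_PROXY must list the vCenter server host[:port].
case ",$NO_PROXY," in
  *,vcenter.example.com:443,*) echo "vCenter host present in NO_PROXY" ;;
  *) echo "Add the vCenter host to NO_PROXY" ;;
esac
```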
Warning
If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.
Note
This parameter is generally available for the OpenStack, bare metal, Equinix Metal with private networking, AWS, and vSphere providers.
For MOSK-based deployments, the parameter is generally available since MOSK 22.4.
For Azure and Equinix Metal with public networking, the feature is not supported.
For implementation details, see Proxy and cache support.
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for a VMware vSphere-based cluster.
Export the following parameters:

  export KAAS_VSPHERE_ENABLED=true
  export KUBECONFIG=<pathToMgmtClusterKubeconfig>
  export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
  export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Caution

The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

Note

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

  export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note

When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster.

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.

The workflow of the regional cluster bootstrap script¶

1. Prepare the bootstrap cluster for the new regional cluster.
2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.
3. Connect to each machine of the management cluster through SSH.
4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.
5. Load the following objects to the new regional cluster: the Secret with the management cluster kubeconfig and the ClusterRole for the Container Cloud provider.
6. Forward the bootstrap cluster endpoint to helm-controller.
7. Wait for all CRDs to be available and verify the objects created using these CRDs.
8. Pivot the Cluster API stack to the regional cluster.
9. Switch the LCM Agent from the bootstrap cluster to the regional cluster.
10. Wait for the Container Cloud components to start on the regional cluster.
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.
See also