You can deploy an additional regional VMware vSphere-based cluster
to create managed clusters of several provider types or with different
configurations.
To deploy a vSphere-based regional cluster:
Log in to the node where you bootstrapped the management
cluster.
Verify that the bootstrap directory is updated.
Select from the following options:

- For clusters deployed using Container Cloud 2.11.0 or later, use the existing kaas-bootstrap folder that you verified in the previous step.
- For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:
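A minimal sketch of downloading and running the bootstrap script, assuming the get_container_cloud.sh release URL commonly referenced in Container Cloud documentation; verify the exact URL in the release notes for your version:

wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh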
Set the following parameters:

SET_LB_HOST
IP address from the provided vSphere network for the Kubernetes API load
balancer (Keepalived VIP).
SET_VSPHERE_DATASTORE
Name of the vSphere datastore. You can use different datastores
for vSphere Cluster API and vSphere Cloud Provider.
SET_VSPHERE_MACHINES_FOLDER
Path to a folder where the cluster machines metadata will be stored.
SET_VSPHERE_NETWORK_PATH
Path to a network for cluster machines.
SET_VSPHERE_RESOURCE_POOL_PATH
Path to a resource pool in which VMs will be created.
Note
To obtain the SET_LB_HOST parameter for the selected vSphere
network, contact your vSphere administrator, who provides you with the
IP ranges dedicated to your environment.
Modify other parameters if required. For example, add the corresponding
values for cidrBlocks in the spec::clusterNetwork::services
section.
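For example, a sketch of this override in templates/vsphere/cluster.yaml.template; the 10.96.0.0/16 value is an illustrative assumption, not a required default:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
      - 10.96.0.0/16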
Provide the following additional parameters for a proper network
setup on machines using embedded IP address management (IPAM)
in templates/vsphere/cluster.yaml.template:
Note
To obtain the IPAM parameters for the selected vSphere network, contact
your vSphere administrator, who provides you with the IP ranges dedicated to
your environment only.
Enables IPAM. The recommended value is true for both DHCP and
non-DHCP networks.
SET_VSPHERE_NETWORK_CIDR
CIDR of the provided vSphere network. For example, 10.20.0.0/16.
SET_VSPHERE_NETWORK_GATEWAY
Gateway of the provided vSphere network.
SET_VSPHERE_CIDR_INCLUDE_RANGES
IP range for the cluster machines. Specify a range within the
provided CIDR. For example, 10.20.0.100-10.20.0.200.
If a DHCP network is used, this range must not intersect with
the DHCP range of the network.
SET_VSPHERE_CIDR_EXCLUDE_RANGES
Optional. IP ranges to be excluded from assignment to
the cluster machines. The MetalLB range and SET_LB_HOST
must not intersect with the addresses used for IPAM. For example,
10.20.0.150-10.20.0.170.
SET_VSPHERE_NETWORK_NAMESERVERS
List of nameservers for the provided vSphere network.
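Putting the example values together, a sketch of these parameters exported as environment variables, assuming they are set in bootstrap.env; the gateway and nameserver values are illustrative assumptions:

export SET_VSPHERE_NETWORK_CIDR="10.20.0.0/16"
export SET_VSPHERE_NETWORK_GATEWAY="10.20.0.1"
export SET_VSPHERE_CIDR_INCLUDE_RANGES="10.20.0.100-10.20.0.200"
export SET_VSPHERE_CIDR_EXCLUDE_RANGES="10.20.0.150-10.20.0.170"
export SET_VSPHERE_NETWORK_NAMESERVERS="172.16.0.10"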
Add SET_VSPHERE_METALLB_RANGE, the MetalLB range of IP
addresses to assign to load balancers for Kubernetes Services.
Note
To obtain the SET_VSPHERE_METALLB_RANGE parameter for the
selected vSphere network, contact your vSphere administrator, who
provides you with the IP ranges dedicated to your environment.
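For example, a sketch that reuses the range excluded from IPAM above, so that MetalLB addresses do not collide with the addresses assigned to machines; the value is illustrative:

export SET_VSPHERE_METALLB_RANGE="10.20.0.150-10.20.0.170"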
For RHEL deployments, fill out
templates/vsphere/rhellicenses.yaml.template.
RHEL license configuration
Use one of the following sets of parameters for the RHEL machines subscription:
The user name and password of your Red Hat Customer Portal account
associated with your RHEL license for Virtual Datacenters, as shown
in the sketch after the warning below.
Optionally, provide the subscription allocation pools to use for the RHEL
subscription activation. If not needed, remove the poolIDs field
so that subscription-manager automatically selects the licenses for
machines.
The activation key and organization ID associated with your Red Hat
account with the RHEL license for Virtual Datacenters. The organization
administrator can create the activation key on the Red Hat Customer
Portal.
If you use the Red Hat Satellite server to manage your
RHEL infrastructure, you can provide a pre-generated activation key from
that server. In this case:

Provide the URL to the Red Hat Satellite RPM for installation
of the CA certificate that belongs to that server.

Configure squid-proxy on the management or regional cluster to
allow access to your Satellite server. For details, see
Configure squid-proxy.

For RHEL 8.7 TechPreview, verify the mirrors
configuration for your activation key. For more details, see
RHEL 8 mirrors configuration.
Warning
Provide only one set of parameters. Mixing the parameters
from different activation methods will cause deployment failure.
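A minimal sketch of the user name and password variant in templates/vsphere/rhellicenses.yaml.template, assuming a RHELLicense object under items: with the fields described above; the apiVersion, kind, and exact field layout are assumptions, so check the template shipped with your release:

items:
- apiVersion: kaas.mirantis.com/v1alpha1
  kind: RHELLicense
  metadata:
    name: kaas-mgmt-rhel-license
  spec:
    username: <yourRedHatPortalUserName>
    password: <yourRedHatPortalPassword>
    # Remove poolIDs for subscription-manager to select licenses automatically.
    poolIDs:
    - <yourSubscriptionPoolID>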
For CentOS deployments, in templates/vsphere/rhellicenses.yaml.template,
remove all lines under items:.
Configure the NTP server.

Before Container Cloud 2.23.0, this step is optional if servers from the
Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where
the regional cluster is being provisioned. Otherwise, configure the
regional NTP server parameters as described below.
Since Container Cloud 2.23.0, optionally disable NTP, which is enabled by
default. This option disables management of the chrony configuration by
Container Cloud so that you can use your own system for chrony management.
Otherwise, configure the regional NTP server parameters as described below.
NTP configuration
Configure the regional NTP server parameters to be applied to all machines
of regional and managed clusters in the specified region.
In templates/vsphere/cluster.yaml.template, add the ntp:servers section
with the list of required server names:
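For example, a minimal sketch with illustrative pool servers; place the section at the nesting level used by your template:

ntp:
  servers:
  - 0.pool.ntp.org
  - 1.pool.ntp.org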
In templates/vsphere/machines.yaml.template, define the following
parameters:
rhelLicense
RHEL license name defined in rhellicenses.yaml.template, defaults to
kaas-mgmt-rhel-license. Remove or comment out this parameter for
CentOS and Ubuntu deployments.
diskGiB
Disk size in GiB for machines; it must match the disk size of the VM
template. You can leave this parameter commented out to use the disk size
of the VM template. The minimum requirement is 120 GiB.
template
Path to the VM template prepared in the previous step.
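A sketch of these parameters, assuming they belong under the providerSpec value of each machine in templates/vsphere/machines.yaml.template; the exact nesting is an assumption:

providerSpec:
  value:
    # Remove or comment out rhelLicense for CentOS and Ubuntu deployments.
    rhelLicense: kaas-mgmt-rhel-license
    # Keep diskGiB commented out to inherit the disk size of the VM template
    # (120 GiB minimum).
    # diskGiB: 120
    template: <pathToVMTemplate>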
Available since Container Cloud 2.24.0 as Technology Preview.
Optional. Enable custom host names for cluster machines. When enabled, any
machine host name in a particular region matches the related Machine
object name. For example, instead of the default kaas-node-<UID>, a
machine host name will be master-0. The custom naming format is more
convenient and easier to work with.
To enable the feature on the management or regional cluster and its future
managed clusters, add the following environment variable:

export CUSTOM_HOSTNAMES=true
Optional. If you require all Internet access to go through a proxy server,
add the following environment variables to bootstrap.env
to bootstrap the regional cluster using the proxy:
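A sketch of the commonly used proxy variables; verify the exact variable names against the documentation for your Container Cloud version:

export HTTP_PROXY=<proxyAddress>
export HTTPS_PROXY=<proxyAddress>
export NO_PROXY=<commaSeparatedListOfIPsAndDomains>

Also export the parameters that identify the new regional cluster. The REGION and REGIONAL_CLUSTER_NAME names come from the caution below; the KUBECONFIG variable pointing to the management cluster kubeconfig is an assumption:

export KUBECONFIG=<pathToManagementClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<newRegionName>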
Substitute the parameters enclosed in angle brackets with the corresponding
values of your cluster.
Caution
The REGION and REGIONAL_CLUSTER_NAME parameter values
must contain only lowercase alphanumeric characters, hyphens,
or periods.
Note
If the bootstrap node for the regional cluster deployment is not
the same where you bootstrapped the management cluster, also
export SSH_KEY_NAME. It is required for the management
cluster to create a publicKey Kubernetes CRD with the
public part of your newly generated ssh_key
for the regional cluster.
export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName> file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
If the bootstrap node for the regional cluster deployment is not
the same where you bootstrapped the management cluster, a new
regional ssh_key will be generated.
Make sure to save this key in a secure location as well.
The workflow of the regional cluster bootstrap script

1. Prepare the bootstrap cluster for the new regional cluster.
2. Load the updated Container Cloud CRDs for Credentials, Cluster,
   and Machines with information about the new regional cluster to the
   management cluster.
3. Connect to each machine of the management cluster through SSH.
4. Wait for the Machines and Cluster objects of the new regional
   cluster to be ready on the management cluster.
5. Load the following objects to the new regional cluster: Secret
   with the management cluster kubeconfig and ClusterRole for
   the Container Cloud provider.
6. Forward the bootstrap cluster endpoint to helm-controller.
7. Wait for all CRDs to be available and verify the objects
   created using these CRDs.
8. Pivot the Cluster API stack to the regional cluster.
9. Switch the LCM Agent from the bootstrap cluster to the regional one.
10. Wait for the Container Cloud components to start on the regional
    cluster.
Verify that network addresses used on your clusters do not overlap with
the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this
network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this
network are allocated on hosts.
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run on any master node:
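Assuming the standard Docker CLI is available on the node, docker info reports the configured default address pools:

docker info | grep -A 3 'Default Address Pools'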
Usually, not all Swarm and MCR addresses are in use. One Swarm Ingress
network is created by default and occupies the 10.0.0.0/24 address
block. Also, three MCR networks are created by default and occupy
three address blocks: 10.99.0.0/20, 10.99.16.0/20, and
10.99.32.0/20.
To verify the actual networks state and addresses in use, run:
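For example, using the standard Docker CLI, list the existing networks and inspect a specific one for its subnet:

docker network ls
docker network inspect <networkName>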