Deploy an OpenStack-based regional cluster
You can deploy an additional regional OpenStack-based cluster to create managed clusters of several provider types or with different configurations.
To deploy an OpenStack-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Verify that the bootstrap directory is updated.
Select from the following options:
For clusters deployed using Container Cloud 2.11.0 or later:
./container-cloud bootstrap download --management-kubeconfig <pathToMgmtKubeconfig> \
  --target-dir <pathToBootstrapDirectory>
For clusters deployed using a Container Cloud release earlier than 2.11.0, or if you deleted the kaas-bootstrap folder, download and run the Container Cloud bootstrap script:

wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Prepare the OpenStack configuration for a new regional cluster:
Log in to the OpenStack Horizon.
In the Project section, select API Access.
In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.
Save the downloaded clouds.yaml file in the kaas-bootstrap folder created by the get_container_cloud.sh script.

In clouds.yaml, add the password field with your OpenStack password under the clouds/openstack/auth section.

Example:

clouds:
  openstack:
    auth:
      auth_url: https://auth.openstack.example.com/v3
      username: your_username
      password: your_secret_password
      project_id: your_project_id
      user_domain_name: your_user_domain_name
    region_name: RegionOne
    interface: public
    identity_api_version: 3
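Optionally, you can verify the credentials with the OpenStack CLI. This assumes the python-openstackclient package is installed on the bootstrap node; run the check from the kaas-bootstrap directory so that the client picks up the local clouds.yaml:

# Issue a token using the "openstack" cloud entry from clouds.yaml
cd kaas-bootstrap
openstack --os-cloud openstack token issue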
Available since Container Cloud 2.17.0. If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric and use the default security group for newly created load balancers, add the following rules for the Kubernetes API server endpoint, Container Cloud application endpoint, and for the MKE web UI and API using the OpenStack CLI:
- direction='ingress'
- ethertype='IPv4'
- protocol='tcp'
- remote_ip_prefix='0.0.0.0/0'
- port_range_max and port_range_min:
  - '443' for the Kubernetes API and Container Cloud application endpoints
  - '6443' for the MKE web UI and API
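For example, a sketch of the corresponding OpenStack CLI calls, assuming the security group is named default and your CLI session is already authenticated against the target cloud; adjust the group name to your environment:

# Kubernetes API server and Container Cloud application endpoints
openstack security group rule create default \
  --ingress --ethertype IPv4 --protocol tcp \
  --remote-ip 0.0.0.0/0 --dst-port 443

# MKE web UI and API
openstack security group rule create default \
  --ingress --ethertype IPv4 --protocol tcp \
  --remote-ip 0.0.0.0/0 --dst-port 6443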
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://auth.openstack.example.com/v3"
The system output must contain no error records.
In case of issues, follow the steps provided in Troubleshooting.
Configure the cluster and machines metadata:
In templates/machines.yaml.template, modify the spec:providerSpec:value section for the 3 control plane nodes marked with the cluster.sigs.k8s.io/control-plane label by substituting the flavor and image parameters with the corresponding values of the control plane nodes in the related OpenStack cluster. For example:

spec: &cp_spec
  providerSpec:
    value:
      apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
      kind: "OpenstackMachineProviderSpec"
      flavor: kaas.minimal
      image: bionic-server-cloudimg-amd64-20190612
Note
The flavor parameter value provided in the example above is cloud-specific and must meet the Container Cloud requirements.

Also, modify other parameters as required.
Modify the templates/cluster.yaml.template parameters to fit your deployment. For example, add the corresponding values for cidrBlocks in the spec::clusterNetwork::services section.
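For example, a sketch of the services network section; the 10.96.0.0/16 value below is only illustrative and must not overlap with networks already used in your environment:

spec:
  clusterNetwork:
    services:
      cidrBlocks:
        - 10.96.0.0/16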
Optional. Available since Container Cloud 2.18.0 as TechPreview. To boot cluster machines from a block storage volume, define the following parameter in the spec:providerSpec section of templates/machines.yaml.template:

bootFromVolume:
  enabled: true
  volumeSize: 120
Note
The minimal storage requirement is 120 GB per node. For details, see Requirements for an OpenStack-based cluster.
To boot the Bastion node from a volume, add the same parameter to templates/cluster.yaml.template in the spec:providerSpec section for Bastion. The default amount of storage, 80 GB, is enough. See the sketch after the NTP step below.

Optional if servers from the Ubuntu NTP pool (*.ubuntu.pool.ntp.org) are accessible from the node where the regional cluster is being provisioned. Otherwise, this step is mandatory.

Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/cluster.yaml.template, add the ntp:servers section with the list of required server names:

spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        regional:
        - helmReleases:
          - name: openstack-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
                    ...
          provider: openstack
        ...
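For the optional Bastion boot-from-volume setting referenced above, the following is a sketch of how it can look in templates/cluster.yaml.template; the bastion key shown here is an assumption, so align the nesting with the existing Bastion block in your template:

spec:
  providerSpec:
    value:
      bastion:                 # assumed location of the Bastion settings
        bootFromVolume:
          enabled: true
          volumeSize: 80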
Optional. If you require all Internet access to go through a proxy server, in bootstrap.env, add the following environment variables to bootstrap the regional cluster using proxy:

- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
- PROXY_CA_CERTIFICATE_PATH
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
export PROXY_CA_CERTIFICATE_PATH="/home/ubuntu/.mitmproxy/mitmproxy-ca-cert.cer"
The following formats of variables are accepted:
Proxy configuration data:

HTTP_PROXY, HTTPS_PROXY
  http://proxy.example.com:port - for anonymous access
  http://user:password@proxy.example.com:port - for restricted access

NO_PROXY
  Comma-separated list of IP addresses or domain names.

PROXY_CA_CERTIFICATE_PATH
  Available since 2.20.0 as GA. Optional. Path to the proxy CA certificate for man-in-the-middle (MITM) proxies. Must be placed on the bootstrap node to be trusted. For details, see Install a CA certificate for a MITM proxy on a bootstrap node.
Warning
If you require Internet access to go through a MITM proxy, ensure that the proxy has streaming enabled as described in Enable streaming for MITM.
Note
- Since Container Cloud 2.20.0, this parameter is generally available for the OpenStack, bare metal, Equinix Metal with private networking, AWS, and vSphere providers.
- For MOSK-based deployments, support for the feature is available since MOSK 22.4.
- Since Container Cloud 2.18.0, this parameter is available as TechPreview for the OpenStack and bare metal providers only.
- For Azure and Equinix Metal with public networking, the feature is not supported.
For implementation details, see Proxy and cache support.
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Requirements for an OpenStack-based cluster.
Optional. Technology Preview in Container Cloud 2.18.0. Removed in Container Cloud 2.19.0 for compatibility reasons and currently not supported. Enables encryption for the Kubernetes workloads network by adding the following field to the Cluster object spec:

spec:
  providerSpec:
    value:
      secureOverlay: true
For more details, see MKE documentation: Kubernetes network encryption.
When the option is enabled, Calico networking is configured to use IP-in-IP overlay and BGP routing.
When the option is disabled, Calico networking is configured to use VXLAN overlay (no BGP).
Clean up the environment configuration:
If you are deploying the regional cluster on top of a baremetal-based management cluster, unset the following parameters:
unset KAAS_BM_ENABLED KAAS_BM_FULL_PREFLIGHT KAAS_BM_PXE_IP \
      KAAS_BM_PXE_MASK KAAS_BM_PXE_BRIDGE KAAS_BM_BM_DHCP_RANGE \
      TEMPLATES_DIR
If you are deploying the regional cluster on top of an AWS-based management cluster, unset the KAAS_AWS_ENABLED parameter:

unset KAAS_AWS_ENABLED
Note
If you are deploying the regional cluster on top of a management cluster of other supported cloud providers, skip this step.
Export the following parameters:
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Caution
The REGION and REGIONAL_CLUSTER_NAME parameter values must contain only lowercase alphanumeric characters, hyphens, or periods.

Note
If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, also export SSH_KEY_NAME. It is required for the management cluster to create a publicKey Kubernetes CRD with the public part of your newly generated ssh_key for the regional cluster.

export SSH_KEY_NAME=<newRegionalClusterSshKeyName>
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location the kubeconfig-<regionalClusterName> file located in the same directory as the bootstrap script. This file contains the admin credentials for the regional cluster. For a quick way to verify access using this file, see the example after the workflow table below.

If the bootstrap node for the regional cluster deployment is not the same node where you bootstrapped the management cluster, a new regional ssh_key will be generated. Make sure to save this key in a secure location as well.

The workflow of the regional cluster bootstrap script
1. Prepare the bootstrap cluster for the new regional cluster.
2. Load the updated Container Cloud CRDs for Credentials, Cluster, and Machines with information about the new regional cluster to the management cluster.
3. Connect to each machine of the management cluster through SSH.
4. Wait for the Machines and Cluster objects of the new regional cluster to be ready on the management cluster.
5. Load the following objects to the new regional cluster: the Secret with the management cluster kubeconfig and the ClusterRole for the Container Cloud provider.
6. Forward the bootstrap cluster endpoint to helm-controller.
7. Wait for all CRDs to be available and verify the objects created using these CRDs.
8. Pivot the Cluster API stack to the regional cluster.
9. Switch the LCM Agent from the bootstrap cluster to the regional one.
10. Wait for the Container Cloud components to start on the regional cluster.
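For example, a minimal sanity check with the new kubeconfig; this is a sketch that assumes kubectl is installed on the bootstrap node and that you run it from the directory containing the bootstrap script and the generated kubeconfig:

# Point kubectl at the newly generated regional cluster kubeconfig
export KUBECONFIG=$(pwd)/kubeconfig-<regionalClusterName>
# List the regional cluster nodes to confirm that the admin credentials work
kubectl get nodes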
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate managed clusters.