Create a management cluster for the OpenStack provider
This section describes how to create an OpenStack-based management cluster
using the Container Cloud Bootstrap web UI.
To create an OpenStack-based management cluster:
If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric
and use the default security group for newly created load balancers, add the
following rules for the Kubernetes API server endpoint, Container Cloud
application endpoint, and for the MKE web UI and API using the OpenStack CLI:
direction='ingress'
ethertype='IPv4'
protocol='tcp'
remote_ip_prefix='0.0.0.0/0'
port_range_max and port_range_min:
'443' for Kubernetes API and Container Cloud application endpoints
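For example, a sketch of adding one of these rules with the OpenStack CLI, where <securityGroupName> is the security group used by the newly created load balancers; repeat the command for each required port range:
openstack security group rule create <securityGroupName> \
  --ingress \
  --ethertype IPv4 \
  --protocol tcp \
  --remote-ip 0.0.0.0/0 \
  --dst-port 443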
Optional. Recommended. Leave the
Guided Bootstrap configuration check box selected.
It enables the cluster creation helper in the next window with a series
of guided steps for a complete setup of a functional management cluster.
The cluster creation helper contains the same configuration windows as
the separate tabs of the left-side menu, but it walks you through the
configuration of the essential provider components one by one inside a
single modal window.
If you select this option, refer to the corresponding steps of this
procedure below for the description of each tab in
Guided Bootstrap configuration.
Click Save.
In the Status column of the Bootstrap page,
monitor the bootstrap region readiness by hovering over the status icon of
the bootstrap region.
Once the orange blinking status icon becomes green and Ready,
the bootstrap region deployment is complete. If the status
is Error, refer to Troubleshooting.
You can monitor live deployment status of the following bootstrap region
components:
Helm: Installation status of bootstrap Helm releases
Provider: Status of provider configuration and installation for related charts and Deployments
Deployments: Readiness of all Deployments in the bootstrap cluster
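For example, to cross-check the Deployments component from the CLI, a generic sketch that assumes you have the kubeconfig of the bootstrap cluster at hand:
kubectl --kubeconfig <bootstrapClusterKubeconfig> get deployments --all-namespaces
All listed Deployments must report the expected number of ready replicas.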
Configure credentials for the new cluster.
Credentials configuration
In the Credentials tab:
Click Add Credential to add your OpenStack credentials.
You can either upload your OpenStack clouds.yaml configuration
file or fill in the fields manually.
Verify that the new credentials status is Ready.
If the status is Error, hover over the status to determine
the reason for the issue.
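Optionally, before uploading clouds.yaml, you can sanity-check the credentials locally with the OpenStack CLI; <cloudName> below is the name of the cloud entry in your clouds.yaml and is only an illustration:
openstack --os-cloud <cloudName> token issue
If the command returns a token, the credentials are valid on the OpenStack side.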
Optional. In the SSH Keys tab, click Add SSH Key
to upload the public SSH key(s) to be used for VM creation.
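If you do not have a key pair yet, you can generate one locally, for example (the file path is only an illustration):
ssh-keygen -t ed25519 -f ~/.ssh/container-cloud -C container-cloud
Then upload the contents of the resulting ~/.ssh/container-cloud.pub file.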
Optional. Enable proxy access to the cluster.
Proxy configuration
In the Proxies tab, configure the proxy:
Click Add Proxy.
In the Add New Proxy wizard, fill out the form
with the following parameters:
If your proxy requires a trusted CA certificate, select the
CA Certificate check box and paste a CA certificate for a MITM
proxy to the corresponding field or upload a certificate using
Upload Certificate.
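If you are not sure that the file you are about to upload is the correct CA certificate, you can inspect it locally first; ca.pem is an illustrative file name:
openssl x509 -in ca.pem -noout -subject -issuer -dates
The output shows the certificate subject, issuer, and validity period.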
In the Clusters tab, click Create Cluster
and fill out the form with the following parameters:
Cluster configuration
Add Cluster name.
Set the provider Service User Name and
Service User Password.
Service user is the initial user to create in Keycloak for
access to a newly deployed management cluster. By default, it has the
global-admin, operator (namespaced), and bm-pool-operator
(namespaced) roles.
You can delete serviceuser after setting up other required users with
specific roles or after any integration with an external identity provider,
such as LDAP.
Configure general provider settings and Kubernetes parameters:
Technology Preview: select Boot From Volume to boot the
Bastion node from a block storage volume and select the required amount
of storage (80 GB is enough).
Kubernetes
Node CIDR
The Kubernetes nodes CIDR block. For example, 10.10.10.0/24.
Services CIDR Blocks
The Kubernetes Services CIDR block. For example, 10.233.0.0/18.
Pods CIDR Blocks
The Kubernetes Pods CIDR block. For example, 10.233.64.0/18.
Note
The network subnet size of Kubernetes pods influences the number of
nodes that can be deployed in the cluster.
The default subnet size /18 is enough to create a cluster with
up to 256 nodes. Each node uses the /26 address blocks
(64 addresses), at least one address block is allocated per node.
These addresses are used by the Kubernetes pods with
hostNetwork:false. The cluster size may be limited
further when some nodes use more than one address block.
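To illustrate this limit: a /18 block provides 2^(32-18) = 16384 addresses and a /26 block provides 2^(32-26) = 64 addresses, so a /18 Pods CIDR fits 16384 / 64 = 256 per-node address blocks, that is, up to 256 nodes when each node uses exactly one block.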
Configure StackLight:
StackLight configuration
Click Create.
Add machines to the bootstrap cluster:
Machines configuration
In the Clusters tab, click the required cluster name.
The cluster page with the Machines list opens.
Specify an odd number of machines to create. Only
Manager machines are allowed.
Caution
The required minimum number of manager machines is three for HA.
A cluster can have more than three manager machines but only an odd number of
machines.
In an even-sized cluster, an additional machine remains in the Pending
state until an extra manager machine is added.
An even number of manager machines does not provide additional fault
tolerance but increases the number of nodes required for the etcd quorum.
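For example, with three managers the etcd quorum is 2, so the cluster tolerates the loss of one manager; with four managers the quorum grows to 3 while the cluster still tolerates only one lost manager.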
Flavor
From the drop-down list, select the required hardware
configuration for the machine. The list of available flavors
corresponds to the one in your OpenStack environment.
A Container Cloud cluster based on both Ubuntu and
CentOS operating systems is not supported.
Availability Zone
From the drop-down list, select the availability zone from which
the new machine will be launched.
Configure Server Metadata
Optional. Select Configure Server Metadata and add
the required number of string key-value pairs for the machine
meta_data configuration in cloud-init.
Prohibited keys are KaaS, cluster, clusterID, and
namespace, as they are used by Container Cloud. A way to verify the
resulting metadata on the created VM is sketched after this step.
Boot From Volume
Optional. Technology Preview. Select to boot a machine from a block
storage volume. Use the Up and Down arrows
in the Volume Size (GiB) field to define the required
volume size.
This option applies to clouds that do not have enough space on
hypervisors. After enabling this option, the Cinder storage
is used instead of the Nova storage.
Click Create.
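If you used Configure Server Metadata, a hedged way to check later that the key-value pairs landed on the created VM is the OpenStack CLI, where <serverName> is the name of the machine VM and is only an illustration:
openstack server show <serverName> -c properties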
Optional. Using the Container Cloud CLI, modify the provider-specific and
other cluster settings as described in Configure optional cluster settings.
Select from the following options to start cluster deployment:
If you use the Guided Bootstrap configuration
Click Deploy.
If you use the left-side web UI menu
Approve the previously created bootstrap region using the Container Cloud
CLI:
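A hedged sketch of this step, assuming the container-cloud binary downloaded during bootstrap is in the current directory and <bootstrapRegionName> is an illustrative placeholder; verify the exact subcommand against the CLI help of your Container Cloud release:
./container-cloud bootstrap approve <bootstrapRegionName>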
Once you approve the bootstrap region, no cluster or
machine modification is allowed.
Monitor the deployment progress of the cluster and machines.
Monitoring of the cluster readiness
To monitor the cluster readiness, hover over the status icon of a specific
cluster in the Status column of the Clusters page.
Once the orange blinking status icon becomes green and Ready,
the cluster deployment or update is complete.
You can monitor live deployment status of the following cluster components:
Bastion: For the OpenStack-based management clusters, the Bastion node IP address status that confirms the Bastion node creation
Helm: Installation or upgrade status of all Helm releases
Kubelet: Readiness of the node in a Kubernetes cluster, as reported by kubelet
Kubernetes: Readiness of all requested Kubernetes objects
Nodes: Equality of the requested number of nodes in the cluster to the number of nodes that have the Ready LCM status
OIDC: Readiness of the cluster OIDC configuration
StackLight: Health of all StackLight-related objects in a Kubernetes cluster
Swarm: Readiness of all nodes in a Docker Swarm cluster
LoadBalancer: Readiness of the Kubernetes API load balancer
ProviderInstance: Readiness of all machines in the underlying infrastructure (virtual or bare metal, depending on the provider type)
Graceful Reboot: Readiness of a cluster during a scheduled graceful reboot, available since Cluster releases 15.0.1 and 14.0.0
Infrastructure Status: Available since Container Cloud 2.25.0 for the bare metal and OpenStack providers. Readiness of the following cluster components:
Bare metal: the MetalLBConfig object along with MetalLB and DHCP subnets.
OpenStack: cluster network, routers, load balancers, and Bastion along with their ports and floating IPs.
LCM Operation: Available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Health of all LCM operations on the cluster and its machines
LCM Agent: Available since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Health of all LCM agents on cluster machines and the status of the LCM agent update to the version from the current Cluster release
To monitor machine readiness, use the status icon of a specific machine on
the Clusters page.
Quick status
On the Clusters page, in the Managers column.
The green status icon indicates that the machine is Ready;
the orange status icon indicates that the machine is Updating.
Detailed status
In the Machines section of a particular cluster
page, in the Status column. Hover over a particular machine
status icon to verify the deploy or update status of a
specific machine component.
You can monitor the status of the following machine components:
Kubelet: Readiness of a node in a Kubernetes cluster
Swarm: Health and readiness of a node in a Docker Swarm cluster
LCM: LCM readiness status of a node
ProviderInstance: Readiness of a node in the underlying infrastructure (virtual or bare metal, depending on the provider type)
Graceful Reboot: Readiness of a machine during a scheduled graceful reboot of a cluster, available since Cluster releases 15.0.1 and 14.0.0
Infrastructure Status: Available since Container Cloud 2.25.0 for the bare metal provider only. Readiness of the IPAMHost, L2Template, BareMetalHost, and BareMetalHostProfile objects associated with the machine
LCM Operation: Available since Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Health of all LCM operations on the machine
LCM Agent: Available since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0). Health of the LCM Agent on the machine and the status of the LCM Agent update to the version from the current Cluster release
The machine creation starts with the Provision status.
During provisioning, the machine is not expected to be accessible
since its infrastructure (VM, network, and so on) is being created.
Other machine statuses are the same as the LCMMachine object states:
Uninitialized - the machine is not yet assigned to an LCMCluster.
Pending - the agent reports a node IP address and host name.
Prepare - the machine executes StateItems that correspond
to the prepare phase. This phase usually involves downloading
the necessary archives and packages.
Deploy - the machine executes StateItems that correspond
to the deploy phase, that is, becoming a Mirantis Kubernetes Engine (MKE)
node.
Ready - the machine is deployed.
Upgrade - the machine is being upgraded to the new MKE version.
Reconfigure - the machine executes StateItems that correspond
to the reconfigure phase. The machine configuration is being updated
without affecting workloads running on the machine.
Once the status changes to Ready, the deployment of the cluster
components on this machine is complete.
You can also monitor the live machine status using the API:
kubectl get machines <machineName> -o wide
Example of system response since Container Cloud 2.23.0:
Not all Swarm and MCR addresses are usually in use. One Swarm Ingress
network is created by default and occupies the 10.0.0.0/24 address
block. Also, three MCR networks are created by default and occupy
three address blocks: 10.99.0.0/20, 10.99.16.0/20,
10.99.32.0/20.
To verify the actual state of the networks and the addresses in use, run:
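For example (a hedged sketch; the Swarm and MCR networks are Docker networks, so they can be inspected from a manager node, where <networkName> is an illustrative placeholder):
docker network ls
docker network inspect <networkName>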