Deploy a management cluster using the Container Cloud API

This section contains an overview of the cluster-related objects along with the configuration procedure of these objects during deployment of a management cluster using Bootstrap v2 through the Container Cloud API.

Deploy a management cluster using CLI

The following procedure describes how to prepare and deploy a management cluster using Bootstrap v2 by editing the YAML templates available in the kaas-bootstrap/templates/ folder.

To deploy a management cluster using CLI:

  1. Set up a bootstrap cluster.

  2. Export kubeconfig of the kind cluster:

    export KUBECONFIG=<pathToKindKubeconfig>
    

    By default, <pathToKindKubeconfig> is $HOME/.kube/kind-config-clusterapi.

  3. For the bare metal provider, configure BIOS on a bare metal host.

  4. For the OpenStack provider, prepare the OpenStack configuration.

    OpenStack configuration
    1. Log in to the OpenStack Horizon.

    2. In the Project section, select API Access.

    3. In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.

    4. Save the downloaded clouds.yaml file in the kaas-bootstrap folder created by the get_container_cloud.sh script.

    5. In clouds.yaml, add the password field with your OpenStack password under the clouds/openstack/auth section.

      Example:

      clouds:
        openstack:
          auth:
            auth_url: https://auth.openstack.example.com/v3
            username: your_username
            password: your_secret_password
            project_id: your_project_id
            user_domain_name: your_user_domain_name
          region_name: RegionOne
          interface: public
          identity_api_version: 3
      
    6. If you deploy Container Cloud on top of MOSK Victoria with Tungsten Fabric and use the default security group for newly created load balancers, use the OpenStack CLI to add the following rules for the Kubernetes API server endpoint, the Container Cloud application endpoint, and the MKE web UI and API:

      • direction='ingress'

      • ethertype='IPv4'

      • protocol='tcp'

      • remote_ip_prefix='0.0.0.0/0'

      • port_range_max and port_range_min:

        • '443' for Kubernetes API and Container Cloud application endpoints

        • '6443' for MKE web UI and API
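      As an illustrative sketch, the rules above can be created with the OpenStack CLI. The security group name (default) and creating one rule per port are assumptions; adjust both to your environment:

      ```shell
      # Allow inbound TCP on 443 (Kubernetes API, Container Cloud application
      # endpoints) and 6443 (MKE web UI and API) from any source address
      for port in 443 6443; do
        openstack security group rule create default \
          --ingress --ethertype IPv4 --protocol tcp \
          --remote-ip 0.0.0.0/0 --dst-port "${port}"
      done
      ```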

    7. Verify access to the target cloud endpoint from Docker. For example:

      docker run --rm alpine sh -c "apk add --no-cache curl; \
      curl https://auth.openstack.example.com/v3"
      

      The system output must contain no error records.

  5. Depending on the selected provider, navigate to one of the following locations:

    • Bare metal: kaas-bootstrap/templates/bm

    • OpenStack: kaas-bootstrap/templates

    • vSphere: kaas-bootstrap/templates/vsphere

    Warning

    The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying objects containing credentials. Such Container Cloud objects include:

    • BareMetalHostCredential

    • ByoCredential

    • ClusterOIDCConfiguration

    • License

    • OpenstackCredential

    • Proxy

    • RHELLicense

    • ServiceUser

    • TLSConfig

    • VsphereCredential

    Therefore, do not use kubectl apply on these objects. Use kubectl create, kubectl patch, or kubectl edit instead.

    If you used kubectl apply on these objects, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the objects using kubectl edit.
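    As an alternative to the interactive kubectl edit, the annotation can be removed non-interactively with kubectl annotate and a trailing dash. The object kind and name below are placeholders:

    ```shell
    # A trailing "-" after the annotation key removes that annotation
    ./kaas-bootstrap/bin/kubectl annotate openstackcredential <credsName> \
        kubectl.kubernetes.io/last-applied-configuration-
    ```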

  6. Create the BootstrapRegion object by modifying bootstrapregion.yaml.template.

    Configuration of bootstrapregion.yaml.template
    1. Select from the following options:

      • Since Container Cloud 2.26.0 (Cluster releases 16.1.0 and 17.1.0), set the required <providerName> and use the default <regionName>, which is region-one.

      • Before Container Cloud 2.26.0, set the required <providerName> and <regionName>.

      apiVersion: kaas.mirantis.com/v1alpha1
      kind: BootstrapRegion
      metadata:
        name: <regionName>
        namespace: default
      spec:
        provider: <providerName>
      
    2. Create the object:

      ./kaas-bootstrap/bin/kubectl create -f \
          kaas-bootstrap/templates/<providerName>/bootstrapregion.yaml.template
      

    Note

    In the following steps, create the objects using the commands below with the required template name:

    • For bare metal:

      ./kaas-bootstrap/bin/kubectl create -f \
          kaas-bootstrap/templates/bm/<templateName>.yaml.template
      
    • For OpenStack:

      ./kaas-bootstrap/bin/kubectl create -f \
          kaas-bootstrap/templates/<templateName>.yaml.template
      
    • For vSphere:

      ./kaas-bootstrap/bin/kubectl create -f \
          kaas-bootstrap/templates/vsphere/<templateName>.yaml.template
      
  7. For the OpenStack and vSphere providers only. Create the Credentials object by modifying <providerName>-config.yaml.template.

    1. Add the provider-specific parameters:

      OpenStack

      • SET_OS_AUTH_URL: Identity endpoint URL.

      • SET_OS_USERNAME: OpenStack user name.

      • SET_OS_PASSWORD: Value of the OpenStack password. This field is present only while a user creates or changes the password. Once the controller detects this field, it updates the password in the secret and removes the value field from the OpenstackCredential object.

      • SET_OS_PROJECT_ID: Unique ID of the OpenStack project.

      vSphere

      Note

      Contact your vSphere administrator for the values of the parameters below.

      • SET_VSPHERE_SERVER: IP address or FQDN of the vCenter Server.

      • SET_VSPHERE_SERVER_PORT: Port of the vCenter Server. For example, port: "8443". Leave empty to use the default "443".

      • SET_VSPHERE_DATACENTER: vSphere data center name.

      • SET_VSPHERE_SERVER_INSECURE: Flag that controls validation of the vSphere Server certificate. Must be true or false.

      • SET_VSPHERE_CAPI_PROVIDER_USERNAME: vSphere Cluster API provider user name that you added when preparing the deployment user setup and permissions.

      • SET_VSPHERE_CAPI_PROVIDER_PASSWORD: vSphere Cluster API provider user password.

      • SET_VSPHERE_CLOUD_PROVIDER_USERNAME: vSphere Cloud Provider deployment user name that you added when preparing the deployment user setup and permissions.

      • SET_VSPHERE_CLOUD_PROVIDER_PASSWORD: vSphere Cloud Provider deployment user password.

    2. Skip this step since Container Cloud 2.26.0. Before this release, set the kaas.mirantis.com/region: <regionName> label; its value must match the BootstrapRegion object name.

    3. Skip this step since Container Cloud 2.26.0. Before this release, set the kaas.mirantis.com/regional-credential label to "true" to use the credentials for the management cluster deployment. For example, for vSphere:

      cat vsphere-config.yaml.template
      ---
      apiVersion: kaas.mirantis.com/v1alpha1
      kind: VsphereCredential
      metadata:
        name: cloud-config
        labels:
          kaas.mirantis.com/regional-credential: "true"
      spec:
        ...
      
    4. Verify that the credentials for the management cluster deployment are valid. For example, for vSphere:

      ./kaas-bootstrap/bin/kubectl get vspherecredentials <credsName> \
          -o jsonpath='{.status.valid}'
      

      The output of the command must be "true". Otherwise, fix the issue with credentials before proceeding to the next step.

  8. Create the ServiceUser object by modifying serviceusers.yaml.template.

    Configuration of serviceusers.yaml.template

    The service user is the initial user created in Keycloak for access to a newly deployed management cluster. By default, it has the global-admin, operator (namespaced), and bm-pool-operator (namespaced) roles.

    You can delete the service user after setting up other required users with specific roles or after integrating with an external identity provider, such as LDAP.

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: ServiceUserList
    items:
    - apiVersion: kaas.mirantis.com/v1alpha1
      kind: ServiceUser
      metadata:
        name: SET_USERNAME
      spec:
        password:
          value: SET_PASSWORD
    
  9. Optional. Prepare any number of additional SSH keys using the following example:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: PublicKey
    metadata:
      name: <SSHKeyName>
      namespace: default
    spec:
      publicKey: |
        <insert your public key here>
    
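    If you do not have a key pair yet, you can generate one locally and paste the public half into spec.publicKey. The file name below is an example:

    ```shell
    # Generate an Ed25519 key pair; no passphrase, example file name
    ssh-keygen -q -t ed25519 -N '' -f ./bootstrap-ssh-key
    # The contents of the .pub file go into the publicKey field of the object
    cat ./bootstrap-ssh-key.pub
    ```
    
    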
  10. Optional. Add the Proxy object using the example below:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: Proxy
    metadata:
      labels:
        kaas.mirantis.com/region: <regionName>
      name: <proxyName>
      namespace: default
    spec:
      ...
    

    The region label must match the BootstrapRegion object name.

    Note

    The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add this label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.
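    A minimal sketch of a filled-in spec, assuming the commonly used httpProxy, httpsProxy, and noProxy fields; verify the exact field names against the Proxy resource reference:

    ```yaml
    # Field names and values are illustrative assumptions
    spec:
      httpProxy: http://proxy.example.com:3128
      httpsProxy: http://proxy.example.com:3128
      noProxy: 10.0.0.0/8,192.168.0.0/16
    ```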

  11. Configure and apply the cluster configuration using cluster deployment templates:

    1. In cluster.yaml.template, set mandatory cluster labels:

      labels:
        kaas.mirantis.com/provider: <providerName>
        kaas.mirantis.com/region: <regionName>
      

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add this label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

    2. Configure provider-specific settings as required.

      Bare metal
      1. Inspect the default bare metal host profile definition in templates/bm/baremetalhostprofiles.yaml.template and adjust it to fit your hardware configuration. For details, see Customize the default bare metal host profile.

        Warning

        Any data stored on any device defined in the fileSystems list can be deleted or corrupted during cluster (re)deployment. This happens because each device in the fileSystems list is part of the rootfs directory tree, which is overwritten during (re)deployment.

        Examples of affected devices include:

        • A raw device partition with a file system on it

        • A device partition in a volume group with a logical volume that has a file system on it

        • An mdadm RAID device with a file system on it

        • An LVM RAID device with a file system on it

        Neither the wipe field (deprecated) nor the wipeDevice structure (recommended since Container Cloud 2.26.0) has any effect in this case, and neither can protect data on these devices.

        Therefore, to prevent data loss, move the necessary data from these file systems to another server beforehand, if required.

      2. In templates/bm/baremetalhosts.yaml.template, update the bare metal host definitions according to your environment configuration. Use the reference table below to manually set all parameters that start with SET_.

        Note

        Before Container Cloud 2.26.0 (Cluster releases 17.1.0 and 16.1.0), also set the name of the bootstrapRegion object from bootstrapregion.yaml.template for the kaas.mirantis.com/region label across all objects listed in templates/bm/baremetalhosts.yaml.template.

        Bare metal hosts template mandatory parameters

        • SET_MACHINE_0_IPMI_USERNAME: The IPMI user name to access the BMC. Example: user

        • SET_MACHINE_0_IPMI_PASSWORD: The IPMI password to access the BMC. Example: password

        • SET_MACHINE_0_MAC: The MAC address of the first master node in the PXE network. Example: ac:1f:6b:02:84:71

        • SET_MACHINE_0_BMC_ADDRESS: The IP address of the BMC endpoint for the first master node in the cluster. Must be an address from the OOB network that is accessible through the management network gateway. Example: 192.168.100.11

        • SET_MACHINE_1_IPMI_USERNAME: The IPMI user name to access the BMC. Example: user

        • SET_MACHINE_1_IPMI_PASSWORD: The IPMI password to access the BMC. Example: password

        • SET_MACHINE_1_MAC: The MAC address of the second master node in the PXE network. Example: ac:1f:6b:02:84:72

        • SET_MACHINE_1_BMC_ADDRESS: The IP address of the BMC endpoint for the second master node in the cluster. Must be an address from the OOB network that is accessible through the management network gateway. Example: 192.168.100.12

        • SET_MACHINE_2_IPMI_USERNAME: The IPMI user name to access the BMC. Example: user

        • SET_MACHINE_2_IPMI_PASSWORD: The IPMI password to access the BMC. Example: password

        • SET_MACHINE_2_MAC: The MAC address of the third master node in the PXE network. Example: ac:1f:6b:02:84:73

        • SET_MACHINE_2_BMC_ADDRESS: The IP address of the BMC endpoint for the third master node in the cluster. Must be an address from the OOB network that is accessible through the management network gateway. Example: 192.168.100.13

        Note

        The IPMI user name and password parameters require values in plain text.

      3. Configure cluster network:

        Important

        Bootstrap V2 supports only separated PXE and LCM networks.

        • To ensure a successful bootstrap, allow asymmetric routing on the interfaces of the management cluster nodes. This is required because the seed node relies on one network by default, which can cause traffic asymmetry.

          In the kernelParameters section of bm/baremetalhostprofiles.yaml.template, set rp_filter to 2. This enables loose mode as defined in RFC3704.

          Example configuration of asymmetric routing
          ...
          kernelParameters:
            ...
            sysctl:
              # Enables the "Loose mode" for the "k8s-lcm" interface (management network)
              net.ipv4.conf.k8s-lcm.rp_filter: "2"
              # Enables the "Loose mode" for the "bond0" interface (PXE network)
              net.ipv4.conf.bond0.rp_filter: "2"
              ...
          

          Note

          More complex solutions that eliminate traffic asymmetry altogether are not described in this manual, for example:

          • Configure source routing on management cluster nodes.

          • Plug the seed node into the same networks as the management cluster nodes, which requires custom configuration of the seed node.

        • Update the network objects definition in templates/bm/ipam-objects.yaml.template according to the environment configuration. By default, this template implies the use of separate PXE and life-cycle management (LCM) networks.

        • Manually set all parameters that start with SET_.

        For configuration details of bond network interface for the PXE and management network, see Configure NIC bonding.

        Example of the default L2 template snippet for a management cluster:

        bonds:
          bond0:
            interfaces:
              - {{ nic 0 }}
              - {{ nic 1 }}
            parameters:
              mode: active-backup
              primary: {{ nic 0 }}
            dhcp4: false
            dhcp6: false
            addresses:
              - {{ ip "bond0:mgmt-pxe" }}
        vlans:
          k8s-lcm:
            id: SET_VLAN_ID
            link: bond0
            addresses:
              - {{ ip "k8s-lcm:kaas-mgmt" }}
            nameservers:
              addresses: {{ nameservers_from_subnet "kaas-mgmt" }}
            routes:
              - to: 0.0.0.0/0
                via: {{ gateway_from_subnet "kaas-mgmt" }}
        

        In this example, the following configuration applies:

        • A bond of two NIC interfaces

        • A static address in the PXE network set on the bond

        • An isolated L2 segment for the LCM network is configured using the k8s-lcm VLAN with the static address in the LCM network

        • The default gateway address is in the LCM network

        For general concepts of configuring separate PXE and LCM networks for a management cluster, see Separate PXE and management networks. For the latest object templates and variable names to use, see the following tables.

        Network parameters mapping overview (parameters to update manually, per deployment file):

        • ipam-objects.yaml.template: SET_LB_HOST, SET_MGMT_ADDR_RANGE, SET_MGMT_CIDR, SET_MGMT_DNS, SET_MGMT_NW_GW, SET_MGMT_SVC_POOL, SET_PXE_ADDR_POOL, SET_PXE_ADDR_RANGE, SET_PXE_CIDR, SET_PXE_SVC_POOL, SET_VLAN_ID

        • bootstrap.env: KAAS_BM_PXE_IP, KAAS_BM_PXE_MASK, KAAS_BM_PXE_BRIDGE

        The table below contains examples of mandatory parameter values to set in templates/bm/ipam-objects.yaml.template for a network scheme with the following networks:

        • 172.16.59.0/24 - PXE network

        • 172.16.61.0/25 - LCM network

        Mandatory network parameters of the IPAM objects template

        • SET_PXE_CIDR: The IP address of the PXE network in the CIDR notation. The minimum recommended network size is 256 addresses (/24 prefix length). Example: 172.16.59.0/24

        • SET_PXE_SVC_POOL: The IP address range to use for endpoints of load balancers in the PXE network for the Container Cloud services: Ironic API, DHCP server, HTTP server, and caching server. The minimum required range size is 5 addresses. Example: 172.16.59.6-172.16.59.15

        • SET_PXE_ADDR_POOL: The IP address range in the PXE network to use for dynamic address allocation for hosts during inspection and provisioning. The minimum recommended range size is 30 addresses for management cluster nodes located in a separate PXE network segment. Otherwise, it depends on the number of managed cluster nodes to deploy in the same PXE network segment as the management cluster nodes. Example: 172.16.59.51-172.16.59.200

        • SET_PXE_ADDR_RANGE: The IP address range in the PXE network to use for static address allocation on each management cluster node. The minimum recommended range size is 6 addresses. Example: 172.16.59.41-172.16.59.50

        • SET_MGMT_CIDR: The IP address of the LCM network for the management cluster in the CIDR notation. If managed clusters will have their separate LCM networks, those networks must be routable to the LCM network. The minimum recommended network size is 128 addresses (/25 prefix length). Example: 172.16.61.0/25

        • SET_MGMT_NW_GW: The default gateway address in the LCM network. This gateway must provide access to the OOB network of the Container Cloud cluster and to the Internet to download the Mirantis artifacts. Example: 172.16.61.1

        • SET_LB_HOST: The IP address of the externally accessible MKE API endpoint of the cluster in the CIDR notation. This address must be within the management network (SET_MGMT_CIDR) but must NOT overlap with any other addresses or address ranges within this network. External load balancers are not supported. Example: 172.16.61.5/32

        • SET_MGMT_DNS: An external (non-Kubernetes) DNS server accessible from the LCM network. Example: 8.8.8.8

        • SET_MGMT_ADDR_RANGE: The IP address range that includes addresses to be allocated to bare metal hosts in the LCM network for the management cluster. When this network is shared with managed clusters, the size of this range limits the number of hosts that can be deployed in all clusters sharing this network. When this network is solely used by a management cluster, the range must include at least 6 addresses for bare metal hosts of the management cluster. Example: 172.16.61.30-172.16.61.40

        • SET_MGMT_SVC_POOL: The IP address range to use for the externally accessible endpoints of load balancers in the LCM network for the Container Cloud services, such as Keycloak, web UI, and so on. The minimum required range size is 19 addresses. Example: 172.16.61.10-172.16.61.29

        • SET_VLAN_ID: The VLAN ID used for isolation of the LCM network. The bootstrap.sh process and the seed node must have routable access to the network in this VLAN. Example: 3975

        When using separate PXE and LCM networks, the management cluster services are exposed in different networks using two separate MetalLB address pools:

        • Services exposed through the PXE network are as follows:

          • Ironic API as a bare metal provisioning server

          • HTTP server that provides images for network boot and server provisioning

          • Caching server for accessing the Container Cloud artifacts deployed on hosts

        • Services exposed through the LCM network are all other Container Cloud services, such as Keycloak, web UI, and so on.

        The default MetalLB configuration described in the MetalLBConfigTemplate object template of templates/bm/ipam-objects.yaml.template uses two separate MetalLB address pools. Also, it uses the interfaces selector in its l2Advertisements template.

        Caution

        When you change the L2Template object template in templates/bm/ipam-objects.yaml.template, ensure that interfaces listed in the interfaces field of the MetalLBConfigTemplate.spec.templates.l2Advertisements section match those used in your L2Template. For details about the interfaces selector, see API Reference: MetalLBConfigTemplate spec.

        See Configure MetalLB for details on MetalLB configuration.

      4. In cluster.yaml.template, update the cluster-related settings to fit your deployment.

      5. Optional. Enable WireGuard for traffic encryption on the Kubernetes workloads network.

        WireGuard configuration
        1. Ensure that the Calico MTU size is at least 60 bytes smaller than the interface MTU size of the workload network. IPv4 WireGuard uses a 60-byte header. For details, see Set the MTU size for Calico.

        2. In templates/bm/cluster.yaml.template, enable WireGuard by adding the secureOverlay parameter:

          spec:
            ...
            providerSpec:
              value:
                ...
                secureOverlay: true
          

          Caution

          Changing this parameter on a running cluster causes a downtime that can vary depending on the cluster size.

        For more details about WireGuard, see Calico documentation: Encrypt in-cluster pod traffic.

      OpenStack

      Adjust the templates/cluster.yaml.template parameters to suit your deployment:

      1. In the spec::providerSpec::value section, add the mandatory ExternalNetworkID parameter, which is the ID of an external OpenStack network. The external network is required to provide public Internet access to the virtual machines.

      2. In the spec::clusterNetwork::services section, add the corresponding values for cidrBlocks.

      3. Configure other parameters as required.
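      As a rough sketch, the two settings above land in templates/cluster.yaml.template as follows. The key spelling (externalNetworkID) and the example values are assumptions; check them against the template itself:

      ```yaml
      spec:
        clusterNetwork:
          services:
            cidrBlocks:
            - 10.96.0.0/16          # example services CIDR; choose one that fits your cloud
        providerSpec:
          value:
            # ID of the external OpenStack network used for public Internet access;
            # key name is an assumption based on the parameter name in the text
            externalNetworkID: <externalNetworkID>
      ```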

      vSphere
      1. Configure MetalLB parameters:

        1. Open the required configuration files for editing:

          • templates/vsphere/metallbconfig.yaml.template. For a detailed MetalLBConfig object description, see API Reference: MetalLBConfig resource.

          • templates/vsphere/cluster.yaml.template.

        2. Add SET_VSPHERE_METALLB_RANGE, which is the MetalLB range of IP addresses to assign to load balancers for Kubernetes Services.

          Note

          To obtain the VSPHERE_METALLB_RANGE parameter for the selected vSphere network, contact your vSphere administrator who provides you with the IP ranges dedicated to your environment.

      2. Modify templates/vsphere/cluster.yaml.template:

        vSphere cluster network parameters
        1. Modify the following required network parameters:

          Required parameters

          • SET_LB_HOST: IP address from the provided vSphere network for the Kubernetes API load balancer (Keepalived VIP).

          • SET_VSPHERE_DATASTORE: Name of the vSphere datastore. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider.

          • SET_VSPHERE_MACHINES_FOLDER: Path to a folder where the cluster machines metadata will be stored.

          • SET_VSPHERE_NETWORK_PATH: Path to a network for cluster machines.

          • SET_VSPHERE_RESOURCE_POOL_PATH: Path to a resource pool in which VMs will be created.

          Note

          To obtain the LB_HOST parameter for the selected vSphere network, contact your vSphere administrator, who provides you with the IP ranges dedicated to your environment.

          Modify other parameters if required. For example, add the corresponding values for cidrBlocks in the spec::clusterNetwork::services section.

        2. For either DHCP or non-DHCP vSphere network:

          1. Determine the vSphere network parameters as described in VMware vSphere network objects and IPAM recommendations.

          2. Provide the following additional parameters for a proper network setup on machines using embedded IP address management (IPAM) in templates/vsphere/cluster.yaml.template:

            Note

            To obtain IPAM parameters for the selected vSphere network, contact your vSphere administrator who provides you with IP ranges dedicated to your environment only.

            vSphere configuration data

            • ipamEnabled: Enables IPAM. The recommended value is true for both DHCP and non-DHCP networks.

            • SET_VSPHERE_NETWORK_CIDR: CIDR of the provided vSphere network. For example, 10.20.0.0/16.

            • SET_VSPHERE_NETWORK_GATEWAY: Gateway of the provided vSphere network.

            • SET_VSPHERE_CIDR_INCLUDE_RANGES: IP range for the cluster machines. Specify a range within the provided CIDR. For example, 10.20.0.100-10.20.0.200. If a DHCP network is used, this range must not intersect with the DHCP range of the network.

            • SET_VSPHERE_CIDR_EXCLUDE_RANGES: Optional. IP ranges to be excluded from being assigned to the cluster machines. The MetalLB range and SET_LB_HOST must not intersect with the addresses for IPAM. For example, 10.20.0.150-10.20.0.170.

            • SET_VSPHERE_NETWORK_NAMESERVERS: List of nameservers for the provided vSphere network.
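            Under stated assumptions, the values above could be arranged as below. The field names here are hypothetical illustrations of the mapping; take the authoritative structure and key names from templates/vsphere/cluster.yaml.template itself:

            ```yaml
            # Hypothetical layout; only the SET_ placeholders are authoritative
            ipamEnabled: true
            networkCIDR: 10.20.0.0/16        # SET_VSPHERE_NETWORK_CIDR
            gateway: 10.20.0.1               # SET_VSPHERE_NETWORK_GATEWAY
            includeRanges:
            - 10.20.0.100-10.20.0.200        # SET_VSPHERE_CIDR_INCLUDE_RANGES
            excludeRanges:
            - 10.20.0.150-10.20.0.170        # SET_VSPHERE_CIDR_EXCLUDE_RANGES
            nameservers:
            - <nameserverIP>                 # SET_VSPHERE_NETWORK_NAMESERVERS
            ```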

      3. For RHEL deployments, fill out templates/vsphere/rhellicenses.yaml.template.

        RHEL license configuration

        Use one of the following sets of parameters for the RHEL machines subscription:

        • The user name and password of your Red Hat Customer Portal account associated with your RHEL license for Virtual Datacenters.

          Optionally, provide the subscription allocation pools to use for the RHEL subscription activation. If not needed, remove the poolIDs field for subscription-manager to automatically select the licenses for machines.

          For example:

          spec:
            username: <username>
            password:
              value: <password>
            poolIDs:
            - <pool1>
            - <pool2>
          
        • The activation key and organization ID associated with your Red Hat account with the RHEL license for Virtual Datacenters. The organization administrator can create the activation key on the Red Hat Customer Portal.

          If you use the Red Hat Satellite server to manage your RHEL infrastructure, you can provide a pre-generated activation key from that server. In this case:

          • Provide the URL to the Red Hat Satellite RPM for installation of the CA certificate that belongs to that server.

          • Configure squid-proxy on the management cluster to allow access to your Satellite server. For details, see Configure squid-proxy.

          For example:

          spec:
            activationKey:
              value: <activation key>
            orgID: "<organization ID>"
            rpmUrl: <rpm url>
          

          Caution

          For RHEL, verify mirrors configuration for your activation key. For more details, see RHEL 8 mirrors configuration.

        Warning

        Provide only one set of parameters. Mixing the parameters from different activation methods will cause deployment failure.

        Warning

        The kubectl apply command automatically saves the applied data as plain text into the kubectl.kubernetes.io/last-applied-configuration annotation of the corresponding object. This may result in revealing sensitive data in this annotation when creating or modifying the object.

        Therefore, do not use kubectl apply on this object. Use kubectl create, kubectl patch, or kubectl edit instead.

        If you used kubectl apply on this object, you can remove the kubectl.kubernetes.io/last-applied-configuration annotation from the object using kubectl edit.

      4. Skip this step if you already have a custom image with a vSphere VM template to use for bootstrap.

        In templates/vsphere/vspherevmtemplate.yaml.template, set the following mandatory parameters:

        spec:
          packerImageOSName: SET_OS_NAME
          packerImageOSVersion: SET_OS_VERSION
          packerISOImage: SET_ISO_IMAGE
          vsphereCredentialsName: default/cloud-config
          vsphereClusterName: SET_VSPHERE_CLUSTER_NAME
          vsphereNetwork: SET_VSPHERE_NETWORK_PATH
          vsphereDatastore: SET_VSPHERE_DATASTORE_PATH
          vsphereFolder: SET_VSPHERE_FOLDER_PATH
          vsphereResourcePool: SET_VSPHERE_RESOURCE_POOL_PATH
        

        For the parameters description, refer to VsphereVMTemplate configuration. You can also configure optional parameters if required.

        Caution

        For the vsphereCredentialsName and proxyName fields, use names of the corresponding objects previously created using this procedure.

        For the rhelLicenseName field, make sure to create the corresponding RHEL license before proceeding to the next step.

    3. Configure StackLight. For parameters description, see StackLight configuration parameters.

    4. Optional. Configure additional cluster settings as described in Configure optional cluster settings.

  12. Apply configuration for machines using machines.yaml.template.

    Configuration of machines.yaml.template
    1. Add the following mandatory machine labels:

      labels:
        kaas.mirantis.com/provider: <providerName>
        cluster.sigs.k8s.io/cluster-name: <clusterName>
        kaas.mirantis.com/region: <regionName>
        cluster.sigs.k8s.io/control-plane: "true"
      

      Note

      The kaas.mirantis.com/region label is removed from all Container Cloud objects in 2.26.0 (Cluster releases 17.1.0 and 16.1.0). Therefore, do not add this label starting from these releases. On existing clusters updated to these releases, or if added manually, this label is ignored by Container Cloud.

    2. Configure the provider-specific settings:

      Bare metal

      Inspect the machines.yaml.template and adjust spec and labels of each entry according to your deployment. Adjust spec.providerSpec.value.hostSelector values to match BareMetalHost corresponding to each machine. For details, see API Reference: Bare metal Machine spec.

      OpenStack
      1. In templates/machines.yaml.template, modify the spec:providerSpec:value section for 3 control plane nodes marked with the cluster.sigs.k8s.io/control-plane label by substituting the flavor and image parameters with the corresponding values of the control plane nodes in the related OpenStack cluster. For example:

        spec: &cp_spec
          providerSpec:
            value:
              apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
              kind: "OpenstackMachineProviderSpec"
              flavor: kaas.minimal
              image: bionic-server-cloudimg-amd64-20190612
        

        Note

        The flavor parameter value provided in the example above is cloud-specific and must meet the Container Cloud requirements.

      2. Optional. Available as TechPreview. To boot cluster machines from a block storage volume, define the following parameter in the spec:providerSpec section of templates/machines.yaml.template:

        bootFromVolume:
          enabled: true
          volumeSize: 120
        

        Note

        The minimal storage requirement is 120 GB per node. For details, see Requirements for an OpenStack-based cluster.

        To boot the Bastion node from a volume, add the same parameter to templates/cluster.yaml.template in the spec:providerSpec section for Bastion. The default volume size of 80 GB is sufficient for the Bastion node.
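        A hedged sketch of the corresponding Bastion fragment in templates/cluster.yaml.template; the exact nesting of the Bastion section may differ in your template version:

        ```yaml
        # Hypothetical cluster.yaml.template fragment: boot the Bastion
        # node from an 80 GB block storage volume.
        spec:
          providerSpec:
            value:
              bastion:
                bootFromVolume:
                  enabled: true
                  volumeSize: 80
        ```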

      Also, modify other parameters as required.

      vSphere

      In templates/vsphere/machines.yaml.template, define the following parameters:

      • rhelLicense

        RHEL license name defined in rhellicenses.yaml.template, defaults to kaas-mgmt-rhel-license. Remove or comment out this parameter for Ubuntu deployments.

      • diskGiB

        Disk size in GiB for machines that must match the disk size of the VM template. You can leave this parameter commented to use the disk size of the VM template. The minimum requirement is 120 GiB.

      • template

        Path to the VM template prepared in the previous step.

      Sample template:

      spec:
        providerSpec:
          value:
            apiVersion: vsphere.cluster.k8s.io/v1alpha1
            kind: VsphereMachineProviderSpec
            rhelLicense: <rhelLicenseName>
            numCPUs: 8
            memoryMiB: 32768
            # diskGiB: 120
            template: <vSphereVMTemplatePath>
      

      Also, modify other parameters if required.

  13. For the bare metal provider, monitor the inspection of the bare metal hosts and wait until all hosts are in the available state:

    kubectl get bmh -o go-template='{{- range .items -}} {{.status.provisioning.state}}{{"\n"}} {{- end -}}'
    

    Example of system response:

    available
    available
    available
    
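    The wait in this step can be scripted. The helper below is a sketch: all_available is a hypothetical name, and the polling loop shown in the comment reuses the kubectl command above.

    ```shell
    # all_available: read provisioning states, one state per line, from
    # stdin and succeed only when every line is exactly "available".
    all_available() {
      ! grep -qv '^available$'
    }

    # Hypothetical polling loop (requires kubectl access to the bootstrap cluster):
    # until kubectl get bmh -o go-template='{{- range .items -}} {{.status.provisioning.state}}{{"\n"}} {{- end -}}' \
    #       | all_available; do
    #   sleep 30
    # done
    ```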
  14. Monitor the BootstrapRegion object status and wait until it is ready.

    kubectl get bootstrapregions -o go-template='{{(index .items 0).status.ready}}{{"\n"}}'
    

    To obtain more granular status details, monitor status.conditions:

    kubectl get bootstrapregions -o go-template='{{(index .items 0).status.conditions}}{{"\n"}}'
    

    For a more convenient system response, consider using dedicated tools such as jq or yq and adjust the -o flag to output in json or yaml format accordingly.

    Note

    For the bare metal provider, before Container Cloud 2.26.0, the BareMetalObjectReferences condition is not mandatory and may remain in the not ready state with no effect on the BootstrapRegion object. Since Container Cloud 2.26.0, this condition is mandatory.

  15. Change to the kaas-bootstrap/ directory.

  16. Approve the BootstrapRegion object to start the cluster deployment. To approve all BootstrapRegion objects at once:

    ./container-cloud bootstrap approve all
    

    Alternatively, to approve a specific object:

    ./container-cloud bootstrap approve <bootstrapRegionName>
    

    Caution

    Once you approve the BootstrapRegion object, no cluster or machine modification is allowed.

    Warning

    For the bare metal provider, do not manually restart or power off any of the bare metal hosts during the bootstrap process.

  17. Monitor the deployment progress. For deployment stages description, see Overview of the deployment workflow.

  18. Verify that network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:

    • 10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.

    • 10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.

    Verification of Swarm and MCR network addresses

    To verify Swarm and MCR network addresses, run on any master node:

    docker info
    

    Example of system response:

    Server:
     ...
     Swarm:
      ...
      Default Address Pool: 10.0.0.0/16
      SubnetSize: 24
      ...
     Default Address Pools:
       Base: 10.99.0.0/16, Size: 20
     ...
    

    Typically, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Three MCR networks are also created by default and occupy the 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20 address blocks.

    To verify the actual state of the networks and the addresses in use, run:

    docker network ls
    docker network inspect <networkName>
    
  19. Optional for the bare metal provider. If you plan to use multiple L2 segments for provisioning of managed cluster nodes, consider the requirements specified in Configure multiple DHCP ranges using Subnet resources.