Bootstrap a management cluster

Caution

This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For details about the Mirantis Technology Preview support scope, see the Preface section of this guide.

Note

In scope of Technology Preview support for the VMWare vSphere cloud provider, StackLight deployed on a management cluster has limitations related to alerts and Grafana dashboards. For details, see StackLight support for VMWare vSphere.

After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your VMWare vSphere-based Mirantis Container Cloud management cluster.

To bootstrap a vSphere-based management cluster:

  1. Log in to the bootstrap node running Ubuntu 18.04 that is configured as described in Prerequisites.

  2. Download and run the Container Cloud bootstrap script:

    wget https://binary.mirantis.com/releases/get_container_cloud.sh
    chmod 0755 get_container_cloud.sh
    ./get_container_cloud.sh
    
  3. Change the directory to the kaas-bootstrap folder created by the get_container_cloud.sh script.

  4. Obtain your license file that will be required during the bootstrap. See step 3 in Getting Started with Mirantis Container Cloud.

  5. Save the license file as mirantis.lic under the kaas-bootstrap directory.
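
    For example, assuming that the current directory is kaas-bootstrap (see step 3) and that the license file was downloaded to the home directory of the current user (a hypothetical path):

    # Hypothetical source path; adjust it to where you saved the downloaded license.
    cp ~/mirantis.lic ./mirantis.lic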

  6. Prepare your RHEL license and deployment templates:

    1. Fill out templates/vsphere/rhellicenses.yaml.template using one of the following sets of parameters for the RHEL machines subscription:

      • The user name and password of your Red Hat Customer Portal account associated with your RHEL license for Virtual Datacenters.

        Optionally, provide the subscription allocation pools to use for the RHEL subscription activation. If not needed, remove the poolIDs field so that subscription-manager automatically selects the licenses for the machines.

        For example:

        spec:
          username: <username>
          password:
            value: <password>
          poolIDs:
          - <pool1>
          - <pool2>
        
      • Available since 2.6.0 The activation key and organization ID associated with your Red Hat account that has the RHEL license for Virtual Datacenters. The organization administrator can create the activation key on the Red Hat Customer Portal.

        If you use the Red Hat Satellite server to manage your RHEL infrastructure, you can provide a pre-generated activation key from that server. In this case, also provide the URL to the Red Hat Satellite RPM for installing the CA certificate that belongs to that server.

        For example:

        spec:
          activationKey:
            value: <activation key>
          orgID: "<organization ID>"
          rpmUrl: <rpm url>
        

      Caution

      Provide only one set of parameters. Mixing parameters from different activation methods causes a deployment failure.

    2. Modify templates/vsphere/vsphere-config.yaml.template:

      vSphere configuration data

      SET_VSPHERE_SERVER
        IP address or FQDN of the vCenter Server.

      SET_VSPHERE_SERVER_PORT
        Port of the vCenter Server. For example, port: "8443". Leave empty to use 443 by default.

      SET_VSPHERE_DATACENTER
        vSphere data center name.

      SET_VSPHERE_SERVER_INSECURE
        Flag that controls validation of the vSphere Server certificate. Must be true or false.

      SET_VSPHERE_CAPI_PROVIDER_USERNAME
        vSphere Cluster API provider user name. For details, see Prepare the VMWare deployment user setup and permissions.

      SET_VSPHERE_CAPI_PROVIDER_PASSWORD
        vSphere Cluster API provider user password.

      SET_VSPHERE_CLOUD_PROVIDER_USERNAME
        vSphere Cloud Provider deployment user name. For details, see Prepare the VMWare deployment user setup and permissions.

      SET_VSPHERE_CLOUD_PROVIDER_PASSWORD
        vSphere Cloud Provider deployment user password.
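
      For example, assuming that the SET_* strings above appear verbatim as placeholders in the template, they can be substituted in one pass. The server address, data center name, and user names below are illustrative values only:

      # Sketch only: adjust every value to your environment before running.
      # The longer SET_VSPHERE_SERVER_* placeholders are substituted before the
      # shorter SET_VSPHERE_SERVER pattern so that it does not clobber them.
      # SET_VSPHERE_SERVER_PORT can also be left empty to use the default 443.
      sed -i \
        -e 's/SET_VSPHERE_SERVER_PORT/443/' \
        -e 's/SET_VSPHERE_SERVER_INSECURE/false/' \
        -e 's/SET_VSPHERE_SERVER/vcenter.example.com/' \
        -e 's/SET_VSPHERE_DATACENTER/datacenter1/' \
        -e 's/SET_VSPHERE_CAPI_PROVIDER_USERNAME/capi-user@vsphere.local/' \
        -e 's/SET_VSPHERE_CAPI_PROVIDER_PASSWORD/<capi-user-password>/' \
        -e 's/SET_VSPHERE_CLOUD_PROVIDER_USERNAME/cloud-user@vsphere.local/' \
        -e 's/SET_VSPHERE_CLOUD_PROVIDER_PASSWORD/<cloud-user-password>/' \
        templates/vsphere/vsphere-config.yaml.template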

    3. Modify the templates/vsphere/cluster.yaml.template parameters to fit your deployment. For example, add the corresponding values for cidrBlocks in the spec::clusterNetwork::services section.

      Required parameters:

      vSphere configuration data

      SET_LB_HOST (Available since 2.6.0)
        IP address from the provided vSphere network for the load balancer (Keepalived).

      SET_VSPHERE_DATASTORE
        Name of the vSphere datastore. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider.

      SET_VSPHERE_MACHINES_FOLDER
        Path to a folder where the cluster machines metadata will be stored.

      SET_VSPHERE_NETWORK_PATH
        Path to a network for cluster machines.

      SET_VSPHERE_RESOURCE_POOL_PATH
        Path to a resource pool in which VMs will be created.

      Note

      The passwordSalt and passwordHash values for the IAM roles are automatically re-generated during the IAM configuration.
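
      For example, a minimal definition of the services CIDR in the spec::clusterNetwork::services section might look as follows. The 10.96.0.0/16 block is an illustrative value only, and the required SET_* parameters above can be substituted in the same way as in the previous step:

      spec:
        clusterNetwork:
          services:
            cidrBlocks:
            - 10.96.0.0/16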

    4. Starting from Container Cloud 2.6.0, if a vSphere network has no DHCP server:

      1. In templates/vsphere/cluster.yaml.template, provide the following additional parameters for a proper network setup on machines using embedded IP address management (IPAM):

        vSphere configuration data

        ipamEnabled
          Enables IPAM. Set to true for networks without DHCP.

        SET_VSPHERE_NETWORK_CIDR
          CIDR of the provided vSphere network. For example, 10.20.0.0/16.

        SET_VSPHERE_NETWORK_GATEWAY
          Gateway of the provided vSphere network.

        SET_VSPHERE_CIDR_INCLUDE_RANGES
          Optional. IP range for the cluster machines. Specify a range within the provided CIDR. For example, 10.20.0.100-10.20.0.200.

        SET_VSPHERE_CIDR_EXCLUDE_RANGES
          Optional. IP ranges to be excluded from assignment to the cluster machines. The MetalLB range and SET_LB_HOST must not intersect with the addresses used for IPAM. For example, 10.20.0.150-10.20.0.170.

        SET_VSPHERE_NETWORK_NAMESERVERS
          List of nameservers for the provided vSphere network.
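
        For example, assuming that these SET_* strings appear verbatim in templates/vsphere/cluster.yaml.template, the placeholders for a 10.20.0.0/16 network can be filled in as follows. The addresses are illustrative, the nameserver is a hypothetical value, and ipamEnabled still has to be set to true manually in the file:

        # Sketch only: reuses the example ranges from the table above.
        # Check the template for the expected nameservers format (single value or list).
        sed -i \
          -e 's|SET_VSPHERE_NETWORK_CIDR|10.20.0.0/16|' \
          -e 's|SET_VSPHERE_NETWORK_GATEWAY|10.20.0.1|' \
          -e 's|SET_VSPHERE_CIDR_INCLUDE_RANGES|10.20.0.100-10.20.0.200|' \
          -e 's|SET_VSPHERE_CIDR_EXCLUDE_RANGES|10.20.0.150-10.20.0.170|' \
          -e 's|SET_VSPHERE_NETWORK_NAMESERVERS|10.20.0.2|' \
          templates/vsphere/cluster.yaml.template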

      2. In kaas-bootstrap/releases/kaas/2.6.0.yaml, change the release-controller version from 1.18.1 to 1.18.3:

        - name: release-controller
          version: 1.18.3
          chart: kaas-release/release-controller
          namespace: kaas
          values:
            image:
              tag: 1.18.3
        

        Caution

        The step above applies only to Container Cloud 2.6.0 deployments.

  7. In bootstrap.env, add the following environment variables:

    Note

    For the Keycloak and IAM service variables, assign IP addresses from the end of the provided MetalLB range. For example, if the MetalLB range is 10.20.0.30-10.20.0.50, select 10.20.0.48 and 10.20.0.49 as the IP addresses for Keycloak and IAM.

    vSphere environment data

    KAAS_VSPHERE_ENABLED
      Set to true. Enables the vSphere provider deployment in Container Cloud.

    KEYCLOAK_FLOATING_IP
      IP address for Keycloak from the end of the MetalLB range.

    IAM_FLOATING_IP
      IP address for IAM from the end of the MetalLB range.
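
    For example, with the MetalLB range from the note above, the resulting lines in bootstrap.env might look as follows. The IP addresses are illustrative and must come from your own MetalLB range:

    # Appended to bootstrap.env; adjust the addresses to your MetalLB range.
    export KAAS_VSPHERE_ENABLED=true
    export KEYCLOAK_FLOATING_IP=10.20.0.48
    export IAM_FLOATING_IP=10.20.0.49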

  8. Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.

    In templates/vsphere/cluster.yaml.template, add the ntp:servers section with the list of required server names:

    spec:
      ...
      providerSpec:
        value:
          kaas:
          ...
            regional:
              - helmReleases:
                - name: vsphere-provider
                  values:
                    config:
                      lcm:
                        ...
                        ntp:
                          servers:
                          - 0.pool.ntp.org
                          ...
                provider: vsphere
                ...
    
  9. Prepare the OVF template as described in Prepare the OVF template.

  10. In templates/vsphere/machines.yaml.template:

    • Define SSH_USER_NAME. The default SSH user name is cloud-user.

    • Define SET_VSPHERE_TEMPLATE_PATH as the path to the OVF template prepared in the previous step.

    • Modify other parameters as required.

    spec:
      providerSpec:
        value:
          apiVersion: vsphere.cluster.k8s.io/v1alpha1
          kind: VsphereMachineProviderSpec
          sshUserName: <SSH_USER_NAME>
          rhelLicense: kaas-mgmt-rhel-license
          network:
            devices:
            - dhcp4: true
              dhcp6: false
          template: <SET_VSPHERE_TEMPLATE_PATH>
    
  11. Available since 2.5.0 Optional. If you require all Internet access to go through a proxy server, add the following environment variables to bootstrap.env to bootstrap the management and regional cluster using the proxy:

    • HTTP_PROXY

    • HTTPS_PROXY

    • NO_PROXY

    Example snippet:

    export HTTP_PROXY=http://proxy.example.com:3128
    export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
    export NO_PROXY=172.18.10.0,registry.internal.lan
    

    The following variable formats are accepted:

    Proxy configuration data

    HTTP_PROXY, HTTPS_PROXY
      • http://proxy.example.com:port - for anonymous access
      • http://user:password@proxy.example.com:port - for restricted access

    NO_PROXY
      Comma-separated list of IP addresses or domain names

    For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.

  12. Optional. Skip this step to use the default password (password) in the Container Cloud web UI.

    Caution

    For security reasons, Mirantis strongly recommends changing the default password on publicly accessible Container Cloud deployments.

    Configure the IAM parameters:

    1. Create hashed passwords for every IAM role: reader, writer, and operator (for bare metal deployments):

      ./bin/hash-generate -i 27500
      

      The hash-generate utility prompts you to enter a password and outputs the parameters required for the next step. Save the password that you enter in a secure location. This password will be used to access the Container Cloud web UI with a specific IAM role.

      Example of system response:

      passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
      passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
      passwordHashAlgorithm: pbkdf2-sha256
      passwordHashIterations: 27500
      

      Run the tool once for each IAM role to generate a separate hashed password for every role.

    2. Open templates/vsphere/cluster.yaml.template for editing.

    3. In the initUsers section, add the following parameters for each IAM role that you generated in the previous step:

      • passwordSalt - base64-encoded randomly generated sequence of bytes.

      • passwordHash - base64-encoded password hash generated using passwordHashAlgorithm with passwordHashIterations. Supported algorithms include pbkdf2-sha256 and pbkdf2-sha512.
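
      For example, using the sample hash-generate output above, the generated values for one role are pasted into that role's entry. This is a hypothetical fragment, so keep the exact field layout that already exists under initUsers in the template:

      # Hypothetical fragment: repeat for the reader, writer, and operator roles,
      # each with its own generated salt and hash.
      passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
      passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=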

  13. Optional. Configure an external identity provider for IAM.

  14. Run the bootstrap script:

    ./bootstrap.sh all
    
  15. When the bootstrap is complete, collect and save the following management cluster details in a secure location:

    • The kubeconfig file located in the same directory as the bootstrap script. This file contains the admin credentials for the management cluster. See the verification example at the end of this step.

    • The private SSH key ssh_key, located in the same directory as the bootstrap script, for access to the management cluster nodes.

    • The URL and credentials for the Container Cloud web UI. The system outputs these details when the bootstrap completes.

    • The StackLight endpoints. For details, see Operations Guide: Access StackLight web UIs.

    • The Keycloak URL that the system outputs when the bootstrap completes. The admin password for Keycloak is located in kaas-bootstrap/passwords.yml along with other IAM passwords.

    Note

    When the bootstrap is complete, the bootstrap cluster resources are freed up.
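
    For example, to verify access to the new management cluster with the collected kubeconfig file (kubectl is assumed to be installed on the bootstrap node):

    # Run from the kaas-bootstrap directory that contains the kubeconfig file.
    export KUBECONFIG=$(pwd)/kubeconfig
    kubectl get nodes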

Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create a VMWare vSphere-based managed cluster.