Example of a complete L2 templates configuration for cluster creation

The following example contains all objects required for an advanced network and host configuration of a baremetal-based managed cluster.

The procedure below contains:

  • Various .yaml objects to be applied using the management cluster kubeconfig

  • Useful comments inside the .yaml example files

  • Example hardware and configuration data, such as network, disk, and authentication settings, that you must update to fit your cluster configuration

  • Example templates, such as L2Template and BareMetalHostProfile, that illustrate how to implement a specific configuration

Caution

The example configuration described below is not production ready and is provided for illustration purposes only.

For illustration purposes, all files provided in this example procedure are named after the Kubernetes object types that they contain:

managed-ns_BareMetalHost_cz7700-managed-cluster-control-noefi.yaml
managed-ns_BareMetalHost_cz7741-managed-cluster-control-noefi.yaml
managed-ns_BareMetalHost_cz7743-managed-cluster-control-noefi.yaml
managed-ns_BareMetalHost_cz812-managed-cluster-storage-worker-noefi.yaml
managed-ns_BareMetalHost_cz813-managed-cluster-storage-worker-noefi.yaml
managed-ns_BareMetalHost_cz814-managed-cluster-storage-worker-noefi.yaml
managed-ns_BareMetalHost_cz815-managed-cluster-worker-noefi.yaml
managed-ns_BareMetalHostProfile_bmhp-cluster-default.yaml
managed-ns_BareMetalHostProfile_worker-storage1.yaml
managed-ns_Cluster_managed-cluster.yaml
managed-ns_KaaSCephCluster_ceph-cluster-managed-cluster.yaml
managed-ns_L2Template_bm-1490-template-controls-netplan-cz7700-pxebond.yaml
managed-ns_L2Template_bm-1490-template-controls-netplan.yaml
managed-ns_L2Template_bm-1490-template-workers-netplan.yaml
managed-ns_Machine_cz7700-managed-cluster-control-noefi-.yaml
managed-ns_Machine_cz7741-managed-cluster-control-noefi-.yaml
managed-ns_Machine_cz7743-managed-cluster-control-noefi-.yaml
managed-ns_Machine_cz812-managed-cluster-storage-worker-noefi-.yaml
managed-ns_Machine_cz813-managed-cluster-storage-worker-noefi-.yaml
managed-ns_Machine_cz814-managed-cluster-storage-worker-noefi-.yaml
managed-ns_Machine_cz815-managed-cluster-worker-noefi-.yaml
managed-ns_PublicKey_managed-cluster-key.yaml
managed-ns_Secret_cz7700-cred.yaml
managed-ns_Secret_cz7741-cred.yaml
managed-ns_Secret_cz7743-cred.yaml
managed-ns_Secret_cz812-cred.yaml
managed-ns_Secret_cz813-cred.yaml
managed-ns_Secret_cz814-cred.yaml
managed-ns_Secret_cz815-cred.yaml
managed-ns_Subnet_lcm-nw.yaml
managed-ns_Subnet_metallb-public-for-managed.yaml
managed-ns_Subnet_metallb-public-for-extiface.yaml
managed-ns_Subnet_storage-backend.yaml
managed-ns_Subnet_storage-frontend.yaml
default_Namespace_managed-ns.yaml

Caution

The procedure below assumes that you apply each new .yaml file using kubectl create -f <file_name.yaml>.

To create an example configuration for managed cluster creation:

  1. Verify that you have configured the following items:

    1. All bmh nodes for PXE boot as described in Add a bare metal host using CLI

    2. All physical NICs of the bmh nodes

    3. All required physical subnets and routing

  2. Create a .yaml file with the Namespace object:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: managed-ns
    
  3. Create the required number of .yaml files with the Secret objects, one per bmh node, each with a unique name and the authentication data of that node. The following example contains one Secret object:

    apiVersion: v1
    data:
      password: YWRtaW4=
      username: ZW5naW5lZXI=
    kind: Secret
    metadata:
      labels:
        kaas.mirantis.com/credentials: 'true'
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: cz815-cred
      namespace: managed-ns
    
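    The password and username values in data are base64-encoded. To produce them for the engineer/admin credentials shown above:

    echo -n engineer | base64   # ZW5naW5lZXI=
    echo -n admin | base64      # YWRtaW4=
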
  4. Create a set of .yaml files with the BareMetalHost objects that describe the bmh nodes, as in the sketch below:
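
    A minimal sketch of one BareMetalHost object, assuming the metal3.io/v1alpha1 schema used by the baremetal provider. The BMC address, boot MAC address, and host ID label are illustrative placeholders that you must replace with the data of your node:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHost
    metadata:
      labels:
        # illustrative host ID; the Machine objects select hosts by this label
        kaas.mirantis.com/baremetalhost-id: cz815
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: cz815-managed-cluster-worker-noefi
      namespace: managed-ns
    spec:
      bmc:
        # IPMI address of the node (illustrative)
        address: 192.168.1.15
        # Secret object created in the previous step
        credentialsName: cz815-cred
      # MAC address of the PXE NIC (illustrative)
      bootMACAddress: 0c:c4:7a:00:00:15
      bootMode: legacy
      online: true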

  5. Verify that the inspecting phase has started:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get bmh -o wide
    

    Example of system response:

    NAME                                       STATUS STATE CONSUMER BMC           BOOTMODE ONLINE ERROR REGION
    cz7700-managed-cluster-control-noefi       OK     inspecting     192.168.1.12  legacy   true         region-one
    cz7741-managed-cluster-control-noefi       OK     inspecting     192.168.1.76  legacy   true         region-one
    cz7743-managed-cluster-control-noefi       OK     inspecting     192.168.1.78  legacy   true         region-one
    cz812-managed-cluster-storage-worker-noefi OK     inspecting     192.168.1.182 legacy   true         region-one
    

    Wait for inspection to complete. Usually, it takes up to 15 minutes.

  6. Collect the bmh hardware information required to create the L2Template and BareMetalHostProfile objects:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get bmh -o wide
    

    Example of system response:

    NAME                                       STATUS STATE CONSUMER BMC           BOOTMODE ONLINE ERROR REGION
    cz7700-managed-cluster-control-noefi       OK     ready          192.168.1.12  legacy   true         region-one
    cz7741-managed-cluster-control-noefi       OK     ready          192.168.1.76  legacy   true         region-one
    cz7743-managed-cluster-control-noefi       OK     ready          192.168.1.78  legacy   true         region-one
    cz812-managed-cluster-storage-worker-noefi OK     ready          192.168.1.182 legacy   true         region-one
    
    KUBECONFIG=kubeconfig kubectl -n managed-ns get bmh cz7700-managed-cluster-control-noefi -o yaml | less
    

    Example of system response:

    ...
    nics:
    - ip: ""
      mac: 0c:c4:7a:1d:f4:a6
      model: 0x8086 0x10fb
      # discovered interfaces
      name: ens4f0
      pxe: false
      # temporary PXE address discovered from baremetal-mgmt
    - ip: 172.16.170.30
      mac: 0c:c4:7a:34:52:04
      model: 0x8086 0x1521
      name: enp9s0f0
      pxe: true
      # duplicates temporary PXE address discovered from baremetal-mgmt
      # since we have fallback-bond configured on host
    - ip: 172.16.170.33
      mac: 0c:c4:7a:34:52:05
      model: 0x8086 0x1521
      # discovered interfaces
      name: enp9s0f1
      pxe: false
    ...
    storage:
    - by_path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      model: Samsung SSD 850
      name: /dev/sda
      rotational: false
      sizeBytes: 500107862016
    - by_path: /dev/disk/by-path/pci-0000:00:1f.2-ata-2
      model: Samsung SSD 850
      name: /dev/sdb
      rotational: false
      sizeBytes: 500107862016
    ...
    
  7. Create the BareMetalHostProfile objects:
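
    A minimal sketch of the default profile, assuming the BareMetalHostProfile schema of the baremetal provider. It wipes the first disk discovered during inspection (/dev/sda in the example above) and places the root file system on it; all sizes are illustrative:

    apiVersion: metal3.io/v1alpha1
    kind: BareMetalHostProfile
    metadata:
      name: bmhp-cluster-default
      namespace: managed-ns
    spec:
      devices:
      # first disk discovered during inspection (/dev/sda in the example above)
      - device:
          wipe: true
        partitions:
        - name: bios_grub
          partflags:
          - bios_grub
          size: 4Mi
        - name: uefi
          partflags:
          - esp
          size: 200Mi
        - name: config-2
          size: 64Mi
        # size 0 means the rest of the device
        - name: root
          size: 0
      fileSystems:
      - fileSystem: vfat
        partition: config-2
      - fileSystem: ext4
        mountPoint: /
        partition: root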

  8. Create the L2Template objects:
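
    A minimal sketch of a worker template, assuming the ipam.mirantis.com/v1alpha1 schema and the netplan template macros of the baremetal provider ({{nic}}, {{mac}}, {{ip}}, and related functions). It configures the first discovered NIC from the lcm-nw subnet created in the next step:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: L2Template
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: bm-1490-template-workers-netplan
      namespace: managed-ns
    spec:
      # interface naming priority for the {{nic N}} macros
      autoIfMappingPrio:
      - provision
      - enp9
      - ens4
      # subnets that the template may reference by name
      l3Layout:
      - scope: namespace
        subnetName: lcm-nw
      npTemplate: |
        version: 2
        ethernets:
          {{nic 0}}:
            dhcp4: false
            addresses:
              - {{ip "0:lcm-nw"}}
            gateway4: {{gateway_from_subnet "lcm-nw"}}
            nameservers:
              addresses: {{nameservers_from_subnet "lcm-nw"}}
            match:
              macaddress: {{mac 0}}
            set-name: {{nic 0}}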

  9. Create the Subnet objects:
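
    A minimal sketch of the LCM subnet; the CIDR, gateway, address range, and nameserver are illustrative and must match your physical network:

    apiVersion: ipam.mirantis.com/v1alpha1
    kind: Subnet
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: lcm-nw
      namespace: managed-ns
    spec:
      cidr: 172.16.170.0/24
      gateway: 172.16.170.1
      # addresses that IPAM may assign to nodes (illustrative range)
      includeRanges:
      - 172.16.170.100-172.16.170.200
      nameservers:
      - 8.8.8.8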

  10. Create the PublicKey object for SSH access to the managed cluster machines. For details, see Public key resources.
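
    A minimal sketch, assuming an existing SSH key pair; the key material is truncated here:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: PublicKey
    metadata:
      name: managed-cluster-key
      namespace: managed-ns
    spec:
      publicKey: |
        ssh-rsa AAAAB3NzaC1yc2E... user@example.com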

  11. Create the Cluster object. For details, see Cluster resources.
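
    A heavily trimmed sketch, assuming the cluster.k8s.io/v1alpha1 Cluster object with a baremetal providerSpec; the full spec (release, networks, Helm releases, and so on) is described in Cluster resources:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    metadata:
      labels:
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
      name: managed-cluster
      namespace: managed-ns
    spec:
      providerSpec:
        value:
          apiVersion: baremetal.k8s.io/v1alpha1
          kind: BaremetalClusterProviderSpec
          # name of the cluster release to deploy; set to an available release
          release: <cluster-release-name>
          # PublicKey object created earlier
          publicKeys:
          - name: managed-cluster-key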

  12. Create the Machine objects linked to each bmh node. For details, see Machine resources.
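
    A minimal sketch of one control plane Machine, assuming the cluster.k8s.io/v1alpha1 schema; the host ID label value is illustrative and must match the label on the corresponding BareMetalHost object:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Machine
    metadata:
      # generateName produces the suffixed names visible in the CONSUMER column below
      generateName: cz7700-managed-cluster-control-noefi-
      namespace: managed-ns
      labels:
        cluster.sigs.k8s.io/cluster-name: managed-cluster
        cluster.sigs.k8s.io/control-plane: 'true'
        kaas.mirantis.com/provider: baremetal
        kaas.mirantis.com/region: region-one
    spec:
      providerSpec:
        value:
          apiVersion: baremetal.k8s.io/v1alpha1
          kind: BareMetalMachineProviderSpec
          # bind the Machine to a specific bmh node (illustrative label value)
          hostSelector:
            matchLabels:
              kaas.mirantis.com/baremetalhost-id: cz7700
          # profile and L2 template created in the previous steps
          bareMetalHostProfile:
            name: bmhp-cluster-default
            namespace: managed-ns
          l2TemplateSelector:
            name: bm-1490-template-controls-netplan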

  13. Verify that the bmh nodes are in the provisioning state:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get bmh -o wide
    

    Example of system response:

    NAME                                  STATUS STATE          CONSUMER                                    BMC          BOOTMODE   ONLINE  ERROR REGION
    cz7700-managed-cluster-control-noefi  OK     provisioning   cz7700-managed-cluster-control-noefi-8bkqw  192.168.1.12  legacy     true          region-one
    cz7741-managed-cluster-control-noefi  OK     provisioning   cz7741-managed-cluster-control-noefi-42tp2  192.168.1.76  legacy     true          region-one
    cz7743-managed-cluster-control-noefi  OK     provisioning   cz7743-managed-cluster-control-noefi-8cwpw  192.168.1.78  legacy     true          region-one
    ...
    

    Wait until all bmh nodes are in the provisioned state.

  14. Verify that the lcmmachine phase has started:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get lcmmachines -o wide
    

    Example of system response:

    NAME                                       CLUSTERNAME       TYPE      STATE   INTERNALIP     HOSTNAME                                         AGENTVERSION
    cz7700-managed-cluster-control-noefi-8bkqw managed-cluster   control   Deploy  172.16.170.153 kaas-node-803721b4-227c-4675-acc5-15ff9d3cfde2   v0.2.0-349-g4870b7f5
    cz7741-managed-cluster-control-noefi-42tp2 managed-cluster   control   Prepare 172.16.170.152 kaas-node-6b8f0d51-4c5e-43c5-ac53-a95988b1a526   v0.2.0-349-g4870b7f5
    cz7743-managed-cluster-control-noefi-8cwpw managed-cluster   control   Prepare 172.16.170.151 kaas-node-e9b7447d-5010-439b-8c95-3598518f8e0a   v0.2.0-349-g4870b7f5
    ...
    
  15. Verify that the lcmmachine phase is complete and the Kubernetes cluster is created:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get lcmmachines -o wide
    

    Example of system response:

    NAME                                       CLUSTERNAME       TYPE     STATE  INTERNALIP      HOSTNAME                                        AGENTVERSION
    cz7700-managed-cluster-control-noefi-8bkqw  managed-cluster  control  Ready  172.16.170.153  kaas-node-803721b4-227c-4675-acc5-15ff9d3cfde2  v0.2.0-349-g4870b7f5
    cz7741-managed-cluster-control-noefi-42tp2  managed-cluster  control  Ready  172.16.170.152  kaas-node-6b8f0d51-4c5e-43c5-ac53-a95988b1a526  v0.2.0-349-g4870b7f5
    cz7743-managed-cluster-control-noefi-8cwpw  managed-cluster  control  Ready  172.16.170.151  kaas-node-e9b7447d-5010-439b-8c95-3598518f8e0a  v0.2.0-349-g4870b7f5
    ...
    
  16. Create the KaaSCephCluster object:
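
    A minimal sketch with one storage node, assuming the kaas.mirantis.com/v1alpha1 KaaSCephCluster schema; the node keys are Machine names, and the roles and device are illustrative:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: KaaSCephCluster
    metadata:
      name: ceph-cluster-managed-cluster
      namespace: managed-ns
    spec:
      # managed cluster that this Ceph cluster is deployed to
      k8sCluster:
        name: managed-cluster
        namespace: managed-ns
      cephClusterSpec:
        nodes:
          # key is the Machine name of a storage node (illustrative)
          cz812-managed-cluster-storage-worker-noefi:
            roles:
            - mon
            - mgr
            storageDevices:
            # /dev/sdb discovered during inspection; /dev/sda holds the system
            - name: sdb
              config:
                deviceClass: ssd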

  17. Obtain kubeconfig of the newly created managed cluster:

    KUBECONFIG=kubeconfig kubectl -n managed-ns get secrets managed-cluster-kubeconfig -o jsonpath='{.data.admin\.conf}' | base64 -d | tee managed.kubeconfig
    
  18. Verify the status of the Ceph cluster in your managed cluster:

    KUBECONFIG=managed.kubeconfig kubectl -n rook-ceph exec -it $(KUBECONFIG=managed.kubeconfig kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph -s
    

    Example of system response:

    cluster:
      id:     e75c6abd-c5d5-4ae8-af17-4711354ff8ef
      health: HEALTH_OK
    services:
      mon: 3 daemons, quorum a,b,c (age 55m)
      mgr: a(active, since 55m)
      osd: 3 osds: 3 up (since 54m), 3 in (since 54m)
    data:
      pools:   1 pools, 32 pgs
      objects: 273 objects, 555 MiB
      usage:   4.0 GiB used, 1.6 TiB / 1.6 TiB avail
      pgs:     32 active+clean
    io:
      client:   51 KiB/s wr, 0 op/s rd, 4 op/s wr