Example of a complete template configuration for cluster creation
The following example contains all required objects of an advanced network and host configuration for a MOSK cluster.
The procedure below contains:

- Various .yaml objects to be applied with a MOSK cluster kubeconfig
- Useful comments inside the .yaml example files
- Example hardware and configuration data, such as network, disk, and auth, that must be updated accordingly to fit your cluster configuration
- Example templates, such as l2template and baremetalhostprofile, that illustrate how to implement a specific configuration
Caution
The example configuration described below is not production-ready and is provided for illustration purposes only.
For illustration purposes, all files provided in this example procedure are named after their Kubernetes object types:
mosk-ns_BareMetalHostInventory_cz7700-mosk-cluster-control-noefi.yaml
mosk-ns_BareMetalHostInventory_cz7741-mosk-cluster-control-noefi.yaml
mosk-ns_BareMetalHostInventory_cz7743-mosk-cluster-control-noefi.yaml
mosk-ns_BareMetalHostInventory_cz812-mosk-cluster-storage-worker-noefi.yaml
mosk-ns_BareMetalHostInventory_cz813-mosk-cluster-storage-worker-noefi.yaml
mosk-ns_BareMetalHostInventory_cz814-mosk-cluster-storage-worker-noefi.yaml
mosk-ns_BareMetalHostInventory_cz815-mosk-cluster-worker-noefi.yaml
mosk-ns_BareMetalHostProfile_bmhp-cluster-default.yaml
mosk-ns_BareMetalHostProfile_worker-storage1.yaml
mosk-ns_Cluster_mosk-cluster.yaml
mosk-ns_KaaSCephCluster_ceph-cluster-mosk-cluster.yaml
mosk-ns_L2Template_bm-1490-template-controls-netplan-cz7700-pxebond.yaml
mosk-ns_L2Template_bm-1490-template-controls-netplan.yaml
mosk-ns_L2Template_bm-1490-template-workers-netplan.yaml
mosk-ns_Machine_cz7700-mosk-cluster-control-noefi-.yaml
mosk-ns_Machine_cz7741-mosk-cluster-control-noefi-.yaml
mosk-ns_Machine_cz7743-mosk-cluster-control-noefi-.yaml
mosk-ns_Machine_cz812-mosk-cluster-storage-worker-noefi-.yaml
mosk-ns_Machine_cz813-mosk-cluster-storage-worker-noefi-.yaml
mosk-ns_Machine_cz814-mosk-cluster-storage-worker-noefi-.yaml
mosk-ns_Machine_cz815-mosk-cluster-worker-noefi-.yaml
mosk-ns_PublicKey_mosk-cluster-key.yaml
mosk-ns_cz7700-cred.yaml
mosk-ns_cz7741-cred.yaml
mosk-ns_cz7743-cred.yaml
mosk-ns_cz812-cred.yaml
mosk-ns_cz813-cred.yaml
mosk-ns_cz814-cred.yaml
mosk-ns_cz815-cred.yaml
mosk-ns_Subnet_lcm-nw.yaml
mosk-ns_Subnet_api-lb.yaml
mosk-ns_Subnet_metallb-public-for-extiface.yaml
mosk-ns_Subnet_storage-backend.yaml
mosk-ns_Subnet_storage-frontend.yaml
mosk-ns_MetalLBConfig-lb-mosk.yaml
default_Namespace_mosk-ns.yaml
Caution
The procedure below presumes that you apply each new .yaml file using kubectl create -f <file_name.yaml>.
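Applying the files one by one is error-prone because the namespace must exist before any namespaced object. A minimal sketch of a wrapper that applies the manifests in dependency order; the file list below is abbreviated to one example per object type, and the KUBECTL variable is an override hook introduced here for dry runs:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Order matters: the Namespace first, then credentials, host inventories,
# profiles, L2 templates, subnets, the MetalLB config, keys, and finally
# the Cluster and Machine objects. Extend the list with the remaining files.
MANIFESTS=(
  default_Namespace_mosk-ns.yaml
  mosk-ns_cz815-cred.yaml
  mosk-ns_BareMetalHostInventory_cz815-mosk-cluster-worker-noefi.yaml
  mosk-ns_BareMetalHostProfile_worker-storage1.yaml
  mosk-ns_L2Template_bm-1490-template-workers-netplan.yaml
  mosk-ns_Subnet_lcm-nw.yaml
  mosk-ns_MetalLBConfig-lb-mosk.yaml
  mosk-ns_PublicKey_mosk-cluster-key.yaml
  mosk-ns_Cluster_mosk-cluster.yaml
  mosk-ns_Machine_cz815-mosk-cluster-worker-noefi-.yaml
)

# Apply every manifest in order; set KUBECTL=echo to preview without applying.
apply_all() {
  local f
  for f in "${MANIFESTS[@]}"; do
    ${KUBECTL:-kubectl} create -f "$f"
  done
}
```

Run with the management cluster kubeconfig exported, for example: KUBECONFIG=kubeconfig apply_all.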
To create an example configuration for MOSK cluster creation:
Verify that you have configured the following items:

- All bmh nodes for PXE boot as described in Add a bare metal host using CLI
- All physical NICs of the bmh nodes
- All required physical subnets and routing
Create a .yaml file with the Namespace object:

default_Namespace_mosk-ns.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: mosk-ns
Create the required number of .yaml files with the BareMetalHostCredential objects, one per bmh node, each with a unique name and its authentication data. The following example contains one BareMetalHostCredential object:

mosk-ns_cz815-cred.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostCredential
metadata:
  name: cz815-cred
  namespace: mosk-ns
spec:
  username: admin
  password:
    value: supersecret
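Because the credential objects differ only in the host name, they can be generated from a host list. A sketch with a hypothetical gen_creds helper; the admin/supersecret values are the same placeholders as in the example above and must be replaced with real BMC credentials:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Write one BareMetalHostCredential manifest per host into the given
# directory. Placeholder credentials; substitute your real BMC accounts.
gen_creds() {
  local outdir=$1; shift
  local host
  for host in "$@"; do
    cat > "${outdir}/mosk-ns_${host}-cred.yaml" <<EOF
apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostCredential
metadata:
  name: ${host}-cred
  namespace: mosk-ns
spec:
  username: admin
  password:
    value: supersecret
EOF
  done
}
```

For the hosts in this example: gen_creds . cz7700 cz7741 cz7743 cz812 cz813 cz814 cz815.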
Create a set of files with the bmh nodes configuration:

mosk-ns_BareMetalHostInventory_cz7700-mosk-cluster-control-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    # This label is used to link a Machine object to the exact bmh node
    kaas.mirantis.com/baremetalhost-id: cz7700
    kaas.mirantis.com/provider: baremetal
  name: cz7700-mosk-cluster-control-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.12
    bmhCredentialsName: 'cz7700-cred'
  bootMACAddress: 0c:c4:7a:34:52:04
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz7741-mosk-cluster-control-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz7741
    kaas.mirantis.com/provider: baremetal
  name: cz7741-mosk-cluster-control-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.76
    bmhCredentialsName: 'cz7741-cred'
  bootMACAddress: 0c:c4:7a:34:92:f4
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz7743-mosk-cluster-control-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz7743
    kaas.mirantis.com/provider: baremetal
  name: cz7743-mosk-cluster-control-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.78
    bmhCredentialsName: 'cz7743-cred'
  bootMACAddress: 0c:c4:7a:34:66:fc
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz812-mosk-cluster-storage-worker-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz812
    kaas.mirantis.com/provider: baremetal
  name: cz812-mosk-cluster-storage-worker-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.182
    bmhCredentialsName: 'cz812-cred'
  bootMACAddress: 0c:c4:7a:bc:ff:2e
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz813-mosk-cluster-storage-worker-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz813
    kaas.mirantis.com/provider: baremetal
  name: cz813-mosk-cluster-storage-worker-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.183
    bmhCredentialsName: 'cz813-cred'
  bootMACAddress: 0c:c4:7a:bc:fe:36
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz814-mosk-cluster-storage-worker-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz814
    kaas.mirantis.com/provider: baremetal
  name: cz814-mosk-cluster-storage-worker-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.184
    bmhCredentialsName: 'cz814-cred'
  bootMACAddress: 0c:c4:7a:bc:fb:20
  bootMode: legacy
  online: true
mosk-ns_BareMetalHostInventory_cz815-mosk-cluster-worker-noefi.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: BareMetalHostInventory
metadata:
  annotations:
    inspect.metal3.io/hardwaredetails-storage-sort-term: hctl ASC, wwn ASC, by_id ASC, name ASC
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/baremetalhost-id: cz815
    kaas.mirantis.com/provider: baremetal
  name: cz815-mosk-cluster-worker-noefi
  namespace: mosk-ns
spec:
  bmc:
    address: 192.168.1.185
    bmhCredentialsName: 'cz815-cred'
  bootMACAddress: 0c:c4:7a:bc:fc:3e
  bootMode: legacy
  online: true
Verify that the inspecting phase has started:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get bmh -o wide
Example of system response:
NAME                                      STATUS   STATE        CONSUMER   BMC             BOOTMODE   ONLINE   ERROR
cz7700-mosk-cluster-control-noefi         OK       inspecting              192.168.1.12    legacy     true
cz7741-mosk-cluster-control-noefi         OK       inspecting              192.168.1.76    legacy     true
cz7743-mosk-cluster-control-noefi         OK       inspecting              192.168.1.78    legacy     true
cz812-mosk-cluster-storage-worker-noefi   OK       inspecting              192.168.1.182   legacy     true
Wait for inspection to complete. Usually, it takes up to 15 minutes.
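The wait can be scripted by polling the host state until no host still reports inspecting. A minimal sketch; it assumes the metal3-style .status.provisioning.state field behind the STATE column, and KUBECTL/NAMESPACE are override hooks introduced here for testing:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Print one provisioning state per bmh host.
bmh_states() {
  "${KUBECTL:-kubectl}" -n "${NAMESPACE:-mosk-ns}" get bmh \
    -o jsonpath='{range .items[*]}{.status.provisioning.state}{"\n"}{end}'
}

# Block until no host is still in the inspecting state.
wait_for_inspection() {
  while bmh_states | grep -q '^inspecting$'; do
    echo "inspection in progress, retrying in 30s..."
    sleep 30
  done
  echo "inspection finished on all hosts"
}
```

With KUBECONFIG exported, run wait_for_inspection and proceed once it reports completion.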
Collect the bmh hardware information to create the l2template and bmh objects:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get bmh -o wide
Example of system response:
NAME                                      STATUS   STATE       CONSUMER   BMC             BOOTMODE   ONLINE   ERROR
cz7700-mosk-cluster-control-noefi         OK       available              192.168.1.12    legacy     true
cz7741-mosk-cluster-control-noefi         OK       available              192.168.1.76    legacy     true
cz7743-mosk-cluster-control-noefi         OK       available              192.168.1.78    legacy     true
cz812-mosk-cluster-storage-worker-noefi   OK       available              192.168.1.182   legacy     true
KUBECONFIG=kubeconfig kubectl -n mosk-ns get bmh cz7700-mosk-cluster-control-noefi -o yaml | less
Example of system response:
...
nics:
# discovered interfaces
- ip: ""
  mac: 0c:c4:7a:1d:f4:a6
  model: 0x8086 0x10fb
  name: ens4f0
  pxe: false
# temporary PXE address discovered from baremetal-mgmt
- ip: 172.16.170.30
  mac: 0c:c4:7a:34:52:04
  model: 0x8086 0x1521
  name: enp9s0f0
  pxe: true
# duplicates temporary PXE address discovered from baremetal-mgmt
# since we have fallback-bond configured on host
- ip: 172.16.170.33
  mac: 0c:c4:7a:34:52:05
  model: 0x8086 0x1521
  name: enp9s0f1
  pxe: false
...
storage:
- by_path: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
  model: Samsung SSD 850
  name: /dev/sda
  rotational: false
  sizeBytes: 500107862016
- by_path: /dev/disk/by-path/pci-0000:00:1f.2-ata-2
  model: Samsung SSD 850
  name: /dev/sdb
  rotational: false
  sizeBytes: 500107862016
...
Create bare metal host profiles:
mosk-ns_BareMetalHostProfile_bmhp-cluster-default.yaml

apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    # This label makes this profile the default in the namespace, so
    # machines without an explicitly selected profile will use this template
    kaas.mirantis.com/defaultBMHProfile: 'true'
    kaas.mirantis.com/provider: baremetal
  name: bmhp-cluster-default
  namespace: mosk-ns
spec:
  devices:
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      minSize: 120Gi
      wipe: true
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
      wipe: true
    - name: uefi
      partflags:
      - esp
      size: 200Mi
      wipe: true
    - name: config-2
      size: 64Mi
      wipe: true
    - name: lvm_dummy_part
      size: 1Gi
      wipe: true
    - name: lvm_root_part
      size: 0
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-2
      minSize: 30Gi
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-3
      minSize: 30Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part
      size: 0
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-4
      wipe: true
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=30
  kernelParameters:
    modules:
    - content: 'options kvm_intel nested=1'
      filename: kvm_intel.conf
    sysctl:
      # For the list of options prohibited to change, refer to
      # https://docs.mirantis.com/mke/3.7/install/predeployment/set-up-kernel-default-protections.html
      fs.aio-max-nr: '1048576'
      fs.file-max: '9223372036854775807'
      fs.inotify.max_user_instances: '4096'
      kernel.core_uses_pid: '1'
      kernel.dmesg_restrict: '1'
      net.ipv4.conf.all.rp_filter: '0'
      net.ipv4.conf.default.rp_filter: '0'
      net.ipv4.conf.k8s-ext.rp_filter: '0'
      net.ipv4.conf.m-pub.rp_filter: '0'
      vm.max_map_count: '262144'
  logicalVolumes:
  - name: root
    size: 0
    vg: lvm_root
  - name: lvp
    size: 0
    vg: lvm_lvp
  postDeployScript: |
    #!/bin/bash -ex
    # used for test-debug only!
    echo "root:r00tme" | sudo chpasswd
    echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"' > /etc/udev/rules.d/60-ssd-scheduler.rules
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo "$(date) pre_deploy_script done" >> /root/pre_deploy_done
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part
    name: lvm_lvp
  - devices:
    - partition: lvm_dummy_part
    # here we create an LVM volume group without mounting or formatting it
    name: lvm_forawesomeapp
mosk-ns_BareMetalHostProfile_worker-storage1.yaml

apiVersion: metal3.io/v1alpha1
kind: BareMetalHostProfile
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: worker-storage1
  namespace: mosk-ns
spec:
  devices:
  - device:
      minSize: 120Gi
      wipe: true
    partitions:
    - name: bios_grub
      partflags:
      - bios_grub
      size: 4Mi
      wipe: true
    - name: uefi
      partflags:
      - esp
      size: 200Mi
      wipe: true
    - name: config-2
      size: 64Mi
      wipe: true
    # Create a dummy partition without mounting it
    - name: lvm_dummy_part
      size: 1Gi
      wipe: true
    - name: lvm_root_part
      size: 0
      wipe: true
  - device:
      # Will be used for Ceph, so it must be wiped
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-1
      minSize: 30Gi
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-2
      minSize: 30Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part
      size: 0
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-3
      wipe: true
  - device:
      byPath: /dev/disk/by-path/pci-0000:00:1f.2-ata-4
      minSize: 30Gi
      wipe: true
    partitions:
    - name: lvm_lvp_part_sdf
      wipe: true
      size: 0
  fileSystems:
  - fileSystem: vfat
    partition: config-2
  - fileSystem: vfat
    mountPoint: /boot/efi
    partition: uefi
  - fileSystem: ext4
    logicalVolume: root
    mountPoint: /
  - fileSystem: ext4
    logicalVolume: lvp
    mountPoint: /mnt/local-volumes/
  grubConfig:
    defaultGrubOptions:
    - GRUB_DISABLE_RECOVERY="true"
    - GRUB_PRELOAD_MODULES=lvm
    - GRUB_TIMEOUT=30
  kernelParameters:
    modules:
    - content: 'options kvm_intel nested=1'
      filename: kvm_intel.conf
    sysctl:
      # For the list of options prohibited to change, refer to
      # https://docs.mirantis.com/mke/3.6/install/predeployment/set-up-kernel-default-protections.html
      fs.aio-max-nr: '1048576'
      fs.file-max: '9223372036854775807'
      fs.inotify.max_user_instances: '4096'
      kernel.core_uses_pid: '1'
      kernel.dmesg_restrict: '1'
      net.ipv4.conf.all.rp_filter: '0'
      net.ipv4.conf.default.rp_filter: '0'
      net.ipv4.conf.k8s-ext.rp_filter: '0'
      net.ipv4.conf.m-pub.rp_filter: '0'
      vm.max_map_count: '262144'
  logicalVolumes:
  - name: root
    size: 0
    vg: lvm_root
  - name: lvp
    size: 0
    vg: lvm_lvp
  postDeployScript: |
    #!/bin/bash -ex
    # used for test-debug only! Allows the operator to log in via TTY.
    echo "root:r00tme" | sudo chpasswd
    # An example of enforcing SSD disks to use the "deadline" I/O scheduler.
    echo 'ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"' > /etc/udev/rules.d/60-ssd-scheduler.rules
    echo $(date) 'post_deploy_script done' >> /root/post_deploy_done
  preDeployScript: |
    #!/bin/bash -ex
    echo "$(date) pre_deploy_script done" >> /root/pre_deploy_done
  volumeGroups:
  - devices:
    - partition: lvm_root_part
    name: lvm_root
  - devices:
    - partition: lvm_lvp_part
    - partition: lvm_lvp_part_sdf
    name: lvm_lvp
  - devices:
    - partition: lvm_dummy_part
    name: lvm_forawesomeapp
Note
If you mount the /var directory, review Mounting recommendations for the /var directory before configuring BareMetalHostProfile.

Create the L2Template objects:

mosk-ns_L2Template_bm-1490-template-controls-netplan.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    bm-1490-template-controls-netplan: anymagicstring
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: bm-1490-template-controls-netplan
  namespace: mosk-ns
spec:
  ifMapping:
  - enp9s0f0
  - enp9s0f1
  - eno1
  - ens3f1
  l3Layout:
  - scope: namespace
    subnetName: lcm-nw
  - scope: namespace
    subnetName: storage-frontend
  - scope: namespace
    subnetName: storage-backend
  - scope: namespace
    subnetName: metallb-public-for-extiface
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 1500
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 1500
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 1500
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 1500
    bonds:
      bond0:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
        - {{ nic 0 }}
        - {{ nic 1 }}
      bond1:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
        - {{ nic 2 }}
        - {{ nic 3 }}
    vlans:
      stor-f:
        id: 1494
        link: bond1
        addresses:
        - {{ip "stor-f:storage-frontend"}}
      stor-b:
        id: 1489
        link: bond1
        addresses:
        - {{ip "stor-b:storage-backend"}}
      m-pub:
        id: 1491
        link: bond0
    bridges:
      k8s-ext:
        interfaces: [m-pub]
        addresses:
        - {{ ip "k8s-ext:metallb-public-for-extiface" }}
      k8s-lcm:
        dhcp4: false
        dhcp6: false
        gateway4: {{ gateway_from_subnet "lcm-nw" }}
        addresses:
        - {{ ip "k8s-lcm:lcm-nw" }}
        nameservers:
          addresses: [ 172.18.176.6 ]
        interfaces:
        - bond0
mosk-ns_L2Template_bm-1490-template-workers-netplan.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    bm-1490-template-workers-netplan: anymagicstring
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: bm-1490-template-workers-netplan
  namespace: mosk-ns
spec:
  ifMapping:
  - eno1
  - eno2
  - ens7f0
  - ens7f1
  l3Layout:
  - scope: namespace
    subnetName: lcm-nw
  - scope: namespace
    subnetName: storage-frontend
  - scope: namespace
    subnetName: storage-backend
  - scope: namespace
    subnetName: metallb-public-for-extiface
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 1500
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 1500
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 1500
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 1500
    bonds:
      bond0:
        interfaces:
        - {{ nic 1 }}
      bond1:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
        - {{ nic 2 }}
        - {{ nic 3 }}
    vlans:
      stor-f:
        id: 1494
        link: bond1
        addresses:
        - {{ip "stor-f:storage-frontend"}}
      stor-b:
        id: 1489
        link: bond1
        addresses:
        - {{ip "stor-b:storage-backend"}}
      m-pub:
        id: 1491
        link: {{ nic 1 }}
    bridges:
      k8s-lcm:
        interfaces:
        - {{ nic 0 }}
        gateway4: {{ gateway_from_subnet "lcm-nw" }}
        addresses:
        - {{ ip "k8s-lcm:lcm-nw" }}
        nameservers:
          addresses: [ 172.18.176.6 ]
      k8s-ext:
        interfaces: [m-pub]
mosk-ns_L2Template_bm-1490-template-controls-netplan-cz7700-pxebond.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: L2Template
metadata:
  labels:
    bm-1490-template-controls-netplan-cz7700-pxebond: anymagicstring
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: bm-1490-template-controls-netplan-cz7700-pxebond
  namespace: mosk-ns
spec:
  ifMapping:
  - enp9s0f0
  - enp9s0f1
  - eno1
  - ens3f1
  l3Layout:
  - scope: namespace
    subnetName: lcm-nw
  - scope: namespace
    subnetName: storage-frontend
  - scope: namespace
    subnetName: storage-backend
  - scope: namespace
    subnetName: metallb-public-for-extiface
  npTemplate: |-
    version: 2
    ethernets:
      {{nic 0}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 0}}
        set-name: {{nic 0}}
        mtu: 1500
      {{nic 1}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 1}}
        set-name: {{nic 1}}
        mtu: 1500
      {{nic 2}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 2}}
        set-name: {{nic 2}}
        mtu: 1500
      {{nic 3}}:
        dhcp4: false
        dhcp6: false
        match:
          macaddress: {{mac 3}}
        set-name: {{nic 3}}
        mtu: 1500
    bonds:
      bond0:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
        - {{ nic 0 }}
        - {{ nic 1 }}
      bond1:
        parameters:
          mode: 802.3ad
          #transmit-hash-policy: layer3+4
          #mii-monitor-interval: 100
        interfaces:
        - {{ nic 2 }}
        - {{ nic 3 }}
    vlans:
      stor-f:
        id: 1494
        link: bond1
        addresses:
        - {{ip "stor-f:storage-frontend"}}
      stor-b:
        id: 1489
        link: bond1
        addresses:
        - {{ip "stor-b:storage-backend"}}
      m-pub:
        id: 1491
        link: bond0
    bridges:
      k8s-ext:
        interfaces: [m-pub]
        addresses:
        - {{ ip "k8s-ext:metallb-public-for-extiface" }}
      k8s-lcm:
        dhcp4: false
        dhcp6: false
        gateway4: {{ gateway_from_subnet "lcm-nw" }}
        addresses:
        - {{ ip "k8s-lcm:lcm-nw" }}
        nameservers:
          addresses: [ 172.18.176.6 ]
        interfaces:
        - bond0
Create the Subnet objects:

mosk-ns_Subnet_lcm-nw.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    ipam/SVC-k8s-lcm: '1'
    kaas.mirantis.com/provider: baremetal
  name: lcm-nw
  namespace: mosk-ns
spec:
  cidr: 172.16.170.0/24
  excludeRanges:
  - 172.16.170.150
  gateway: 172.16.170.1
  includeRanges:
  - 172.16.170.150-172.16.170.250
mosk-ns_Subnet_api-lb.yaml

Mandatory. Mutually exclusive with the deprecated cluster:spec:loadBalancerHost parameter defined in mosk-ns_Cluster_mosk-cluster.yaml.

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
    ipam/SVC-LBhost: "1"
  name: api-lb
  namespace: mosk-ns
spec:
  cidr: 172.16.168.0/24
  includeRanges:
  - 172.16.168.3
mosk-ns_Subnet_metallb-public-for-extiface.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: metallb-public-for-extiface
  namespace: mosk-ns
spec:
  cidr: 172.16.168.0/24
  gateway: 172.16.168.1
  includeRanges:
  - 172.16.168.10-172.16.168.30
mosk-ns_Subnet_storage-backend.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    ipam/SVC-ceph-cluster: '1'
    kaas.mirantis.com/provider: baremetal
  name: storage-backend
  namespace: mosk-ns
spec:
  cidr: 10.12.0.0/24
mosk-ns_Subnet_storage-frontend.yaml

apiVersion: ipam.mirantis.com/v1alpha1
kind: Subnet
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    ipam/SVC-ceph-public: '1'
    kaas.mirantis.com/provider: baremetal
  name: storage-frontend
  namespace: mosk-ns
spec:
  cidr: 10.12.1.0/24
Create MetalLB configuration objects:
mosk-ns_MetalLBConfig-lb-mosk.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: MetalLBConfig
metadata:
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    kaas.mirantis.com/provider: baremetal
  name: lb-mosk
  namespace: mosk-ns
spec:
  ipAddressPools:
  - name: services
    spec:
      addresses:
      - 172.16.168.31-172.16.168.50
      autoAssign: true
      avoidBuggyIPs: false
  l2Advertisements:
  - name: services
    spec:
      interfaces:
      - k8s-ext
      ipAddressPools:
      - services
Create the PublicKey object for a MOSK cluster connection. For details, see PublicKey resource.

mosk-ns_PublicKey_mosk-cluster-key.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: PublicKey
metadata:
  name: mosk-cluster-key
  namespace: mosk-ns
spec:
  publicKey: ssh-rsa AAEXAMPLEXXX
Create the Cluster object. For details, see Cluster resource.

mosk-ns_Cluster_mosk-cluster.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Cluster
metadata:
  annotations:
    kaas.mirantis.com/lcm: 'true'
  labels:
    kaas.mirantis.com/provider: baremetal
  name: mosk-cluster
  namespace: mosk-ns
spec:
  clusterNetwork:
    pods:
      cidrBlocks:
      - 192.169.0.0/16
    serviceDomain: ''
    services:
      cidrBlocks:
      - 10.232.0.0/18
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      dedicatedControlPlane: false
      helmReleases:
      - name: ceph-controller
      - enabled: true
        name: stacklight
        values:
          alertmanagerSimpleConfig:
            email:
              enabled: false
            slack:
              enabled: false
          highAvailabilityEnabled: false
          logging:
            enabled: false
            persistentVolumeClaimSize: 30Gi
          prometheusServer:
            customAlerts: []
            persistentVolumeClaimSize: 16Gi
            retentionSize: 15GB
            retentionTime: 15d
            watchDogAlertEnabled: false
      - name: metallb
        values: {}
      kind: BaremetalClusterProviderSpec
      # loadBalancerHost: 172.16.168.3 is deprecated;
      # configure mosk-ns_Subnet_api-lb.yaml instead
      publicKeys:
      - name: mosk-cluster-key
      release: mke-5-16-0-3-3-6
Create the Machine objects linked to each bmh node. For details, see Machine resource.

mosk-ns_Machine_cz7700-mosk-cluster-control-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz7700-mosk-cluster-control-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz7700
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-controls-netplan-cz7700-pxebond
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz7741-mosk-cluster-control-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz7741-mosk-cluster-control-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: bmhp-cluster-default
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz7741
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-controls-netplan
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz7743-mosk-cluster-control-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz7743-mosk-cluster-control-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    cluster.sigs.k8s.io/control-plane: controlplane
    hostlabel.bm.kaas.mirantis.com/controlplane: controlplane
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: bmhp-cluster-default
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz7743
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-controls-netplan
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz812-mosk-cluster-storage-worker-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz812-mosk-cluster-storage-worker-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    hostlabel.bm.kaas.mirantis.com/storage: storage
    hostlabel.bm.kaas.mirantis.com/worker: worker
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: worker-storage1
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz812
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-workers-netplan
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz813-mosk-cluster-storage-worker-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz813-mosk-cluster-storage-worker-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    hostlabel.bm.kaas.mirantis.com/storage: storage
    hostlabel.bm.kaas.mirantis.com/worker: worker
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: worker-storage1
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz813
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-workers-netplan
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz814-mosk-cluster-storage-worker-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz814-mosk-cluster-storage-worker-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    hostlabel.bm.kaas.mirantis.com/storage: storage
    hostlabel.bm.kaas.mirantis.com/worker: worker
    kaas.mirantis.com/provider: baremetal
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: worker-storage1
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz814
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-workers-netplan
      publicKeys:
      - name: mosk-cluster-key
mosk-ns_Machine_cz815-mosk-cluster-worker-noefi-.yaml

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  generateName: cz815-mosk-cluster-worker-noefi-
  labels:
    cluster.sigs.k8s.io/cluster-name: mosk-cluster
    hostlabel.bm.kaas.mirantis.com/worker: worker
    kaas.mirantis.com/provider: baremetal
    si-role/node-for-delete: 'true'
  namespace: mosk-ns
spec:
  providerSpec:
    value:
      apiVersion: baremetal.k8s.io/v1alpha1
      bareMetalHostProfile:
        name: worker-storage1
        namespace: mosk-ns
      hostSelector:
        matchLabels:
          kaas.mirantis.com/baremetalhost-id: cz815
      kind: BareMetalMachineProviderSpec
      l2TemplateSelector:
        label: bm-1490-template-workers-netplan
      publicKeys:
      - name: mosk-cluster-key
Verify that the bmh nodes are in the provisioning state:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get bmh -o wide
Example of system response:
NAME                                STATUS   STATE          CONSUMER                                  BMC            BOOTMODE   ONLINE   ERROR
cz7700-mosk-cluster-control-noefi   OK       provisioning   cz7700-mosk-cluster-control-noefi-8bkqw   192.168.1.12   legacy     true
cz7741-mosk-cluster-control-noefi   OK       provisioning   cz7741-mosk-cluster-control-noefi-42tp2   192.168.1.76   legacy     true
cz7743-mosk-cluster-control-noefi   OK       provisioning   cz7743-mosk-cluster-control-noefi-8cwpw   192.168.1.78   legacy     true
...
Wait until all bmh nodes are in the provisioned state.

Verify that the lcmmachine phase has started:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get lcmmachines -o wide
Example of system response:
NAME                                      CLUSTERNAME    TYPE      STATE     INTERNALIP       HOSTNAME                                         AGENTVERSION
cz7700-mosk-cluster-control-noefi-8bkqw   mosk-cluster   control   Deploy    172.16.170.153   kaas-node-803721b4-227c-4675-acc5-15ff9d3cfde2   v0.2.0-349-g4870b7f5
cz7741-mosk-cluster-control-noefi-42tp2   mosk-cluster   control   Prepare   172.16.170.152   kaas-node-6b8f0d51-4c5e-43c5-ac53-a95988b1a526   v0.2.0-349-g4870b7f5
cz7743-mosk-cluster-control-noefi-8cwpw   mosk-cluster   control   Prepare   172.16.170.151   kaas-node-e9b7447d-5010-439b-8c95-3598518f8e0a   v0.2.0-349-g4870b7f5
...
Verify that the lcmmachine phase is complete and the Kubernetes cluster is created:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get lcmmachines -o wide
Example of system response:
NAME                                      CLUSTERNAME    TYPE      STATE   INTERNALIP       HOSTNAME                                         AGENTVERSION
cz7700-mosk-cluster-control-noefi-8bkqw   mosk-cluster   control   Ready   172.16.170.153   kaas-node-803721b4-227c-4675-acc5-15ff9d3cfde2   v0.2.0-349-g4870b7f5
cz7741-mosk-cluster-control-noefi-42tp2   mosk-cluster   control   Ready   172.16.170.152   kaas-node-6b8f0d51-4c5e-43c5-ac53-a95988b1a526   v0.2.0-349-g4870b7f5
cz7743-mosk-cluster-control-noefi-8cwpw   mosk-cluster   control   Ready   172.16.170.151   kaas-node-e9b7447d-5010-439b-8c95-3598518f8e0a   v0.2.0-349-g4870b7f5
...
Create the KaaSCephCluster object:

mosk-ns_KaaSCephCluster_ceph-cluster-mosk-cluster.yaml

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: ceph-cluster-mosk-cluster
  namespace: mosk-ns
spec:
  cephClusterSpec:
    nodes:
      # Add the exact node names.
      # Obtain the name from the CONSUMER field of "get bmh -o wide".
      cz812-mosk-cluster-storage-worker-noefi-58spl:
        roles:
        - mgr
        - mon
        # All disk configuration must be reflected in the baremetalhostprofile
        storageDevices:
        - config:
            deviceClass: ssd
          fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231434939
      cz813-mosk-cluster-storage-worker-noefi-lr4k4:
        roles:
        - mgr
        - mon
        storageDevices:
        - config:
            deviceClass: ssd
          fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231440912
      cz814-mosk-cluster-storage-worker-noefi-z2m67:
        roles:
        - mgr
        - mon
        storageDevices:
        - config:
            deviceClass: ssd
          fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231443409
    pools:
    - default: true
      deviceClass: ssd
      name: kubernetes
      replicated:
        size: 3
      role: kubernetes
  k8sCluster:
    name: mosk-cluster
    namespace: mosk-ns
Obtain kubeconfig of the newly created MOSK cluster:

KUBECONFIG=kubeconfig kubectl -n mosk-ns get secrets mosk-cluster-kubeconfig -o jsonpath='{.data.admin\.conf}' | base64 -d | tee mosk.kubeconfig
Verify the status of the Ceph cluster in your MOSK cluster:
KUBECONFIG=mosk.kubeconfig kubectl -n rook-ceph exec -it $(KUBECONFIG=mosk.kubeconfig kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') -- ceph -s
Example of system response:
  cluster:
    id:     e75c6abd-c5d5-4ae8-af17-4711354ff8ef
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum a,b,c (age 55m)
    mgr: a(active, since 55m)
    osd: 3 osds: 3 up (since 54m), 3 in (since 54m)

  data:
    pools:   1 pools, 32 pgs
    objects: 273 objects, 555 MiB
    usage:   4.0 GiB used, 1.6 TiB / 1.6 TiB avail
    pgs:     32 active+clean

  io:
    client: 51 KiB/s wr, 0 op/s rd, 4 op/s wr