Add a Ceph cluster since MOSK 25.2
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 to replace the deprecated KaaSCephCluster CR. For the equivalent procedure with the KaaSCephCluster CR, refer to the corresponding section of the documentation.
After you add machines to your new MOSK cluster as described in Add a machine, create a Ceph cluster on top of this cluster using CLI.
For an advanced configuration through the MiraCeph CR, see Ceph advanced configuration. For the Ceph Controller configuration through Kubernetes templates to manage Ceph node resources, see Enable management of Ceph tolerations and resources.
The procedure below enables you to create a Ceph cluster with a minimum of three Ceph nodes that provides persistent volumes to the Kubernetes workloads in the MOSK cluster.
To create a Ceph cluster in the MOSK cluster using CLI:
On the management cluster, verify that the overall status of the MOSK cluster is ready and that all conditions are in the Ready state:

kubectl -n <moskClusterProject> get cluster <clusterName> -o yaml

Substitute <moskClusterProject> and <clusterName> with the corresponding MOSK cluster namespace and name.

Example system response of a healthy cluster
status:
  providerStatus:
    ready: true
    conditions:
    - message: Helm charts are successfully installed(upgraded).
      ready: true
      type: Helm
    - message: Kubernetes objects are fully up.
      ready: true
      type: Kubernetes
    - message: All requested nodes are ready.
      ready: true
      type: Nodes
    - message: Maintenance state of the cluster is false
      ready: true
      type: Maintenance
    - message: TLS configuration settings are applied
      ready: true
      type: TLS
    - message: Kubelet is Ready on all nodes belonging to the cluster
      ready: true
      type: Kubelet
    - message: Swarm is Ready on all nodes belonging to the cluster
      ready: true
      type: Swarm
    - message: All provider instances of the cluster are Ready
      ready: true
      type: ProviderInstance
    - message: LCM agents have the latest version
      ready: true
      type: LCMAgent
    - message: StackLight is fully up.
      ready: true
      type: StackLight
    - message: OIDC configuration has been applied.
      ready: true
      type: OIDC
    - message: Load balancer 10.100.91.150 for kubernetes API has status HEALTHY
      ready: true
      type: LoadBalancer
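If you only need a quick readiness check instead of the full status, the following is a minimal sketch that prints the ready field shown above, assuming a standard kubectl with JSONPath support:

# Prints "true" when the cluster is ready; review the full conditions list otherwise.
kubectl -n <moskClusterProject> get cluster <clusterName> \
  -o jsonpath='{.status.providerStatus.ready}{"\n"}'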
Create a YAML file with the Ceph cluster specification:
apiVersion: lcm.mirantis.com/v1alpha1
kind: MiraCeph
metadata:
  name: <cephClusterName>
  namespace: ceph-lcm-mirantis
Substitute <cephClusterName> with the required name for the Ceph cluster. This name will be used in the Ceph LCM operations.

Add explicit network configuration of the Ceph cluster using the network section:

spec:
  network:
    publicNet: <publicNet>
    clusterNet: <clusterNet>
Substitute the following values:

<publicNet> is a CIDR definition or a comma-separated list of CIDR definitions (if the MOSK cluster uses multiple networks) of the public network for the Ceph data. The values must match the corresponding values of the cluster Subnet object.

<clusterNet> is a CIDR definition or a comma-separated list of CIDR definitions (if the MOSK cluster uses multiple networks) of the replication network for the Ceph data. The values must match the corresponding values of the cluster Subnet object.
Configure the Ceph Manager and Ceph Monitor roles to select the nodes that must host the Ceph Monitor and Ceph Manager daemons:

On the MOSK cluster, obtain the names of the nodes on which to place the Ceph Monitor and Ceph Manager daemons:
kubectl get nodes
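The output below is illustrative only; node names, roles, and versions depend on your deployment. The NAME column provides the values to use in the nodes section:

NAME       STATUS   ROLES    AGE   VERSION
master-1   Ready    master   25h   v1.27.x
master-2   Ready    master   25h   v1.27.x
master-3   Ready    master   25h   v1.27.x
worker-1   Ready    <none>   25h   v1.27.x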
Add the nodes section with the mon and mgr roles defined:

spec:
  nodes:
  - name: <mgr-node-1>
    roles:
    - <role-1>
    - <role-2>
    ...
  - name: <mgr-node-2>
    roles:
    - <role-1>
    - <role-2>
    ...
Substitute <mgr-node-X> with the corresponding node names and <role-X> with the corresponding roles of daemon placement, for example, mon or mgr.

For other optional node parameters, see Ceph advanced configuration.
Configure Ceph OSD daemons for Ceph cluster data storage:
Note
This step involves the deployment of Ceph Monitor and Ceph Manager daemons on nodes that are different from the ones hosting Ceph cluster OSDs. However, it is also possible to colocate Ceph OSDs, Ceph Monitor, and Ceph Manager daemons on the same nodes. You can achieve this by configuring the roles and devices sections accordingly. This kind of configuration flexibility is particularly useful in scenarios such as hyper-converged clusters.

Warning
The minimal production cluster requires at least three nodes for Ceph Monitor daemons and three nodes for Ceph OSDs.
On the management cluster, obtain the names of the machines with disks intended for storing Ceph data:
kubectl -n <moskClusterProject> get machine
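To list just the machine names, you can also use the standard -o name output, for example:

# Prints one machine per line in the <resource>/<name> form.
kubectl -n <moskClusterProject> get machine -o name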
For each machine, use status.providerStatus.hardware.storage to obtain information about node disks:

kubectl -n <moskClusterProject> get machine <machineName> -o yaml
Output example of the machine hardware details
status:
  providerStatus:
    hardware:
      storage:
      - byID: /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/scsi-305ad99618d66a21f
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        name: /dev/sda
        serialNumber: 05ad99618d66a21f
        size: 61
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x26d546263bd312b8
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/scsi-326d546263bd312b8
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/wwn-0x26d546263bd312b8
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        name: /dev/sdb
        serialNumber: 26d546263bd312b8
        size: 32
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byIDs:
        - /dev/disk/by-id/lvm-pv-uuid-MncrcO-6cel-0QsB-IKaY-e8UK-6gDy-k2hOtf
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/scsi-32e52abb48862dbdc
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        name: /dev/sdc
        serialNumber: 2e52abb48862dbdc
        size: 61
        type: hdd
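To print only the storage section instead of the whole machine object, the following is a minimal sketch that assumes the jq utility is available on the workstation:

kubectl -n <moskClusterProject> get machine <machineName> -o json \
  | jq '.status.providerStatus.hardware.storage'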
Select by-id symlinks on the disks to be used in the Ceph cluster. The symlinks must meet the following requirements:

A by-id symlink must contain status.providerStatus.hardware.storage.serialNumber

A by-id symlink must not contain wwn

For the example above, if you are going to use the sdc disk to store Ceph data on it, use the /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc symlink. It will be persistent and will not be affected by node reboot. For a scripted way to apply these filters, see the example after the note below.

Note
For details about storage device formats, see Addressing storage devices since MOSK 25.2.
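The following is a minimal filtering sketch for the requirements above, assuming jq is available; the /dev/sdc disk name is taken from the example output and is hypothetical for your environment:

# Keep by-id symlinks that contain the disk serial number and drop the wwn-based ones.
kubectl -n <moskClusterProject> get machine <machineName> -o json \
  | jq -r '.status.providerStatus.hardware.storage[]
           | select(.name == "/dev/sdc")
           | .serialNumber as $sn
           | .byIDs[]
           | select(contains($sn) and (contains("wwn") | not))'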
For each machine, use status.instanceName to obtain the corresponding node name:

kubectl -n <moskClusterProject> get machine <machineName> -o yaml

Output example of the machine instance name

status:
  instanceName: kaas-node-a99016f5-68da-450b-a0c8-06b1f9bb7131
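To print only the node name, a minimal JSONPath sketch:

kubectl -n <moskClusterProject> get machine <machineName> \
  -o jsonpath='{.status.instanceName}{"\n"}'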
Specify the selected by-id symlinks in the spec.nodes.devices.fullPath field along with the spec.nodes.devices.config.deviceClass field:

Example configuration
spec:
  nodes:
  - name: <storage-node-1>
    devices:
    - fullPath: <byIDSymlink-1>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-2>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-3>
      config:
        deviceClass: <deviceClass-2>
    ...
  - name: <storage-node-2>
    devices:
    - fullPath: <byIDSymlink-4>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-5>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-6>
      config:
        deviceClass: <deviceClass-2>
    ...
  - name: <storage-node-3>
    devices:
    - fullPath: <byIDSymlink-7>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-8>
      config:
        deviceClass: <deviceClass-1>
    - fullPath: <byIDSymlink-9>
      config:
        deviceClass: <deviceClass-2>
    ...
Substitute the following values:

<storage-node-X> with the corresponding node names

<byIDSymlink-X> with the obtained by-id symlinks from status.providerStatus.hardware.storage.byIDs

<deviceClass-X> with the obtained disk types from status.providerStatus.hardware.storage.type
Configure the pools for Image, Block Storage, and Compute services.
Note
Ceph validates the specified pools. Therefore, do not omit any of the following pools.
Ceph pool configuration
spec:
  pools:
  - default: false
    deviceClass: hdd
    name: volumes
    replicated:
      size: 3
    role: volumes
  - default: false
    deviceClass: hdd
    name: vms
    replicated:
      size: 3
    role: vms
  - default: false
    deviceClass: hdd
    name: backup
    replicated:
      size: 3
    role: backup
  - default: false
    deviceClass: hdd
    name: images
    replicated:
      size: 3
    role: images
Each Ceph pool, depending on its role, has a default targetSizeRatio value that defines the expected consumption of the total Ceph cluster capacity. The default ratio values for MOSK pools are as follows:

20.0% for a Ceph pool with role volumes

40.0% for a Ceph pool with role vms

10.0% for a Ceph pool with role images

10.0% for a Ceph pool with role backup

Mirantis highly recommends overriding the targetSizeRatio defaults. For details, see Calculate target ratio for Ceph pools. An illustrative override sketch follows this list.
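As an illustration only, the sketch below assumes that targetSizeRatio is set per pool under the replicated section, following the upstream Rook pool specification; verify the exact field placement and calculate the actual values as described in Calculate target ratio for Ceph pools:

spec:
  pools:
  - default: false
    deviceClass: hdd
    name: volumes
    replicated:
      size: 3
      # Illustrative value only; derive the real ratio from your capacity plan.
      targetSizeRatio: 20.0
    role: volumes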
Configure Ceph Block Pools to use RBD. For the detailed configuration, refer to Pool parameters. Example configuration:
spec:
  pools:
  - default: true
    deviceClass: hdd
    name: kubernetes
    replicated:
      size: 3
    role: kubernetes
Configure Ceph Object Storage to use OpenStack Swift Object Storage.
Example configuration
spec:
  objectStorage:
    rgw:
      dataPool:
        deviceClass: hdd
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
        failureDomain: host
      gateway:
        instances: 3
        port: 80
        securePort: 8443
      metadataPool:
        deviceClass: hdd
        failureDomain: host
        replicated:
          size: 3
      name: object-store
      preservePoolsOnDelete: false
Optional. Configure Ceph Shared Filesystem to use CephFS. For the detailed configuration, refer to Configure Ceph Shared File System (CephFS).
Example configuration
spec:
  sharedFilesystem:
    cephFS:
    - name: cephfs-store
      dataPools:
      - name: cephfs-pool-1
        deviceClass: hdd
        replicated:
          size: 3
        failureDomain: host
      metadataPool:
        deviceClass: nvme
        replicated:
          size: 3
        failureDomain: host
      metadataServer:
        activeCount: 1
        activeStandby: false
When the Ceph cluster specification is complete, apply the built YAML file on the MOSK cluster:
kubectl apply -f <miraceph-template>.yaml
Substitute <miraceph-template> with the name of the file that contains the MiraCeph specification.

The resulting example of the MiraCeph template

apiVersion: lcm.mirantis.com/v1alpha1
kind: MiraCeph
metadata:
  name: kaas-ceph
  namespace: ceph-lcm-mirantis
spec:
  network:
    publicNet: 10.10.0.0/24
    clusterNet: 10.11.0.0/24
  nodes:
  - name: master-1
    roles:
    - mon
    - mgr
  - name: master-2
    roles:
    - mon
    - mgr
  - name: master-3
    roles:
    - mon
    - mgr
  - name: worker-1
    devices:
    - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231443409
      config:
        deviceClass: ssd
  - name: worker-2
    devices:
    - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231440912
      config:
        deviceClass: ssd
  - name: worker-3
    devices:
    - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231434939
      config:
        deviceClass: ssd
  pools:
  - default: true
    deviceClass: hdd
    name: kubernetes
    replicated:
      size: 3
    role: kubernetes
  - default: false
    deviceClass: hdd
    name: volumes
    replicated:
      size: 3
    role: volumes
  - default: false
    deviceClass: hdd
    name: vms
    replicated:
      size: 3
    role: vms
  - default: false
    deviceClass: hdd
    name: backup
    replicated:
      size: 3
    role: backup
  - default: false
    deviceClass: hdd
    name: images
    replicated:
      size: 3
    role: images
  objectStorage:
    rgw:
      dataPool:
        deviceClass: ssd
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
        failureDomain: host
      gateway:
        instances: 3
        port: 80
        securePort: 8443
      metadataPool:
        deviceClass: ssd
        failureDomain: host
        replicated:
          size: 3
      name: object-store
      preservePoolsOnDelete: false
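To confirm that the object was created from your file, you can query it on the MOSK cluster by file reference, for example:

# Uses the same file that was applied, so you do not need to know the resource short name.
kubectl get -f <miraceph-template>.yaml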
Wait for the MiraCephHealth object to be created and then for status.shortClusterInfo.state to become Ready:

kubectl -n ceph-lcm-mirantis get mchealth -o yaml
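Instead of polling manually, the following is a minimal sketch that assumes kubectl 1.23 or later, which supports JSONPath conditions in kubectl wait:

# Run after the MiraCephHealth object appears; adjust the timeout to your environment.
kubectl -n ceph-lcm-mirantis wait mchealth --all \
  --for=jsonpath='{.status.shortClusterInfo.state}'=Ready --timeout=30m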
Verify the cluster as described in Verify Ceph.
Once all pools are created, verify that an appropriate secret required for a successful deployment of the OpenStack services that rely on Ceph is created in the openstack-ceph-shared namespace:

kubectl -n openstack-ceph-shared get secrets openstack-ceph-keys
Example of a positive system response:
NAME                  TYPE     DATA   AGE
openstack-ceph-keys   Opaque   7      36m
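To additionally list which keys the secret carries (the seven DATA entries above), a minimal sketch assuming jq is available:

kubectl -n openstack-ceph-shared get secret openstack-ceph-keys -o json \
  | jq -r '.data | keys[]'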