Add a Ceph cluster¶
After you add machines to your new bare metal managed cluster as described in Add a machine, create a Ceph cluster on top of this managed cluster.
For an advanced configuration through the KaaSCephCluster CR, see Ceph advanced configuration. To configure Ceph Controller through Kubernetes templates to manage Ceph node resources, see Enable Ceph tolerations and resources management.
The procedure below enables you to create a Ceph cluster with a minimum of three Ceph nodes that provides persistent volumes to the Kubernetes workloads in the managed cluster.
Create a Ceph cluster using the CLI¶
Verify that the overall status of the managed cluster is ready with all conditions in the Ready state:

kubectl -n <managedClusterProject> get cluster <clusterName> -o yaml

Substitute <managedClusterProject> and <clusterName> with the corresponding managed cluster namespace and name.

Example of system response:

status:
  providerStatus:
    ready: true
    conditions:
    - message: Helm charts are successfully installed(upgraded).
      ready: true
      type: Helm
    - message: Kubernetes objects are fully up.
      ready: true
      type: Kubernetes
    - message: All requested nodes are ready.
      ready: true
      type: Nodes
    - message: Maintenance state of the cluster is false
      ready: true
      type: Maintenance
    - message: TLS configuration settings are applied
      ready: true
      type: TLS
    - message: Kubelet is Ready on all nodes belonging to the cluster
      ready: true
      type: Kubelet
    - message: Swarm is Ready on all nodes belonging to the cluster
      ready: true
      type: Swarm
    - message: All provider instances of the cluster are Ready
      ready: true
      type: ProviderInstance
    - message: LCM agents have the latest version
      ready: true
      type: LCMAgent
    - message: StackLight is fully up.
      ready: true
      type: StackLight
    - message: OIDC configuration has been applied.
      ready: true
      type: OIDC
    - message: Load balancer 10.100.91.150 for kubernetes API has status HEALTHY
      ready: true
      type: LoadBalancer
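If you only need the aggregated readiness flag rather than the full object, a jsonpath query is a quick alternative; this is a convenience sketch that assumes the same Cluster object and field layout as in the example response above:

kubectl -n <managedClusterProject> get cluster <clusterName> \
  -o jsonpath='{.status.providerStatus.ready}'
# Expected output: true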
Create a YAML file with the Ceph cluster specification:

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: <cephClusterName>
  namespace: <managedClusterProject>
spec:
  k8sCluster:
    name: <clusterName>
    namespace: <managedClusterProject>
Substitute <cephClusterName> with the required name of the Ceph cluster. This name will be used in the Ceph LCM operations.

Add explicit network configuration of the Ceph cluster using the network section:

spec:
  cephClusterSpec:
    network:
      publicNet: <publicNet>
      clusterNet: <clusterNet>
Substitute the following values that should match the corresponding values of the cluster Subnet object:

<publicNet> is a CIDR definition or a comma-separated list of CIDR definitions (if the managed cluster uses multiple networks) of the public network for the Ceph data.

<clusterNet> is a CIDR definition or a comma-separated list of CIDR definitions (if the managed cluster uses multiple networks) of the replication network for the Ceph data.
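For illustration, a filled-in network section could look as follows; the CIDR values below are hypothetical, and the comma-separated form of publicNet applies only when the managed cluster uses multiple networks:

spec:
  cephClusterSpec:
    network:
      publicNet: 10.10.0.0/24,10.10.1.0/24
      clusterNet: 10.11.0.0/24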
Configure the Ceph Manager and Ceph Monitor roles to select the nodes that must host the Ceph Monitor and Ceph Manager daemons:

Obtain the names of the machines on which to place the Ceph Monitor and Ceph Manager daemons:

kubectl -n <managedClusterProject> get machine

Add the nodes section with the mon and mgr roles defined:

spec:
  cephClusterSpec:
    nodes:
      <mgr-node-1>:
        roles:
        - <role-1>
        - <role-2>
        ...
      <mgr-node-2>:
        roles:
        - <role-1>
        - <role-2>
        ...
Substitute <mgr-node-X> with the corresponding Machine object names and <role-X> with the corresponding roles of daemon placement, for example, mon or mgr.

For other optional node parameters, see Ceph advanced configuration.
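For example, with three control plane machines (the names below are illustrative and reused from the resulting template at the end of this procedure), the section could look as follows:

spec:
  cephClusterSpec:
    nodes:
      master-1:
        roles:
        - mon
        - mgr
      master-2:
        roles:
        - mon
        - mgr
      master-3:
        roles:
        - mon
        - mgr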
Configure Ceph OSD daemons for Ceph cluster data storage:

Note

This step involves the deployment of Ceph Monitor and Ceph Manager daemons on nodes that are different from the ones hosting Ceph cluster OSDs. However, you can also colocate Ceph OSDs, Ceph Monitor, and Ceph Manager daemons on the same nodes by configuring the roles and storageDevices sections accordingly. This kind of configuration flexibility is particularly useful in scenarios such as hyper-converged clusters.

Warning

The minimal production cluster requires at least three nodes for Ceph Monitor daemons and three nodes for Ceph OSDs.
Obtain the names of the machines with disks intended for storing Ceph data:

kubectl -n <managedClusterProject> get machine

For each machine, use status.providerStatus.hardware.storage to obtain information about the node disks:

kubectl -n <managedClusterProject> get machine <machineName> -o yaml

Example of system response with machine hardware details:

status:
  providerStatus:
    hardware:
      storage:
      - byID: /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/scsi-305ad99618d66a21f
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_05ad99618d66a21f
        - /dev/disk/by-id/wwn-0x05ad99618d66a21f
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0
        name: /dev/sda
        serialNumber: 05ad99618d66a21f
        size: 61
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x26d546263bd312b8
        byIDs:
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/scsi-326d546263bd312b8
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_26d546263bd312b8
        - /dev/disk/by-id/wwn-0x26d546263bd312b8
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:2
        name: /dev/sdb
        serialNumber: 26d546263bd312b8
        size: 32
        type: hdd
      - byID: /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byIDs:
        - /dev/disk/by-id/lvm-pv-uuid-MncrcO-6cel-0QsB-IKaY-e8UK-6gDy-k2hOtf
        - /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/scsi-32e52abb48862dbdc
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/wwn-0x2e52abb48862dbdc
        byPath: /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        byPaths:
        - /dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:1
        name: /dev/sdc
        serialNumber: 2e52abb48862dbdc
        size: 61
        type: hdd
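To print only the fields that matter for disk selection instead of reading the full YAML, a jsonpath query is convenient; this sketch assumes the field layout shown in the example above:

kubectl -n <managedClusterProject> get machine <machineName> \
  -o jsonpath='{range .status.providerStatus.hardware.storage[*]}{.name}{"\t"}{.type}{"\t"}{.serialNumber}{"\n"}{end}'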
Select the by-id symlinks of the disks to be used in the Ceph cluster. The symlinks must meet the following requirements:

A by-id symlink must contain status.providerStatus.hardware.storage.serialNumber

A by-id symlink must not contain wwn

For the example above, to use the sdc disk for storing Ceph data, select the /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc symlink. It is persistent and will not be affected by a node reboot.

Note

For details about storage device formats, see Mirantis Container Cloud Reference Architecture: Addressing storage devices.
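If you have SSH access to the node, you can cross-check these selection rules directly on the host; the following sketch uses the serial number from the example above and filters out the wwn-based symlinks:

ls -l /dev/disk/by-id/ | grep 2e52abb48862dbdc | grep -v wwn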
Specify the selected by-id symlinks in the spec.cephClusterSpec.nodes.storageDevices.fullPath field along with the spec.cephClusterSpec.nodes.storageDevices.config.deviceClass field:

spec:
  cephClusterSpec:
    nodes:
      <storage-node-1>:
        storageDevices:
        - fullPath: <byIDSymlink-1>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-2>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-3>
          config:
            deviceClass: <deviceClass-2>
        ...
      <storage-node-2>:
        storageDevices:
        - fullPath: <byIDSymlink-4>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-5>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-6>
          config:
            deviceClass: <deviceClass-2>
      <storage-node-3>:
        storageDevices:
        - fullPath: <byIDSymlink-7>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-8>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-9>
          config:
            deviceClass: <deviceClass-2>
Substitute the following values:

<storage-node-X> with the corresponding Machine object names

<byIDSymlink-X> with the by-id symlinks obtained from status.providerStatus.hardware.storage.byIDs

<deviceClass-X> with the disk types obtained from status.providerStatus.hardware.storage.type
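Continuing the hardware example above, a single storage node that uses the sdc disk would look as follows; the machine name placeholder is kept intentionally:

spec:
  cephClusterSpec:
    nodes:
      <storage-node-1>:
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
          config:
            deviceClass: hdd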
Configure the pools for the Image, Block Storage, and Compute services:

Note

Ceph validates the specified pools. Therefore, do not omit any of the following pools.

spec:
  cephClusterSpec:
    pools:
    - default: true
      deviceClass: hdd
      name: kubernetes
      replicated:
        size: 3
      role: kubernetes
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: vms
      replicated:
        size: 3
      role: vms
    - default: false
      deviceClass: hdd
      name: backup
      replicated:
        size: 3
      role: backup
    - default: false
      deviceClass: hdd
      name: images
      replicated:
        size: 3
      role: images
Each Ceph pool, depending on its role, has a default targetSizeRatio value that defines the expected consumption of the total Ceph cluster capacity. The default ratio values for MOSK pools are as follows:

20.0% for a Ceph pool with the role volumes

40.0% for a Ceph pool with the role vms

10.0% for a Ceph pool with the role images

10.0% for a Ceph pool with the role backup
Once all pools are created, verify that an appropriate secret required for a successful deployment of the OpenStack services that rely on Ceph is created in the openstack-ceph-shared namespace:

kubectl -n openstack-ceph-shared get secrets openstack-ceph-keys

Example of a positive system response:

NAME                  TYPE    DATA  AGE
openstack-ceph-keys   Opaque  7     36m
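To confirm which keys the secret carries without printing their values, a standard describe call is sufficient; this is only a convenience check:

kubectl -n openstack-ceph-shared describe secret openstack-ceph-keys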
Configure Ceph Object Storage to use OpenStack Swift Object Storage.

Example configuration:

spec:
  cephClusterSpec:
    objectStorage:
      rgw:
        dataPool:
          deviceClass: hdd
          erasureCoded:
            codingChunks: 1
            dataChunks: 2
          failureDomain: host
        gateway:
          instances: 3
          port: 80
          securePort: 8443
        metadataPool:
          deviceClass: hdd
          failureDomain: host
          replicated:
            size: 3
        name: object-store
        preservePoolsOnDelete: false

In this example, the RGW data pool uses erasure coding with two data chunks and one coding chunk, so each object is stored with a 1.5x raw-space overhead and the pool tolerates the loss of one failure domain (host). The metadata pool is replicated with three copies.
When the Ceph cluster specification is complete, apply the built YAML file on the management cluster:

kubectl apply -f <kcc-template>.yaml

Substitute <kcc-template> with the name of the file containing the KaaSCephCluster specification.

The resulting example of the KaaSCephCluster template:

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: kaas-ceph
  namespace: child-namespace
spec:
  k8sCluster:
    name: child-cluster
    namespace: child-namespace
  cephClusterSpec:
    network:
      publicNet: 10.10.0.0/24
      clusterNet: 10.11.0.0/24
    nodes:
      master-1:
        roles:
        - mon
        - mgr
      master-2:
        roles:
        - mon
        - mgr
      master-3:
        roles:
        - mon
        - mgr
      worker-1:
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231443409
          config:
            deviceClass: ssd
      worker-2:
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231440912
          config:
            deviceClass: ssd
      worker-3:
        storageDevices:
        - fullPath: /dev/disk/by-id/scsi-1ATA_WDC_WDS100T2B0A-00SM50_200231434939
          config:
            deviceClass: ssd
    pools:
    - default: true
      deviceClass: hdd
      name: kubernetes
      replicated:
        size: 3
      role: kubernetes
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: vms
      replicated:
        size: 3
      role: vms
    - default: false
      deviceClass: hdd
      name: backup
      replicated:
        size: 3
      role: backup
    - default: false
      deviceClass: hdd
      name: images
      replicated:
        size: 3
      role: images
    objectStorage:
      rgw:
        dataPool:
          deviceClass: ssd
          erasureCoded:
            codingChunks: 1
            dataChunks: 2
          failureDomain: host
        gateway:
          instances: 3
          port: 80
          securePort: 8443
        metadataPool:
          deviceClass: ssd
          failureDomain: host
          replicated:
            size: 3
        name: object-store
        preservePoolsOnDelete: false
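Optionally, you can let the API server validate the composed manifest without persisting it before the actual apply; this is a generic kubectl technique rather than a required step of the procedure:

kubectl apply -f <kcc-template>.yaml --dry-run=server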
Wait for the KaaSCephCluster status and for status.shortClusterInfo.state to become Ready:

kubectl -n <managedClusterProject> get kcc -o yaml
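To poll only the state field instead of the whole object, a jsonpath query against the named resource also works; this sketch assumes the kcc short name used above resolves to KaaSCephCluster:

kubectl -n <managedClusterProject> get kcc <cephClusterName> \
  -o jsonpath='{.status.shortClusterInfo.state}'
# Expected output: Ready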
Verify your Ceph cluster as described in Verify Ceph.
Create a Ceph cluster using the web UI¶
Warning
Mirantis highly recommends adding a Ceph cluster using the CLI instead of the web UI.
The web UI capabilities for adding a Ceph cluster are limited and lack flexibility in defining Ceph cluster specifications. For example, if an error occurs while adding a Ceph cluster using the web UI, usually you can address it only through the CLI.
The web UI functionality for managing Ceph clusters will be deprecated in one of the following releases.
Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.

Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name. The Cluster page with the Machines and Ceph clusters lists opens.
In the Ceph Clusters block, click Create Cluster.
Configure the Ceph cluster in the Create New Ceph Cluster wizard that opens:
Create new Ceph cluster¶

Section: General settings

Name
  The Ceph cluster name.

Cluster Network
  Replication network for Ceph OSDs. Must contain the CIDR definition and match the corresponding values of the cluster L2Template object or the environment network values.

Public Network
  Public network for the Ceph data. Must contain the CIDR definition and match the corresponding values of the cluster L2Template object or the environment network values.

Enable OSDs LCM
  Select to enable LCM for Ceph OSDs.

Section: Machines / Machine #1-3

Select machine
  Select the name of the Kubernetes machine that will host the corresponding Ceph node in the Ceph cluster.

Manager, Monitor
  Select the required Ceph services to install on the Ceph node.

Devices
  Select the disk that Ceph will use.

  Warning

  Do not select the device used for system services, for example, sda.

  Warning

  A Ceph cluster does not support removable devices, that is, devices on hosts with the hotplug functionality enabled. To use such devices as Ceph OSD data devices, make them non-removable or disable the hotplug functionality in the BIOS settings for the disks that are configured to be used as Ceph OSD data devices.

Enable Object Storage
  Select to enable the single-instance RGW Object Storage.
To add more Ceph nodes to the new Ceph cluster, click + next to any Ceph Machine title in the Machines tab. Configure a Ceph node as required.

Warning

Do not add more than 3 Manager and/or Monitor services to the Ceph cluster.

After you add and configure all nodes in your Ceph cluster, click Create.
Open the KaaSCephCluster CR for editing as described in Ceph advanced configuration.

Verify that the following snippet is present in the KaaSCephCluster configuration:

network:
  clusterNet: 10.10.10.0/24
  publicNet: 10.10.11.0/24
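For reference, one minimal way to open the resource for editing is the standard kubectl edit call; substitute the placeholders as in the CLI procedure above:

kubectl -n <managedClusterProject> edit kcc <cephClusterName>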
Configure the pools for the Image, Block Storage, and Compute services.

Note

Ceph validates the specified pools. Therefore, do not omit any of the following pools.

spec:
  cephClusterSpec:
    pools:
    - default: true
      deviceClass: hdd
      name: kubernetes
      replicated:
        size: 3
      role: kubernetes
    - default: false
      deviceClass: hdd
      name: volumes
      replicated:
        size: 3
      role: volumes
    - default: false
      deviceClass: hdd
      name: vms
      replicated:
        size: 3
      role: vms
    - default: false
      deviceClass: hdd
      name: backup
      replicated:
        size: 3
      role: backup
    - default: false
      deviceClass: hdd
      name: images
      replicated:
        size: 3
      role: images
Each Ceph pool, depending on its role, has a default targetSizeRatio value that defines the expected consumption of the total Ceph cluster capacity. The default ratio values for MOSK pools are as follows:

20.0% for a Ceph pool with the role volumes

40.0% for a Ceph pool with the role vms

10.0% for a Ceph pool with the role images

10.0% for a Ceph pool with the role backup
Once all pools are created, verify that an appropriate secret required for a successful deployment of the OpenStack services that rely on Ceph is created in the openstack-ceph-shared namespace:

kubectl -n openstack-ceph-shared get secrets openstack-ceph-keys

Example of a positive system response:

NAME                  TYPE    DATA  AGE
openstack-ceph-keys   Opaque  7     36m
Verify your Ceph cluster as described in Verify Ceph.