This section explains how to create a Ceph cluster on top of a managed cluster
using the Mirantis Container Cloud CLI. As a result, you will deploy a Ceph
cluster with a minimum of three Ceph nodes that provide persistent volumes to
the Kubernetes workloads of your managed cluster.
Note
For the advanced configuration through the KaaSCephCluster custom
resource, see Ceph advanced configuration.
Substitute <managedClusterProject> and <clusterName> with
the corresponding managed cluster namespace and name, respectively.
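The status shown in the example output below can be obtained by inspecting the Cluster object, for example with a command along the following lines (a sketch; the exact verification command may differ in your workflow):

kubectl -n <managedClusterProject> get cluster <clusterName> -o yaml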
Example output:
status:
  providerStatus:
    ready: true
    conditions:
    - message: Helm charts are successfully installed (upgraded).
      ready: true
      type: Helm
    - message: Kubernetes objects are fully up.
      ready: true
      type: Kubernetes
    - message: All requested nodes are ready.
      ready: true
      type: Nodes
    - message: Maintenance state of the cluster is false
      ready: true
      type: Maintenance
    - message: TLS configuration settings are applied
      ready: true
      type: TLS
    - message: Kubelet is Ready on all nodes belonging to the cluster
      ready: true
      type: Kubelet
    - message: Swarm is Ready on all nodes belonging to the cluster
      ready: true
      type: Swarm
    - message: All provider instances of the cluster are Ready
      ready: true
      type: ProviderInstance
    - message: LCM agents have the latest version
      ready: true
      type: LCMAgent
    - message: StackLight is fully up.
      ready: true
      type: StackLight
    - message: OIDC configuration has been applied.
      ready: true
      type: OIDC
    - message: Load balancer 10.100.91.150 for kubernetes API has status HEALTHY
      ready: true
      type: LoadBalancer
Create a YAML file with the Ceph cluster specification:
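A minimal sketch of such a specification is shown below. It assumes the apiVersion kaas.mirantis.com/v1alpha1 and an arbitrary resource name ceph-cluster-managed; verify the exact schema against the KaaSCephCluster reference for your release. The <publicNet> and <clusterNet> placeholders are described after the example.

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: ceph-cluster-managed
  namespace: <managedClusterProject>
spec:
  k8sCluster:
    name: <clusterName>
    namespace: <managedClusterProject>
  cephClusterSpec:
    network:
      publicNet: <publicNet>
      clusterNet: <clusterNet>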
<publicNet> is a CIDR definition or comma-separated list of
CIDR definitions (if the managed cluster uses multiple networks) of
public network for the Ceph data. The values should match the
corresponding values of the cluster Subnet object.
<clusterNet> is a CIDR definition or comma-separated list of
CIDR definitions (if the managed cluster uses multiple networks) of
replication network for the Ceph data. The values should match
the corresponding values of the cluster Subnet object.
Configure Subnet objects for the Storage access network by assigning the
ipam/SVC-ceph-public: "1" and ipam/SVC-ceph-cluster: "1" labels
to the corresponding Subnet objects. For more details, refer to
Create subnets for a managed cluster using CLI, Step 5.
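As an illustration only, assuming <publicSubnetName> and <clusterSubnetName> are the names of your existing Subnet objects and that the subnet resource name resolves to the IPAM Subnet objects on your management cluster, the labels can be assigned similar to the following:

kubectl -n <managedClusterProject> label subnet <publicSubnetName> ipam/SVC-ceph-public=1
kubectl -n <managedClusterProject> label subnet <clusterSubnetName> ipam/SVC-ceph-cluster=1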
Configure the Ceph Manager and Ceph Monitor roles to select the nodes that
will host the Ceph Monitor and Ceph Manager daemons:
Obtain the names of the machines on which to place the Ceph Monitor and
Ceph Manager daemons:
kubectl -n <managedClusterProject> get machine
Add the nodes section with mon and mgr roles defined:
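A sketch of the nodes section, using the placeholders that are substituted below and assuming three Machine objects host these daemons:

spec:
  cephClusterSpec:
    nodes:
      <mgr-node-1>:
        roles:
        - <role-1>
        - <role-2>
      <mgr-node-2>:
        roles:
        - <role-1>
        - <role-2>
      <mgr-node-3>:
        roles:
        - <role-1>
        - <role-2>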
Substitute <mgr-node-X> with the corresponding Machine object
names and <role-X> with the corresponding daemon roles,
for example, mon or mgr.
Configure Ceph OSD daemons for Ceph cluster data storage:
Note
This step involves the deployment of Ceph Monitor and Ceph Manager
daemons on nodes that are different from the ones hosting Ceph cluster
OSDs. However, it is also possible to colocate Ceph OSDs, Ceph Monitor,
and Ceph Manager daemons on the same nodes. You can achieve this by
configuring the roles and storageDevices sections accordingly.
This kind of configuration flexibility is particularly useful in
scenarios such as hyper-converged clusters.
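As a hedged illustration of such a colocated layout, a single node entry could carry both sections; <hyperConvergedNode>, <byIDSymlink>, and the hdd device class are hypothetical values used only for the example:

spec:
  cephClusterSpec:
    nodes:
      <hyperConvergedNode>:
        roles:
        - mon
        - mgr
        storageDevices:
        - fullPath: <byIDSymlink>   # by-id symlink of the disk, see the steps below
          config:
            deviceClass: hdd        # example device class; use the actual disk type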
Warning
The minimal production cluster requires at least three nodes
for Ceph Monitor daemons and three nodes for Ceph OSDs.
Obtain the names of the machines with disks intended for storing Ceph
data:
kubectl -n <managedClusterProject> get machine
For each machine, use status.providerStatus.hardware.storage
to obtain information about node disks:
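For example, assuming a machine with an sdc disk, an excerpt of the output of kubectl -n <managedClusterProject> get machine <machineName> -o yaml might look similar to the following; the exact set of fields can vary between releases:

status:
  providerStatus:
    hardware:
      storage:
      - byIDs:
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        name: sdc
        serialNumber: 2e52abb48862dbdc
        type: hdd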
Select by-id symlinks on the disks to be used in the Ceph cluster.
The symlinks should meet the following requirements:
A by-id symlink should contain
status.providerStatus.hardware.storage.serialNumber
A by-id symlink should not contain wwn
For the example above, if you want to use the sdc disk
to store Ceph data, use the
/dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc symlink.
It is persistent and is not affected by node reboots.
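If you also want to cross-check the symlinks directly on the node, a generic Linux listing such as the following shows them; <serialNumber> is a placeholder for the value from status.providerStatus.hardware.storage.serialNumber:

ls -l /dev/disk/by-id/ | grep <serialNumber>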
Specify selected by-id symlinks in the
spec.cephClusterSpec.nodes.storageDevices.fullPath field
along with the
spec.cephClusterSpec.nodes.storageDevices.config.deviceClass
field:
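A sketch of the resulting storageDevices configuration, using the placeholders that are substituted below:

spec:
  cephClusterSpec:
    nodes:
      <storage-node-1>:
        storageDevices:
        - fullPath: <byIDSymlink-1>
          config:
            deviceClass: <deviceClass-1>
        - fullPath: <byIDSymlink-2>
          config:
            deviceClass: <deviceClass-2>
      <storage-node-2>:
        storageDevices:
        - fullPath: <byIDSymlink-3>
          config:
            deviceClass: <deviceClass-3>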
Substitute the following values:
<storage-node-X> with the corresponding Machine object names
<byIDSymlink-X> with the obtained by-id symlinks from
status.providerStatus.hardware.storage.byIDs
<deviceClass-X> with the obtained disk types from
status.providerStatus.hardware.storage.type
Before Container Cloud 2.25.0
Specify selected by-id symlinks in the
spec.cephClusterSpec.nodes.storageDevices.name field
along with the
spec.cephClusterSpec.nodes.storageDevices.config.deviceClass
field:
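For those versions, a corresponding sketch uses name instead of fullPath, with the same placeholders:

spec:
  cephClusterSpec:
    nodes:
      <storage-node-1>:
        storageDevices:
        - name: <byIDSymlink-1>
          config:
            deviceClass: <deviceClass-1>
        - name: <byIDSymlink-2>
          config:
            deviceClass: <deviceClass-2>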