The procedure below enables you to create a Ceph cluster with a minimum of three
Ceph nodes that provides persistent volumes to the Kubernetes workloads
in the managed cluster.
Substitute <managedClusterProject> and <clusterName> with
the corresponding managed cluster namespace and name.
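The system response below corresponds to a check of the overall status of the
managed cluster; a command along the following lines produces it:

kubectl -n <managedClusterProject> get cluster <clusterName> -o yaml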
Example of system response:
status:
  providerStatus:
    ready: true
    conditions:
    - message: Helm charts are successfully installed(upgraded).
      ready: true
      type: Helm
    - message: Kubernetes objects are fully up.
      ready: true
      type: Kubernetes
    - message: All requested nodes are ready.
      ready: true
      type: Nodes
    - message: Maintenance state of the cluster is false
      ready: true
      type: Maintenance
    - message: TLS configuration settings are applied
      ready: true
      type: TLS
    - message: Kubelet is Ready on all nodes belonging to the cluster
      ready: true
      type: Kubelet
    - message: Swarm is Ready on all nodes belonging to the cluster
      ready: true
      type: Swarm
    - message: All provider instances of the cluster are Ready
      ready: true
      type: ProviderInstance
    - message: LCM agents have the latest version
      ready: true
      type: LCMAgent
    - message: StackLight is fully up.
      ready: true
      type: StackLight
    - message: OIDC configuration has been applied.
      ready: true
      type: OIDC
    - message: Load balancer 10.100.91.150 for kubernetes API has status HEALTHY
      ready: true
      type: LoadBalancer
Create a YAML file with the Ceph cluster specification:
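The structure below is a minimal sketch, assuming the KaaSCephCluster resource
of the kaas.mirantis.com/v1alpha1 API version and the network field names shown;
verify them against the specification reference for your release. The
<cephClusterName> object name is hypothetical.

apiVersion: kaas.mirantis.com/v1alpha1
kind: KaaSCephCluster
metadata:
  name: <cephClusterName>
  namespace: <managedClusterProject>
spec:
  k8sCluster:
    name: <clusterName>
    namespace: <managedClusterProject>
  cephClusterSpec:
    network:
      publicNet: <publicNet>
      clusterNet: <clusterNet>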
<publicNet> is a CIDR definition or a comma-separated list of
CIDR definitions (if the managed cluster uses multiple networks) of
the public network for Ceph data. The values should match the
corresponding values of the cluster Subnet object.
<clusterNet> is a CIDR definition or a comma-separated list of
CIDR definitions (if the managed cluster uses multiple networks) of
the replication network for Ceph data. The values should match
the corresponding values of the cluster Subnet object.
Configure Subnet objects for the Storage access network by setting
the ipam/SVC-ceph-public: "1" and ipam/SVC-ceph-cluster: "1" labels
on the corresponding Subnet objects. For more details, refer to
Create subnets for a managed cluster using CLI, Step 5.
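For illustration, labels of this kind can be set with kubectl. The
<cephPublicSubnet> and <cephClusterSubnet> names are hypothetical, and the
Subnet resources are assumed to belong to the ipam.mirantis.com API group:

kubectl -n <managedClusterProject> label subnets.ipam.mirantis.com <cephPublicSubnet> ipam/SVC-ceph-public="1"
kubectl -n <managedClusterProject> label subnets.ipam.mirantis.com <cephClusterSubnet> ipam/SVC-ceph-cluster="1"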
Configure the Ceph Manager and Ceph Monitor roles to select the nodes that
will host the Ceph Monitor and Ceph Manager daemons:
Obtain the names of machines to place Ceph Monitor and Ceph Manager
daemons at:
kubectl -n <managedClusterProject> get machine
Add the nodes section with mon and mgr roles defined:
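A sketch of the nodes section under cephClusterSpec, using the placeholders
described below; verify the exact layout against the KaaSCephCluster
specification for your release:

spec:
  cephClusterSpec:
    nodes:
      <mgr-node-1>:
        roles:
        - <role-1>
        - <role-2>
      <mgr-node-2>:
        roles:
        - <role-1>
        - <role-2>
      <mgr-node-3>:
        roles:
        - <role-1>
        - <role-2>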
Substitute <mgr-node-X> with the corresponding Machine object
names and <role-X> with the corresponding roles of daemon placement,
for example, mon or mgr.
Configure Ceph OSD daemons for Ceph cluster data storage:
Note
This step involves the deployment of Ceph Monitor and Ceph Manager
daemons on nodes that are different from the ones hosting Ceph cluster
OSDs. However, you can also colocate Ceph OSDs, Ceph Monitor, and
Ceph Manager daemons on the same nodes by configuring the roles and
storageDevices sections accordingly. This kind of configuration
flexibility is particularly useful in scenarios such as hyper-converged
clusters.
Warning
The minimal production cluster requires at least three nodes
for Ceph Monitor daemons and three nodes for Ceph OSDs.
Obtain the names of machines with disks intended for storing Ceph data:
kubectl -n <managedClusterProject> get machine
For each machine, use status.providerStatus.hardware.storage
to obtain information about node disks:
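One way to inspect this field is to print the Machine object in YAML:

kubectl -n <managedClusterProject> get machine <machineName> -o yaml

The fragment below is illustrative only: the field names and the wwn entry are
assumptions, while the scsi by-id symlink is the one referenced in the selection
step that follows.

status:
  providerStatus:
    hardware:
      storage:
      - name: /dev/sdc
        serialNumber: 2e52abb48862dbdc
        byIDs:
        - /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        - /dev/disk/by-id/wwn-0x2e52abb48862dbdc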
Select the by-id symlinks of the disks to be used in the Ceph cluster.
The symlinks must meet the following requirements:
A by-id symlink must contain
status.providerStatus.hardware.storage.serialNumber
A by-id symlink must not contain wwn
For the example above, to store Ceph data on the sdc disk,
select the /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
symlink. It is persistent and is not affected by node reboots.
Depending on the product release, specify the selected by-id symlinks in
either the spec.cephClusterSpec.nodes.storageDevices.fullPath field
or the spec.cephClusterSpec.nodes.storageDevices.name field,
along with the
spec.cephClusterSpec.nodes.storageDevices.config.deviceClass
field:
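The layout below is a sketch only: <storage-node-1> is a hypothetical Machine
name and hdd is an illustrative device class; verify the exact field layout
against the KaaSCephCluster specification for your release.

spec:
  cephClusterSpec:
    nodes:
      <storage-node-1>:
        storageDevices:
        # Variant with the fullPath field:
        - fullPath: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
          config:
            deviceClass: hdd
        # Variant with the name field (use instead of the block above):
        # - name: /dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_2e52abb48862dbdc
        #   config:
        #     deviceClass: hdd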
Each Ceph pool, depending on its role, has a default targetSizeRatio
value that defines the expected consumption of the total Ceph cluster
capacity. The default ratio values for MOSK pools are
as follows:
20.0% for a Ceph pool with the role volumes
40.0% for a Ceph pool with the role vms
10.0% for a Ceph pool with the role images
10.0% for a Ceph pool with the role backup
Optional. Configure Ceph Block Pools to use RBD. For the detailed
configuration, refer to Pool parameters.
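The layout below is a sketch of a pools section, assuming the field names
shown and illustrative replication size and device class values; refer to
Pool parameters for the authoritative set of fields.

spec:
  cephClusterSpec:
    pools:
    - name: volumes
      role: volumes
      deviceClass: hdd
      replicated:
        size: 3
      default: true
    - name: vms
      role: vms
      deviceClass: hdd
      replicated:
        size: 3
      default: false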
Once all pools are created, verify that an appropriate secret required for
a successful deployment of the OpenStack services that rely on Ceph is
created in the openstack-ceph-shared namespace:
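A generic check of the namespace contents (the exact secret name depends on
the release):

kubectl -n openstack-ceph-shared get secrets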
Mirantis highly recommends adding a Ceph cluster using the CLI
instead of the web UI.
The web UI capabilities for adding a Ceph cluster are limited and lack
flexibility in defining Ceph cluster specifications.
For example, if an error occurs while adding a Ceph cluster using the
web UI, usually you can address it only through the CLI.
The web UI functionality for managing Ceph clusters will be
deprecated in one of the following releases.
Log in to the Container Cloud web UI with the m:kaas:namespace@operator or
m:kaas:namespace@writer permissions.
Switch to the required project using the Switch Project
action icon located at the top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name.
The Cluster page with the Machines and
Ceph clusters lists opens.
In the Ceph Clusters block, click Create Cluster.
Configure the Ceph cluster in the Create New Ceph Cluster
wizard that opens:
Cluster Network
Replication network for Ceph OSDs. Must contain the CIDR definition
and match the corresponding values of the cluster L2Template
object or the environment network values.
Public Network
Public network for Ceph data. Must contain the CIDR definition and
match the corresponding values of the cluster L2Template object
or the environment network values.
Enable OSDs LCM
Select to enable LCM for Ceph OSDs.
Machines / Machine #1-3
Select machine
Select the name of the Kubernetes machine that will host
the corresponding Ceph node in the Ceph cluster.
Manager, Monitor
Select the required Ceph services to install on the Ceph node.
Devices
Select the disk that Ceph will use.
Warning
Do not select the device used for system services,
for example, sda.
Warning
A Ceph cluster does not support removable devices, that is, devices
with the hotplug functionality enabled. To use such devices as Ceph OSD
data devices, make them non-removable or disable the hotplug
functionality in the BIOS settings for the disks that are configured
to be used as Ceph OSD data devices.
Enable Object Storage
Select to enable the single-instance RGW Object Storage.
To add more Ceph nodes to the new Ceph cluster, click +
next to any Ceph Machine title in the Machines tab.
Configure a Ceph node as required.
Warning
Do not add more than 3 Manager and/or Monitor
services to the Ceph cluster.
After you add and configure all nodes in your Ceph cluster, click
Create.
Each Ceph pool, depending on its role, has a default targetSizeRatio
value that defines the expected consumption of the total Ceph cluster
capacity. The default ratio values for MOSK pools are
as follows:
20.0% for a Ceph pool with role volumes
40.0% for a Ceph pool with role vms
10.0% for a Ceph pool with role images
10.0% for a Ceph pool with role backup
Once all pools are created, verify that an appropriate secret required for
a successful deployment of the OpenStack services that rely on Ceph is
created in the openstack-ceph-shared namespace:
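As in the CLI-based procedure, a generic check of the namespace contents can
be used (the exact secret name depends on the release):

kubectl -n openstack-ceph-shared get secrets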