A Ceph cluster configuration in Mirantis Container Cloud includes but is not limited to the following limitations:
Only one Ceph Controller per managed cluster and only one Ceph cluster per Ceph Controller are supported.
The replication size for any Ceph pool must be set to more than 1.
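For illustration, a replicated pool with a size greater than 1 can be defined in the pools section of the KaaSCephCluster specification. The following is a minimal sketch; the pool name, device class, and size value are assumptions, not prescribed values:

    spec:
      cephClusterSpec:
        pools:
        - name: kubernetes          # illustrative pool name
          deviceClass: hdd          # device class served by this pool
          replicated:
            size: 3                 # replication size must be greater than 1
          default: true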
Only one CRUSH tree per cluster is supported. The separation of devices per Ceph pool is supported through device classes, with only one pool of each type per device class.
Only the following types of CRUSH buckets are supported:
Only IPv4 is supported.
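As a sketch of the IPv4 requirement, the Ceph public and cluster networks are specified with IPv4 CIDRs. The field layout below assumes the network section of the KaaSCephCluster specification, and the CIDR values are placeholders:

    spec:
      cephClusterSpec:
        network:
          publicNet: 10.10.0.0/24   # IPv4 CIDR only; placeholder value
          clusterNet: 10.11.0.0/24  # IPv4 CIDR only; placeholder value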
If two or more Ceph OSDs are located on the same device, there must be no dedicated WAL or DB for this class.
Only full collocation or dedicated WAL and DB configurations are supported.
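A dedicated WAL and DB layout is expressed per data device in the nodes section. The sketch below assumes the storageDevices format of the KaaSCephCluster specification; the machine and device names are placeholders:

    nodes:
      worker-node-1:                 # hypothetical machine name
        storageDevices:
        - name: sdb                  # raw data device
          config:
            deviceClass: hdd
            metadataDevice: nvme0n1  # dedicated WAL/DB device; omit it for full collocation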
The minimum size of any defined Ceph OSD device is 5 GB.
Reducing the number of Ceph Monitors is not supported and causes the removal of Ceph Monitor daemons from random nodes.
Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace.
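A minimal example of the manual deletion step, assuming the default rook-ceph namespace and the standard app=rook-ceph-mgr pod label used by Rook; <node-name> is a placeholder for the affected node:

    # Delete the Ceph Manager pod on the node removed from the nodes spec
    kubectl -n rook-ceph delete pod -l app=rook-ceph-mgr \
      --field-selector spec.nodeName=<node-name>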
When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow the alphabetical order. For example, d instead of c.
A Ceph cluster does not support removable devices (with hotplug enabled) for deploying Ceph OSDs. This limitation was lifted in Cluster releases 14.0.1 and 15.0.1 delivered in Container Cloud 2.24.2.
Ceph OSDs support only raw disks as data devices, meaning that no lvm devices are allowed.
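To verify that a candidate data device is a raw, non-removable disk before assigning it to a Ceph OSD, a quick check such as the following can be used; the device path is a placeholder:

    # TYPE must be "disk" (not "lvm") and RM (removable) must be 0
    lsblk -o NAME,TYPE,RM,SIZE /dev/sdb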
Ceph does not support allocation of Ceph RGW pods on nodes where the Federal Information Processing Standard (FIPS) mode is enabled.