A Ceph cluster configuration in Mirantis Container Cloud includes, but is not limited to, the following limitations:
Only one Ceph controller per management, regional, or managed cluster is supported, and only one Ceph cluster per Ceph controller.
The replication size for any Ceph pool must be set to a value greater than 1.
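For illustration, a pool definition in the KaaSCephCluster custom resource might set the replication size as follows. This is a sketch: the pool name and device class are placeholder values, and the field layout follows the common KaaSCephCluster structure.

```yaml
spec:
  cephClusterSpec:
    pools:
      - name: kubernetes      # placeholder pool name
        deviceClass: hdd      # assumed device class
        default: true
        replicated:
          size: 3             # must be greater than 1
```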
Only one CRUSH tree per cluster is supported. Devices can be separated per Ceph pool through device classes, with only one pool of each type per device class.
All CRUSH rules must have the same
Only the following types of CRUSH buckets are supported:
Consuming an existing Ceph cluster is not supported.
CephFS is not supported.
Only IPv4 is supported.
If two or more Ceph OSDs are located on the same device, there must be no dedicated WAL or DB for this class.
Only full collocation or dedicated WAL and DB configurations are supported.
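As a sketch, the two supported layouts could look as follows in a node's storageDevices definition. The metadataDevice field name and the device names are assumptions for illustration only.

```yaml
# Full collocation: data, WAL, and DB share the same device
storageDevices:
  - name: sdb                  # placeholder device
    config:
      deviceClass: hdd
---
# Dedicated WAL and DB on a separate device (assumed field name)
storageDevices:
  - name: sdb                  # placeholder data device
    config:
      deviceClass: hdd
      metadataDevice: nvme0n1  # placeholder WAL/DB device
```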
The minimum size of any defined Ceph OSD device is 5 GB.
Reducing the number of Ceph Monitors is not supported and causes the removal of Ceph Monitor daemons from random nodes.
Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace.
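For example, a nodes section that assigns the mgr role might look as follows (the machine name and device name are placeholders):

```yaml
spec:
  cephClusterSpec:
    nodes:
      machine-worker-1:        # placeholder machine name
        roles:
          - mon
          - mgr                # removing this entry alone does not remove the Ceph Manager
        storageDevices:
          - name: sdb          # placeholder device
            config:
              deviceClass: hdd
```

After removing the mgr role from this spec, the leftover Ceph Manager pod still must be deleted manually, for example with kubectl -n rook-ceph delete pod against the corresponding mgr pod.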
When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow the alphabetical order. For example, d instead of c.