Limitations
A Ceph cluster configuration in Mirantis Container Cloud is subject to the following limitations, among others:
Only one Ceph Controller per managed cluster and only one Ceph cluster per Ceph Controller are supported.
The replication size for any Ceph pool must be set to more than 1.
Only one CRUSH tree per cluster is supported. The separation of devices per Ceph pool is supported through device classes, with only one pool of each type for a device class.
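For illustration, a pool definition in the KaaSCephCluster CR that satisfies both of the above points might look like the following minimal sketch. The pool name, device class, and exact field layout are assumptions based on typical examples, not a definitive configuration:

  spec:
    cephClusterSpec:
      pools:
      - name: kubernetes          # illustrative pool name
        role: kubernetes
        deviceClass: hdd          # devices are separated per pool through device classes
        replicated:
          size: 3                 # replication size must be greater than 1
        default: true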
All CRUSH rules must have the same failure_domain.
Only the following types of CRUSH buckets are supported:
topology.kubernetes.io/region
topology.kubernetes.io/zone
topology.rook.io/datacenter
topology.rook.io/room
topology.rook.io/pod
topology.rook.io/pdu
topology.rook.io/row
topology.rook.io/rack
topology.rook.io/chassis
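These bucket types map to Kubernetes node labels that Rook uses to build the CRUSH hierarchy. As a hypothetical example, a node labeled with a zone and a rack (the node name and label values are illustrative) could look as follows:

  apiVersion: v1
  kind: Node
  metadata:
    name: worker-0                           # hypothetical node name
    labels:
      topology.kubernetes.io/zone: zone-a    # zone bucket
      topology.rook.io/rack: rack-1          # rack bucket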
Consuming an existing Ceph cluster is not supported.
CephFS is not fully supported and is available as Technology Preview only.
Only IPv4 is supported.
If two or more Ceph OSDs are located on the same device, there must be no dedicated WAL or DB for this class.
Only full collocation or dedicated WAL and DB configurations are supported.
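As a sketch, the two supported layouts could be expressed in the nodes section of the KaaSCephCluster CR roughly as follows. The machine name, device names, and the metadataDevice field are assumptions based on typical examples:

  spec:
    cephClusterSpec:
      nodes:
        machine-example:                      # hypothetical machine name
          storageDevices:
          # Full collocation: data, WAL, and DB reside on the same device
          - name: sdb
            config:
              deviceClass: hdd
          # Dedicated WAL and DB: metadata placed on a separate device
          - name: sdc
            config:
              deviceClass: hdd
              metadataDevice: nvme0n1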
The minimum size of any defined Ceph OSD device is 5 GB.
Reducing the number of Ceph Monitors is not supported and causes the removal of Ceph Monitor daemons from random nodes.
Removal of the mgr role in the nodes section of the KaaSCephCluster CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the nodes spec and manually delete the mgr pod in the Rook namespace.
When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, rook-ceph removes it and adds a new Ceph Monitor instead, named using the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow the alphabetical order. For example, a, b, d instead of a, b, c.
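Both of the previous two points concern the roles list in the nodes section of the KaaSCephCluster CR. A hypothetical fragment that assigns the Ceph Monitor and Ceph Manager roles to a machine (the machine name and device are illustrative) could look as follows:

  spec:
    cephClusterSpec:
      nodes:
        machine-example:        # hypothetical machine name
          roles:
          - mon                 # Ceph Monitor role
          - mgr                 # removing this entry alone does not delete the mgr pod
          storageDevices:
          - name: sdb
            config:
              deviceClass: hdd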
A Ceph cluster does not support removable devices (with hotplug enabled) for deploying Ceph OSDs.