Pelagia limitations
A Pelagia `CephDeployment` configuration is subject to the following limitations, among others:
- The replication size for any Ceph pool must be set to more than 1.
- Only one CRUSH tree per cluster is supported. Devices can be separated per Ceph pool through Ceph device classes, with only one pool of each type per device class.
- Only the following types of CRUSH buckets are supported:
    - `topology.kubernetes.io/region`
    - `topology.kubernetes.io/zone`
    - `topology.rook.io/datacenter`
    - `topology.rook.io/room`
    - `topology.rook.io/pod`
    - `topology.rook.io/pdu`
    - `topology.rook.io/row`
    - `topology.rook.io/rack`
    - `topology.rook.io/chassis`
- Only IPv4 is supported.
- If two or more Ceph OSDs are located on the same device, there must be no dedicated WAL or DB for this device class.
- Only full collocation or dedicated WAL and DB configurations are supported.
- The minimum size of any defined Ceph OSD device is 5 GB.
- Ceph OSDs support only raw disks as data devices, meaning that no `dm` or `lvm` devices are allowed.
- The Ceph cluster does not support removable devices (with hotplug enabled) for deploying Ceph OSDs.
- When adding a Ceph node with the Ceph Monitor role, if any issues occur with the Ceph Monitor, Rook Ceph Operator removes it and adds a new Ceph Monitor instead, named with the next alphabetic character in order. Therefore, the Ceph Monitor names may not follow alphabetic order. For example, `a`, `b`, `d`, instead of `a`, `b`, `c`.
- Reducing the number of Ceph Monitors is not supported and causes the removal of Ceph Monitor daemons from random nodes.
- Removal of the `mgr` role in the `nodes` section of the `CephDeployment` CR does not remove Ceph Managers. To remove a Ceph Manager from a node, remove it from the `nodes` spec and manually delete the `rook-ceph-mgr` pod in the Rook namespace.
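For illustration, the pool-related limitations above could be expressed in a `CephDeployment` spec along the lines of the following sketch. This is a minimal, hypothetical fragment: the API group/version and the exact field names (`pools`, `deviceClass`, `replicated.size`) are assumptions based on typical Rook-style pool definitions and must be checked against the `CephDeployment` CRD reference.

```yaml
# Hypothetical CephDeployment fragment (field names are assumptions,
# not taken from this page).
apiVersion: lcm.mirantis.com/v1alpha1   # assumed API group/version
kind: CephDeployment
metadata:
  name: ceph-cluster
spec:
  pools:
    - name: kubernetes-hdd
      deviceClass: hdd      # devices separated per pool via device classes;
                            # only one pool of each type per device class
      replicated:
        size: 3             # replication size must be greater than 1
```

The key point the sketch illustrates is that separation of storage devices happens at the device-class level, and that a replicated pool must declare a size above 1.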