Ceph advanced configuration

This section describes how to configure a Ceph cluster through the
KaaSCephCluster (kaascephclusters.kaas.mirantis.com) CR during or after
the deployment of a managed cluster.

The KaaSCephCluster CR spec has two sections, cephClusterSpec and
k8sCluster, and specifies the nodes to deploy as Ceph components. Based
on the roles definitions in the KaaSCephCluster CR, Ceph Controller
automatically labels nodes for Ceph Monitors and Managers. Ceph OSDs are
deployed based on the storageDevices parameter defined for each Ceph
node.

For a default KaaSCephCluster CR, see step 16 in Example of a complete
L2 templates configuration for cluster creation.
To configure a Ceph cluster:
1. Select from the following options:

   * If you do not have a Container Cloud cluster yet, open
     kaascephcluster.yaml.template for editing.

   * If the Container Cloud cluster is already deployed, open the
     KaaSCephCluster CR of a managed cluster for editing:

     kubectl edit kaascephcluster -n <managedClusterProjectName>

     Substitute <managedClusterProjectName> with a corresponding value.
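     For example, assuming a hypothetical managed cluster project named
     managed-ns:

     kubectl edit kaascephcluster -n managed-ns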
2. Using the tables below, configure the Ceph cluster as required.
cephClusterSpec
  Describes a Ceph cluster in the Container Cloud cluster. For details
  on cephClusterSpec parameters, see the tables below.

k8sCluster
  Defines the cluster on which KaaSCephCluster depends. Use the
  k8sCluster parameter if the name or namespace of the corresponding
  Container Cloud cluster differs from the default one:

  spec:
    k8sCluster:
      name: kaas-mgmt
      namespace: default
cephClusterSpec parameters

network
  Specifies networks for the Ceph cluster:

  * clusterNet - specifies a Classless Inter-Domain Routing (CIDR) for
    the Ceph OSD replication network.

    Warning
    To avoid ambiguous behavior of Ceph daemons, do not specify
    0.0.0.0/0 in clusterNet. Otherwise, Ceph daemons can select an
    incorrect public interface that can cause the Ceph cluster to
    become unavailable. The bare metal provider automatically
    translates the 0.0.0.0/0 network range to the default LCM IPAM
    subnet if it exists.

    Note
    The clusterNet and publicNet parameters support multiple IP
    networks. For details, see Enable Ceph multinetwork.

  * publicNet - specifies a CIDR for communication between the service
    and operator.

    Warning
    To avoid ambiguous behavior of Ceph daemons, do not specify
    0.0.0.0/0 in publicNet. Otherwise, Ceph daemons can select an
    incorrect public interface that can cause the Ceph cluster to
    become unavailable. The bare metal provider automatically
    translates the 0.0.0.0/0 network range to the default LCM IPAM
    subnet if it exists.

    Note
    The clusterNet and publicNet parameters support multiple IP
    networks. For details, see Enable Ceph multinetwork.
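  A minimal sketch of a multinetwork configuration, assuming the
  comma-separated CIDR format described in Enable Ceph multinetwork
  (the CIDR values are hypothetical):

  cephClusterSpec:
    network:
      clusterNet: "10.10.10.0/24,10.10.12.0/24"
      publicNet: "10.10.11.0/24,10.10.13.0/24"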
nodes
  Specifies the list of Ceph nodes. For details, see Node parameters.
  The nodes parameter is a map with machine names as keys and Ceph node
  specifications as values, for example:

  nodes:
    master-0: <node spec>
    master-1: <node spec>
    ...
    worker-0: <node spec>

nodeGroups
  Specifies the list of Ceph nodes grouped by node lists or node
  labels. For details, see NodeGroups parameters. The nodeGroups
  parameter is a map with group names as keys and Ceph node
  specifications for defined nodes or node labels as values. For
  example:

  nodeGroups:
    group-1:
      spec: <node spec>
      nodes: ["master-0", "master-1"]
    group-2:
      spec: <node spec>
      label: <nodeLabelExpression>
    ...
    group-3:
      spec: <node spec>
      nodes: ["worker-2", "worker-3"]

  The <nodeLabelExpression> must be a valid Kubernetes label selector
  expression.

pools
  Specifies the list of Ceph pools. For details, see Pool parameters.

objectStorage
  Specifies the parameters for Object Storage, such as RADOS Gateway,
  the Ceph Object Storage. Also specifies the RADOS Gateway Multisite
  configuration. For details, see RADOS Gateway parameters and
  Multisite parameters.

rookConfig
  Optional. String key-value parameter that allows overriding Ceph
  configuration options.

  Since Container Cloud 2.27.0 (Cluster releases 17.2.0 and 16.2.0),
  use the | delimiter to specify the section where a parameter must be
  placed, for example, mon or osd. If required, use the . delimiter to
  specify the exact number of the Ceph OSD or Ceph Monitor to apply an
  option to a specific mon or osd and override the configuration of the
  corresponding section.

  Using this option enables restart of only the specific daemons
  related to the corresponding section. If you do not specify the
  section, a parameter is set in the global section, which includes
  restart of all Ceph daemons except Ceph OSD.

  For example:

  rookConfig:
    "osd_max_backfills": "64"
    "mon|mon_health_to_clog": "true"
    "osd|osd_journal_size": "8192"
    "osd.14|osd_journal_size": "6250"
extraOpts
  Available since Container Cloud 2.25.0. Enables specification of
  extra options for a setup and includes the deviceLabels parameter.
  For details, see ExtraOpts parameters.

ingress
  Enables a custom ingress rule for public access on Ceph services, for
  example, Ceph RADOS Gateway. For details, see Enable TLS for Ceph
  public endpoints.

rbdMirror
  Enables pools mirroring between two interconnected clusters. For
  details, see Enable Ceph RBD mirroring.

clients
  List of Ceph clients. For details, see Clients parameters.

disableOsSharedKeys
  Disables autogeneration of shared Ceph values for OpenStack
  deployments. Set to false by default.

mgr
  Contains the mgrModules parameter that should list the following
  keys:

  * name - Ceph Manager module name
  * enabled - flag that defines whether the Ceph Manager module is
    enabled

  For example:

  mgr:
    mgrModules:
    - name: balancer
      enabled: true
    - name: pg_autoscaler
      enabled: true

  The balancer and pg_autoscaler Ceph Manager modules are enabled by
  default and cannot be disabled.

  Note
  Most Ceph Manager modules require additional configuration that you
  can perform through the ceph-tools pod on a managed cluster.

healthCheck
  Configures health checks and liveness probe settings for Ceph
  daemons. For details, see HealthCheck parameters.

Example configuration:

spec:
  cephClusterSpec:
    network:
      clusterNet: 10.10.10.0/24
      publicNet: 10.10.11.0/24
    nodes:
      master-0: <node spec>
      ...
    pools:
    - <pool spec>
    ...
    rookConfig:
      "mon max pg per osd": "600"
    ...
Node parameters

roles
  Specifies the mon, mgr, or rgw daemon to be installed on a Ceph node.
  You can place the daemons on any nodes upon your decision. Consider
  the following recommendations:

  * The recommended number of Ceph Monitors in a Ceph cluster is 3.
    Therefore, at least 3 Ceph nodes must contain the mon item in the
    roles parameter.
  * The number of Ceph Monitors must be odd.
  * Do not add more than 2 Ceph Monitors at a time and wait until the
    Ceph cluster is Ready before adding more daemons.
  * For better HA and fault tolerance, the number of mgr roles must
    equal the number of mon roles. Therefore, we recommend labeling at
    least 3 Ceph nodes with the mgr role.
  * If rgw roles are not specified, all rgw daemons will spawn on the
    same nodes with mon daemons.

  If a Ceph node contains a mon role, the Ceph Monitor Pod deploys on
  this node.

  If a Ceph node contains a mgr role, it informs the Ceph Controller
  that a Ceph Manager can be deployed on the node. Rook Operator
  selects the first available node to deploy the Ceph Manager on it:

  * Before Container Cloud 2.22.0, only one Ceph Manager is deployed on
    a cluster.
  * Since Container Cloud 2.22.0, two Ceph Managers, active and
    stand-by, are deployed on a cluster.

  If you assign the mgr role to three recommended Ceph nodes, one
  back-up Ceph node is available to redeploy a failed Ceph Manager in
  case of a server outage.
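  For example, a hypothetical nodes section that follows these
  recommendations by assigning both the mon and mgr roles to three
  control plane machines (the machine names are illustrative):

  nodes:
    master-0:
      roles: [mon, mgr]
    master-1:
      roles: [mon, mgr]
    master-2:
      roles: [mon, mgr]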
storageDevices
  Specifies the list of devices to use for Ceph OSD deployment.

  Note
  Since Container Cloud 2.25.0, Mirantis recommends migrating all
  storageDevices items to by-id symlinks as persistent device
  identifiers. For details, refer to Addressing storage devices.

  Includes the following parameters:

  * fullPath - a storage device symlink. Accepts the following values:

    * Since Container Cloud 2.25.0, the device by-id symlink that
      contains the serial number of the physical device and does not
      contain wwn. For example,
      /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543.
      The by-id symlink should be equal to one of the items in the
      Machine status status.providerStatus.hardware.storage.byIDs
      list. Mirantis recommends using this field for defining by-id
      symlinks.
    * The device by-path symlink. For example,
      /dev/disk/by-path/pci-0000:00:11.4-ata-3. Since Container Cloud
      2.25.0, Mirantis does not recommend specifying storage devices
      with device by-path symlinks because such identifiers are not
      persistent and can change at node boot.

    This parameter is mutually exclusive with name.

  * name - a storage device name. Accepts the following values:

    * The device name, for example, sdc. Since Container Cloud 2.25.0,
      Mirantis does not recommend specifying storage devices with
      device names because such identifiers are not persistent and can
      change at node boot.
    * The device by-id symlink that contains the serial number of the
      physical device and does not contain wwn. For example,
      /dev/disk/by-id/nvme-SAMSUNG_MZ1LB3T8HMLA-00007_S46FNY0R394543.
      The by-id symlink should be equal to one of the items in the
      Machine status status.providerStatus.hardware.storage.byIDs
      list. Since Container Cloud 2.25.0, Mirantis recommends using
      the fullPath field for defining by-id symlinks instead.

    This parameter is mutually exclusive with fullPath.

  * config - a map of device configurations that must contain a
    mandatory deviceClass parameter set to hdd, ssd, or nvme. The
    device class must be defined in a pool and can optionally contain
    a metadata device, for example:

    storageDevices:
    - name: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
      config:
        deviceClass: hdd
        metadataDevice: nvme01
        osdsPerDevice: "2"

    The underlying storage format to use for Ceph OSDs is BlueStore.

    The metadataDevice parameter accepts a device name or logical
    volume path for the BlueStore device. Mirantis recommends using
    logical volume paths created on nvme devices. For devices
    partitioning on logical volumes, see Create a custom bare metal
    host profile.

    The osdsPerDevice parameter accepts string-type natural numbers
    and allows splitting one device on several Ceph OSD daemons.
    Mirantis recommends using this parameter only for ssd or nvme
    disks.
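  The example above defines the device through the name field. The
  following hypothetical variant defines the same device through the
  recommended fullPath field instead (the by-id value is illustrative):

  storageDevices:
  - fullPath: /dev/disk/by-id/scsi-SATA_HGST_HUS724040AL_PN1334PEHN18ZS
    config:
      deviceClass: hdd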
crush
  Specifies the explicit key-value CRUSH topology for a node. For
  details, see Ceph official documentation: CRUSH maps. Includes the
  following parameters:

  * datacenter - a physical data center that consists of rooms and
    handles data.
  * room - a room that accommodates one or more racks with hosts.
  * pdu - a power distribution unit (PDU) device that has multiple
    outputs and distributes electric power to racks located within a
    data center.
  * row - a row of computing racks inside room.
  * rack - a computing rack that accommodates one or more hosts.
  * chassis - a bare metal structure that houses or physically
    assembles hosts.
  * region - the geographic location of one or more Ceph Object
    instances within one or more zones.
  * zone - a logical group that consists of one or more Ceph Object
    instances.

  Example configuration:

  crush:
    datacenter: dc1
    room: room1
    pdu: pdu1
    row: row1
    rack: rack1
    chassis: ch1
    region: region1
    zone: zone1
NodeGroups parameters

spec
  Specifies a Ceph node specification. For the entire spec, see Node
  parameters.

nodes
  Specifies a list of names of machines to which the Ceph node spec
  must be applied. Mutually exclusive with the label parameter. For
  example:

  nodeGroups:
    group-1:
      spec: <node spec>
      nodes:
      - master-0
      - master-1
      - worker-0

label
  Specifies a string with a valid label selector expression to select
  machines to which the node spec must be applied. Mutually exclusive
  with the nodes parameter. For example:

  nodeGroups:
    group-2:
      spec: <node spec>
      label: "ceph-storage-node=true,!ceph-control-node"
Pool parameters

name
  Specifies the pool name as a prefix for each Ceph block pool. The
  resulting Ceph block pool name will be <name>-<deviceClass>.

useAsFullName
  Enables the Ceph block pool to use only the name value as a name. The
  resulting Ceph block pool name will be <name> without the deviceClass
  suffix.

role
  Specifies the pool role and is used mostly for Mirantis OpenStack for
  Kubernetes (MOSK) pools.

default
  Defines if the pool and dependent StorageClass should be set as
  default. Must be enabled only for one pool.

deviceClass
  Specifies the device class for the defined pool. Possible values are
  hdd, ssd, and nvme.

replicated
  The replicated parameter is mutually exclusive with erasureCoded and
  includes the following parameters:

  * size - the number of pool replicas.
  * targetSizeRatio - Optional. A float percentage from 0.0 to 1.0,
    which specifies the expected consumption of the total Ceph cluster
    capacity. The default values are as follows:

    * The default ratio of the Ceph Object Storage dataPool is 10.0%.
    * For the pools ratio for MOSK, see MOSK Deployment Guide: Deploy a
      Ceph cluster.

erasureCoded
  Enables the erasure-coded pool. For details, see Rook documentation:
  Erasure coded and Ceph documentation: Erasure coded pool. The
  erasureCoded parameter is mutually exclusive with replicated.

failureDomain
  The failure domain across which the replicas or chunks of data will
  be spread. Set to host by default. The list of possible recommended
  values includes: host, rack, room, and datacenter.

  Caution
  Mirantis does not recommend using the following intermediate topology
  keys: pdu, row, chassis. Consider the rack topology instead. The osd
  failure domain is prohibited.

mirroring
  Optional. Enables the mirroring feature for the defined pool.
  Includes the mode parameter that can be set to pool or image. For
  details, see Enable Ceph RBD mirroring.

allowVolumeExpansion
  Optional. Not updatable as it applies only once. Enables expansion of
  persistent volumes based on the StorageClass of a corresponding pool.
  For details, see Kubernetes documentation: Resizing persistent
  volumes using Kubernetes.

  Note
  A Kubernetes cluster only supports increase of storage size.

rbdDeviceMapOptions
  Optional. Not updatable as it applies only once. Specifies custom
  rbd device map options to use with the StorageClass of a
  corresponding pool. Allows customizing the Kubernetes CSI driver
  interaction with Ceph RBD for the defined StorageClass. For the
  available options, see Ceph documentation: Kernel RBD (KRBD) options.

parameters
  Optional. Available since Container Cloud 2.22.0. Specifies the
  key-value map for the parameters of the Ceph pool. For details, see
  Ceph documentation: Set Pool values.

reclaimPolicy
  Optional. Available since Container Cloud 2.25.0. Specifies the
  reclaim policy for the underlying StorageClass of the pool. Accepts
  Retain and Delete values. Default is Delete if not set.

Example configuration:

pools:
- name: kubernetes
  role: kubernetes
  deviceClass: hdd
  replicated:
    size: 3
    targetSizeRatio: 10.0
  default: true
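As a hypothetical variant, an erasure-coded pool uses the erasureCoded
parameter instead of replicated. The pool name and chunk values below
are illustrative only; for sizing guidance, see the Rook and Ceph
documentation referenced above:

pools:
- name: kubernetes-ec
  role: kubernetes
  deviceClass: hdd
  erasureCoded:
    codingChunks: 1
    dataChunks: 2
  failureDomain: host
  default: false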
To configure additional required pools for MOSK, see MOSK Deployment
Guide: Deploy a Ceph cluster.

Caution
Since Ceph Pacific, the Ceph CSI driver does not propagate the 777
permission on the mount point of persistent volumes based on any
StorageClass of the Ceph pool.

Clients parameters

name
  Ceph client name.

caps
  Key-value parameter with Ceph client capabilities. For details about
  caps, refer to Ceph documentation: Authorization (capabilities).

Example configuration:

clients:
- name: glance
  caps:
    mon: allow r, allow command "osd blacklist"
    osd: profile rbd pool=images
RADOS Gateway parameters

name
  Ceph Object Storage instance name.

dataPool
  Mutually exclusive with the zone parameter. Object storage data pool
  spec that should only contain replicated or erasureCoded and
  failureDomain parameters. The failureDomain parameter may be set to
  osd or host, defining the failure domain across which the data will
  be spread. For dataPool, Mirantis recommends using an erasureCoded
  pool. For details, see Rook documentation: Erasure coding. For
  example:

  cephClusterSpec:
    objectStorage:
      rgw:
        dataPool:
          erasureCoded:
            codingChunks: 1
            dataChunks: 2

metadataPool
  Mutually exclusive with the zone parameter. Object storage metadata
  pool spec that should only contain replicated and failureDomain
  parameters. The failureDomain parameter may be set to osd or host,
  defining the failure domain across which the data will be spread. Can
  use only replicated settings. For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        metadataPool:
          replicated:
            size: 3
          failureDomain: host

  where replicated.size is the number of full copies of data on
  multiple nodes.

  Warning
  When using the non-recommended Ceph pools replicated.size of less
  than 3, Ceph OSD removal cannot be performed. The minimal replica
  size equals a rounded up half of the specified replicated.size.

  For example, if replicated.size is 2, the minimal replica size is 1,
  and if replicated.size is 3, then the minimal replica size is 2. The
  replica size of 1 allows Ceph having PGs with only one Ceph OSD in
  the acting state, which may cause a PG_TOO_DEGRADED health warning
  that blocks Ceph OSD removal. Mirantis recommends setting
  replicated.size to 3 for each Ceph pool.

gateway
  The gateway settings corresponding to the rgw daemon settings.
  Includes the following parameters:

  * port - the port on which the Ceph RGW service will be listening on
    HTTP.
  * securePort - the port on which the Ceph RGW service will be
    listening on HTTPS.
  * instances - the number of pods in the Ceph RGW ReplicaSet. If
    allNodes is set to true, a DaemonSet is created instead.

    Note
    Mirantis recommends using 2 instances for Ceph Object Storage.

  * allNodes - defines whether to start the Ceph RGW pods as a
    DaemonSet on all nodes. The instances parameter is ignored if
    allNodes is set to true.

  For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        gateway:
          allNodes: false
          instances: 1
          port: 80
          securePort: 8443
preservePoolsOnDelete
  Defines whether to delete the data and metadata pools in the rgw
  section if the object storage is deleted. Set this parameter to true
  if you need to store data even if the object storage is deleted.
  However, Mirantis recommends setting this parameter to false.

objectUsers and buckets
  Optional. To create new Ceph RGW resources, such as buckets or users,
  specify the following keys. Ceph Controller will automatically create
  the specified object storage users and buckets in the Ceph cluster.

  * objectUsers - a list of user specifications to create for object
    storage. Contains the following fields:

    * name - a user name to create.
    * displayName - the Ceph user name to display.
    * capabilities - user capabilities:

      * user - admin capabilities to read/write Ceph Object Store
        users.
      * bucket - admin capabilities to read/write Ceph Object Store
        buckets.
      * metadata - admin capabilities to read/write Ceph Object Store
        metadata.
      * usage - admin capabilities to read/write Ceph Object Store
        usage.
      * zone - admin capabilities to read/write Ceph Object Store
        zones.

      The available options are "*", "read", "write", and
      "read, write". For details, see Ceph documentation: Add/remove
      admin capabilities.

    * quotas - user quotas:

      * maxBuckets - the maximum bucket limit for the Ceph user.
        Integer, for example, 10.
      * maxSize - the maximum size limit of all objects across all the
        buckets of a user. String size, for example, 10G.
      * maxObjects - the maximum number of objects across all buckets
        of a user. Integer, for example, 10.

    For example:

    objectUsers:
    - capabilities:
        bucket: '*'
        metadata: read
        user: read
      displayName: test-user
      name: test-user
      quotas:
        maxBuckets: 10
        maxSize: 10G

  * users - a list of strings that contain user names to create for
    object storage.

    Note
    This field is deprecated. Use objectUsers instead. If users is
    specified, it will be automatically transformed to the objectUsers
    section.

  * buckets - a list of strings that contain bucket names to create for
    object storage.

zone
  Optional. Mutually exclusive with metadataPool and dataPool. Defines
  the Ceph Multisite zone where the object storage must be placed.
  Includes the name parameter that must be set to one of the zones
  items. For details, see Enable multisite for Ceph RGW Object Storage.

  For example:

  cephClusterSpec:
    objectStorage:
      multisite:
        zones:
        - name: master-zone
        ...
      rgw:
        zone:
          name: master-zone

SSLCert
  Optional. Custom TLS certificate parameters used to access the Ceph
  RGW endpoint. If not specified, a self-signed certificate will be
  generated.

  For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        SSLCert:
          cacert: |
            -----BEGIN CERTIFICATE-----
            ca-certificate here
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            private TLS certificate here
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            private TLS key here
            -----END RSA PRIVATE KEY-----

For configuration example, see Enable Ceph RGW Object Storage.
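The per-parameter examples above can be combined into one objectStorage
section. The following sketch is illustrative only, assumes a
hypothetical instance name rgw-store, and follows the recommendations
above (erasure-coded data pool, replicated metadata pool, 2 gateway
instances); adjust the values to your environment:

cephClusterSpec:
  objectStorage:
    rgw:
      name: rgw-store
      dataPool:
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
        failureDomain: host
      metadataPool:
        replicated:
          size: 3
        failureDomain: host
      gateway:
        allNodes: false
        instances: 2
        port: 80
        securePort: 8443
      preservePoolsOnDelete: false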
ExtraOpts parameters

deviceLabels
  Available since Cluster releases 17.0.0 and 16.0.0. A key-value
  setting used to assign a specification label to any available device
  on a specific node. These labels can then be utilized within
  nodeGroups or node definitions to eliminate the need to specify
  different devices for each node individually. Additionally, it helps
  in avoiding the use of device names, facilitating the grouping of
  nodes with similar labels.

  Usage:

  extraOpts:
    deviceLabels:
      <node-name>:
        <dev-label>: /dev/disk/by-id/<unique_ID>
      ...
      <node-name-n>:
        <dev-label-n>: /dev/disk/by-id/<unique_ID>
  nodeGroups:
    <group-name>:
      spec:
        storageDevices:
        - devLabel: <dev-label>
        - devLabel: <dev-label-n>
      nodes:
      - <node-name>
      - <node-name-n>

  Before Cluster releases 17.0.0 and 16.0.0, you need to specify the
  device labels for each node separately:

  nodes:
    <node-name>:
      storageDevices:
      - fullPath: /dev/disk/by-id/<unique_ID>
    <node-name-n>:
      storageDevices:
      - fullPath: /dev/disk/by-id/<unique_ID>

customDeviceClasses
  Available since Cluster releases 17.1.0 and 16.1.0 as TechPreview. A
  list of custom device class names to use in the specification.
  Enables you to specify custom names different from the default ones,
  which include ssd, hdd, and nvme, and use them in nodes and pools
  definitions.

  Usage:

  extraOpts:
    customDeviceClasses:
    - <custom_class_name>
  nodes:
    kaas-node-5bgk6:
      storageDevices:
      - config: # existing item
          deviceClass: <custom_class_name>
        fullPath: /dev/disk/by-id/<unique_ID>
  pools:
  - default: false
    deviceClass: <custom_class_name>
    erasureCoded:
      codingChunks: 1
      dataChunks: 2
    failureDomain: host

  Before Cluster releases 17.1.0 and 16.1.0, you cannot specify custom
  class names in the specification.
Multisite parameters

realms
  Technical Preview. The list of realms to use, representing the realm
  namespaces. Includes the following parameters:

  * name - the realm name.
  * pullEndpoint - optional, required only when the master zone is in a
    different storage cluster. The endpoint, access key, and system key
    of the system user from the realm to pull from. Includes the
    following parameters:

    * endpoint - the endpoint of the master zone in the master zone
      group.
    * accessKey - the access key of the system user from the realm to
      pull from.
    * secretKey - the system key of the system user from the realm to
      pull from.

zoneGroups
  Technical Preview. The list of zone groups for realms. Includes the
  following parameters:

  * name - the zone group name.
  * realmName - the realm namespace name to which the zone group
    belongs.

zones
  Technical Preview. The list of zones used within one zone group.
  Includes the following parameters:

  * name - the zone name.
  * metadataPool - the settings used to create the Object Storage
    metadata pools. Must use replication. For details, see Pool
    parameters.
  * dataPool - the settings to create the Object Storage data pool. Can
    use replication or erasure coding. For details, see Pool
    parameters.
  * zoneGroupName - the zone group name.
  * endpointsForZone - available since Container Cloud 2.27.0 (Cluster
    releases 17.2.0 and 16.2.0). The list of all endpoints in the zone
    group. If you use ingress proxy for RGW, the list of endpoints must
    contain that FQDN/IP address to access RGW. By default, if no
    ingress proxy is used, the list of endpoints is set to the IP
    address of the RGW external service. Endpoints must follow the HTTP
    URL format.
For configuration example, see Enable multisite for Ceph RGW Object Storage.
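The following minimal sketch shows how these parameters fit together
under objectStorage.multisite, assuming a single realm, zone group, and
zone with hypothetical names and illustrative pool settings:

cephClusterSpec:
  objectStorage:
    multisite:
      realms:
      - name: realm-1
      zoneGroups:
      - name: zonegroup-1
        realmName: realm-1
      zones:
      - name: master-zone
        zoneGroupName: zonegroup-1
        metadataPool:
          replicated:
            size: 3
          failureDomain: host
        dataPool:
          erasureCoded:
            codingChunks: 1
            dataChunks: 2
          failureDomain: host
    rgw:
      zone:
        name: master-zone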
HealthCheck parameters

daemonHealth
  Specifies health check settings for Ceph daemons. Contains the
  following parameters:

  * status - configures health check settings for Ceph health
  * mon - configures health check settings for Ceph Monitors
  * osd - configures health check settings for Ceph OSDs

  Each parameter allows defining the following settings:

  * disabled - a flag that disables the health check.
  * interval - an interval in seconds or minutes for the health check
    to run. For example, 60s for 60 seconds.
  * timeout - a timeout for the health check in seconds or minutes. For
    example, 60s for 60 seconds.

livenessProbe
  Key-value parameter with liveness probe settings for the defined
  daemon types. Can be one of the following: mgr, mon, osd, or mds.
  Includes the disabled flag and the probe parameter. The probe
  parameter accepts the following options:

  * initialDelaySeconds - the number of seconds after the container has
    started before the liveness probes are initiated. Integer.
  * timeoutSeconds - the number of seconds after which the probe times
    out. Integer.
  * periodSeconds - the frequency (in seconds) to perform the probe.
    Integer.
  * successThreshold - the minimum consecutive successful probes for
    the probe to be considered successful after a failure. Integer.
  * failureThreshold - the minimum consecutive failures for the probe
    to be considered failed after having succeeded. Integer.

  Note
  Ceph Controller specifies the following livenessProbe defaults for
  mon, mgr, osd, and mds (if CephFS is enabled):

  * 5 for timeoutSeconds
  * 5 for failureThreshold

startupProbe
  Key-value parameter with startup probe settings for the defined
  daemon types. Can be one of the following: mgr, mon, osd, or mds.
  Includes the disabled flag and the probe parameter. The probe
  parameter accepts the following options:

  * timeoutSeconds - the number of seconds after which the probe times
    out. Integer.
  * periodSeconds - the frequency (in seconds) to perform the probe.
    Integer.
  * successThreshold - the minimum consecutive successful probes for
    the probe to be considered successful after a failure. Integer.
  * failureThreshold - the minimum consecutive failures for the probe
    to be considered failed after having succeeded. Integer.

Example configuration:

healthCheck:
  daemonHealth:
    mon:
      disabled: false
      interval: 45s
      timeout: 600s
    osd:
      disabled: false
      interval: 60s
    status:
      disabled: true
  livenessProbe:
    mon:
      disabled: false
      probe:
        timeoutSeconds: 10
        periodSeconds: 3
        successThreshold: 3
    mgr:
      disabled: false
      probe:
        timeoutSeconds: 5
        failureThreshold: 5
    osd:
      probe:
        initialDelaySeconds: 5
        timeoutSeconds: 10
        failureThreshold: 7
  startupProbe:
    mon:
      disabled: true
    mgr:
      probe:
        successThreshold: 3
3. Select from the following options:

   * If you are creating a managed cluster, save the updated
     KaaSCephCluster template to the corresponding file and proceed
     with the managed cluster creation.

   * If you are configuring KaaSCephCluster of an existing managed
     cluster, exit the text editor to apply the change.