Ceph advanced configuration¶
This section describes how to configure a Ceph cluster through the KaaSCephCluster (kaascephclusters.kaas.mirantis.com) CR during or after the deployment of a MOSK cluster.
The KaaSCephCluster CR spec has two sections, cephClusterSpec and k8sCluster, and specifies the nodes to deploy as Ceph components. Based on the role definitions in the KaaSCephCluster CR, Ceph Controller automatically labels nodes for Ceph Monitors and Managers. Ceph OSDs are deployed based on the storageDevices parameter defined for each Ceph node.
For a default KaaSCephCluster CR, see Container Cloud documentation: Example of a complete L2 templates configuration for cluster creation.
Configure a Ceph cluster¶
Select from the following options:

- If you do not have a cluster yet, open kaascephcluster.yaml.template for editing.
- If the cluster is already deployed, open the KaaSCephCluster CR for editing:

  kubectl edit kaascephcluster -n <ClusterProjectName>

  Substitute <ClusterProjectName> with a corresponding value.
Using the tables below, configure the Ceph cluster as required.
Select from the following options:

- If you are creating a cluster, save the updated KaaSCephCluster template to the corresponding file and proceed with the cluster creation.
- If you are configuring KaaSCephCluster of an existing cluster, exit the text editor to apply the change.
Ceph configuration options¶
The KaaSCephCluster CR spec includes the following top-level sections:

cephClusterSpec
Describes a Ceph cluster in the MOSK cluster. For details on cephClusterSpec parameters, see the tables below.

k8sCluster
Defines the cluster on which the KaaSCephCluster resource depends. For example:

  k8sCluster:
    name: kaas-mgmt
    namespace: default
The cephClusterSpec section includes the following parameters:

network
Specifies networks for the Ceph cluster:
- clusterNet - CIDR of the Ceph cluster (replication) network
- publicNet - CIDR of the Ceph public network

nodes
Specifies the list of Ceph nodes as a map of machine names to Ceph node specifications. For details, see Node parameters. For example:

  nodes:
    master-0:
      <node spec>
    master-1:
      <node spec>
    ...
    worker-0:
      <node spec>

nodeGroups
Specifies the list of Ceph nodes grouped by node lists or node labels. For details, see NodeGroups parameters. For example:

  nodeGroups:
    group-1:
      spec: <node spec>
      nodes: ["master-0", "master-1"]
    group-2:
      spec: <node spec>
      label: <nodeLabelExpression>
    ...
    group-3:
      spec: <node spec>
      nodes: ["worker-2", "worker-3"]

pools
Specifies the list of Ceph pools. For details, see Pool parameters.

objectStorage
Specifies the parameters for Object Storage, such as RADOS Gateway, the Ceph Object Storage. Also specifies the RADOS Gateway Multisite configuration. For details, see RADOS Gateway parameters and Multisite parameters.

rookConfig
Optional. String key-value parameter that allows overriding Ceph configuration options. Since MOSK 24.2, use the "section|key" syntax to target a specific Ceph daemon type or daemon. The use of this option enables restart of only the specific daemons related to the corresponding section. If you do not specify the section, a parameter is set in the global section. For example:

  rookConfig:
    "osd_max_backfills": "64"
    "mon|mon_health_to_clog": "true"
    "osd|osd_journal_size": "8192"
    "osd.14|osd_journal_size": "6250"

extraOpts
Available since MOSK 23.3. Enables specification of extra options for a setup. Includes the deviceLabels and customDeviceClasses parameters. For details, see ExtraOpts parameters.

ingress
Enables a custom ingress rule for public access on Ceph services, for example, Ceph RADOS Gateway. For details, see Enable TLS for Ceph public endpoints.

rbdMirror
Enables pools mirroring between two interconnected clusters. For details, see Enable Ceph RBD mirroring.

clients
List of Ceph clients. For details, see Clients parameters.

disableOsSharedKeys
Disables autogeneration of shared Ceph values for OpenStack deployments. Set to true to disable.

mgr
Contains the mgrModules parameter with the list of Ceph Manager modules to enable or disable. For example:

  mgr:
    mgrModules:
    - name: balancer
      enabled: true
    - name: pg_autoscaler
      enabled: true

Note: Most Ceph Manager modules require additional configuration that you can perform through the rookConfig parameter.

healthCheck
Configures health checks and liveness probe settings for Ceph daemons. For details, see HealthCheck parameters.

Example configuration:

  spec:
    cephClusterSpec:
      network:
        clusterNet: 10.10.10.0/24
        publicNet: 10.10.11.0/24
      nodes:
        master-0:
          <node spec>
        ...
      pools:
      - <pool spec>
      ...
      rookConfig:
        "mon max pg per osd": "600"
        ...
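The "section|key" convention used by rookConfig can be illustrated with a short, hypothetical Python sketch (not part of Ceph or MOSK tooling) that splits each key into the target section and the option name:

```python
# Hypothetical helper illustrating how rookConfig keys are structured:
#   "option"         -> applies to the global section
#   "mon|option"     -> applies to all mon daemons
#   "osd.14|option"  -> applies to the single daemon osd.14
def parse_rook_config_key(key: str):
    """Split a rookConfig key into (section, option)."""
    if "|" in key:
        section, option = key.split("|", 1)
    else:
        section, option = "global", key
    return section, option

rook_config = {
    "osd_max_backfills": "64",
    "mon|mon_health_to_clog": "true",
    "osd|osd_journal_size": "8192",
    "osd.14|osd_journal_size": "6250",
}

for key in rook_config:
    print(parse_rook_config_key(key))
```

Keys without a section, such as osd_max_backfills above, land in the global section and apply to all daemons.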
Node parameters¶

roles
Specifies the explicit list of daemon roles, such as mon, mgr, or rgw, to deploy on the node.

storageDevices
Specifies the list of devices to use for Ceph OSD deployment. Each item identifies a device by its by-id path (fullPath) or by a device label (devLabel), and can include a config section with parameters such as deviceClass.
Note: Since MOSK 23.3, Mirantis recommends migrating all storageDevices items to the by-id device identification. For details, refer to Container Cloud documentation: Addressing storage devices.

crush
Specifies the explicit key-value CRUSH topology for a node. For details, see Ceph official documentation: CRUSH maps. Includes the datacenter, room, pdu, row, rack, chassis, region, and zone parameters. Example configuration:

  crush:
    datacenter: dc1
    room: room1
    pdu: pdu1
    row: row1
    rack: rack1
    chassis: ch1
    region: region1
    zone: zone1
NodeGroups parameters¶

spec
Specifies a Ceph node specification. For the entire spec, see Node parameters.

nodes
Specifies a list of names of machines to which the Ceph node spec must be applied. Mutually exclusive with label. For example:

  nodeGroups:
    group-1:
      spec: <node spec>
      nodes:
      - master-0
      - master-1
      - worker-0

label
Specifies a string with a valid label selector expression to select machines to which the node spec must be applied. Mutually exclusive with nodes. For example:

  nodeGroups:
    group-2:
      spec: <node spec>
      label: "ceph-storage-node=true,!ceph-control-node"
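As an illustration of how a selector such as "ceph-storage-node=true,!ceph-control-node" matches machines, the following hypothetical Python sketch evaluates the two selector forms used above (equality and negation). The real matching is performed by Ceph Controller using Kubernetes label selector semantics; this is only a simplified model:

```python
def matches(selector: str, labels: dict) -> bool:
    """Evaluate a comma-separated label selector with two forms:
    key=value (label must equal value) and !key (label must be absent)."""
    for term in selector.split(","):
        term = term.strip()
        if term.startswith("!"):
            if term[1:] in labels:          # label must not be present
                return False
        elif "=" in term:
            key, value = term.split("=", 1)
            if labels.get(key) != value:    # label must match exactly
                return False
    return True

selector = "ceph-storage-node=true,!ceph-control-node"
print(matches(selector, {"ceph-storage-node": "true"}))   # storage-only machine matches
print(matches(selector, {"ceph-storage-node": "true",
                         "ceph-control-node": ""}))       # control-node label excludes it
```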
Pool parameters¶

name
Mandatory. Specifies the pool name as a prefix for each Ceph block pool. The resulting Ceph block pool name is <name>-<deviceClass>.

useAsFullName
Optional. Enables the Ceph block pool to use only the name value as the pool name, without the deviceClass suffix.

role
Mandatory. Specifies the pool role. Used mostly for MOSK pools.

default
Mandatory. Defines if the pool and dependent StorageClass should be set as default. Must be enabled only for one pool.

deviceClass
Mandatory. Specifies the device class for the defined pool. Possible values are hdd, ssd, and nvme.

replicated
Mandatory, mutually exclusive with erasureCoded. Includes the size parameter that defines the number of pool replicas and the optional targetSizeRatio parameter.

erasureCoded
Mandatory, mutually exclusive with replicated. Includes the dataChunks and codingChunks parameters.

failureDomain
Mandatory. The failure domain across which the replicas or chunks of data will be spread. Set to a CRUSH topology level, for example, osd, host, or rack.
Caution: Mirantis does not recommend using intermediate topology keys.

mirroring
Optional. Enables the mirroring feature for the defined pool. Includes the mode parameter.

allowVolumeExpansion
Optional. Not updatable as it applies only once. Enables expansion of persistent volumes based on the StorageClass of the defined pool.
Note: A Kubernetes cluster only supports increase of storage size.

rbdDeviceMapOptions
Optional. Not updatable as it applies only once. Specifies custom RBD device map options to use with the StorageClass of the defined pool.

parameters
Optional. Available since MOSK 23.1. Specifies the key-value map for the parameters of the Ceph pool. For details, see Ceph documentation: Set Pool values.

reclaimPolicy
Optional. Available since MOSK 23.3. Specifies the reclaim policy for the underlying StorageClass.

Example configuration:

  pools:
  - name: kubernetes
    role: kubernetes
    deviceClass: hdd
    replicated:
      size: 3
      targetSizeRatio: 10.0
    default: true

To configure additional required pools for MOSK, see Add a Ceph cluster.

Caution: Since Ceph Pacific, the Ceph CSI driver does not propagate the 777 permission on the mount point of persistent volumes based on any StorageClass of the Ceph pool.
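To compare the replicated and erasureCoded layouts above, the raw-space overhead can be estimated with simple arithmetic (a back-of-the-envelope sketch, not an official sizing tool):

```python
def replicated_overhead(size: int) -> float:
    # Each object is stored `size` times in full.
    return float(size)

def erasure_coded_overhead(data_chunks: int, coding_chunks: int) -> float:
    # Each object is split into data chunks plus parity (coding) chunks;
    # raw usage is (data + coding) / data per byte of stored data.
    return (data_chunks + coding_chunks) / data_chunks

# replicated: size: 3, as in the example pool above
print(replicated_overhead(3))        # 3.0x raw space per stored byte

# erasureCoded: dataChunks: 2, codingChunks: 1, as in the RGW dataPool example
print(erasure_coded_overhead(2, 1))  # 1.5x raw space per stored byte
```

Erasure coding trades lower raw-space overhead for higher CPU cost and slower recovery, which is why it is typically used for object storage data pools rather than latency-sensitive block pools.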
Clients parameters¶

name
Ceph client name.

caps
Key-value parameter with Ceph client capabilities. For details about capabilities, see the Ceph documentation on user capabilities.

Example configuration:

  clients:
  - name: glance
    caps:
      mon: allow r, allow command "osd blacklist"
      osd: profile rbd pool=images
RADOS Gateway parameters¶

name
Ceph Object Storage instance name.

dataPool
Mutually exclusive with the zone parameter. Specifies the data pool spec for Object Storage. Must contain replicated or erasureCoded. For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        dataPool:
          erasureCoded:
            codingChunks: 1
            dataChunks: 2

metadataPool
Mutually exclusive with the zone parameter. Specifies the metadata pool spec for Object Storage. Must contain replicated. For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        metadataPool:
          replicated:
            size: 3
          failureDomain: host

where replicated.size is the number of full copies of data on multiple nodes.
Warning: When using the non-recommended Ceph pools replicated.size of less than 3, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size. For example, if replicated.size is 2, the minimal replica size is 1.

gateway
The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:
- port - the port on which the Ceph RGW service is exposed
- securePort - the port on which the TLS-enabled Ceph RGW service is exposed
- instances - the number of Ceph RGW instances
- allNodes - defines whether to start Ceph RGW on all nodes
For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        gateway:
          allNodes: false
          instances: 1
          port: 80
          securePort: 8443

preservePoolsOnDelete
Defines whether to delete the data and metadata pools in the rgw section if the object storage is deleted.

objectUsers and buckets
Optional. To create new Ceph RGW resources, such as buckets or users, specify these keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

zone
Optional. Mutually exclusive with metadataPool and dataPool. Specifies the multisite zone to use. The zone must be defined in the zones list of the multisite configuration. For example:

  cephClusterSpec:
    objectStorage:
      multisite:
        zones:
        - name: master-zone
        ...
      rgw:
        zone:
          name: master-zone

SSLCert
Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated. For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        SSLCert:
          cacert: |
            -----BEGIN CERTIFICATE-----
            ca-certificate here
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            private TLS certificate here
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            private TLS key here
            -----END RSA PRIVATE KEY-----
For configuration example, see Enable Ceph RGW Object Storage.
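If you only need a throwaway self-signed certificate to test the SSLCert parameters, a standard OpenSSL invocation can generate one. The common name rgw.example.com below is a placeholder; substitute the actual Ceph RGW endpoint name:

```shell
# Generate a self-signed certificate and private key for testing only.
# The resulting tls.crt maps to tlsCert (and, for a self-signed setup,
# typically also to cacert); tls.key maps to tlsKey.
openssl req -x509 -newkey rsa:2048 -sha256 -days 365 -nodes \
  -keyout tls.key -out tls.crt \
  -subj "/CN=rgw.example.com"
```

For production, use a certificate issued by your organization's CA instead of a self-signed one.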
ExtraOpts parameters¶

deviceLabels
Available since MOSK 23.3. A key-value setting used to assign a specification label to any available device on a specific node. These labels can then be utilized within the nodeGroups specification to reference devices by label instead of by name. Usage:

  extraOpts:
    deviceLabels:
      <node-name>:
        <dev-label>: /dev/disk/by-id/<unique_ID>
      ...
      <node-name-n>:
        <dev-label-n>: /dev/disk/by-id/<unique_ID>
  nodeGroups:
    <group-name>:
      spec:
        storageDevices:
        - devLabel: <dev_label>
        - devLabel: <dev_label_n>
      nodes:
      - <node_name>
      - <node_name_n>

Before MOSK 23.3, you need to specify the devices for each node separately:

  nodes:
    <node-name>:
      storageDevices:
      - fullPath: /dev/disk/by-id/<unique_ID>
    <node-name-n>:
      storageDevices:
      - fullPath: /dev/disk/by-id/<unique_ID>

customDeviceClasses
Available since MOSK 23.3 as TechPreview. A list of custom device class names to use in the specification. Enables you to specify custom names different from the default ones, which include hdd, ssd, and nvme. Usage:

  extraOpts:
    customDeviceClasses:
    - <custom_class_name>
  nodes:
    kaas-node-5bgk6:
      storageDevices:
      - config: # existing item
          deviceClass: <custom_class_name>
        fullPath: /dev/disk/by-id/<unique_ID>
  pools:
  - default: false
    deviceClass: <custom_class_name>
    erasureCoded:
      codingChunks: 1
      dataChunks: 2
    failureDomain: host

Before MOSK 23.3, you cannot specify custom device class names in the specification.
Multisite parameters¶

realms
List of realms to use; represents the realm namespaces.

zoneGroups
The list of zone groups for realms.

zones
The list of zones used within one zone group.
For configuration example, see Enable multisite for Ceph RGW Object Storage.
HealthCheck parameters¶

daemonHealth
Specifies health check settings for Ceph daemons. Contains the mon, osd, and status parameters. Each parameter allows defining the disabled, interval, and timeout settings.

livenessProbe
Key-value parameter with liveness probe settings for the defined daemon types. The daemon type can be mon, mgr, or osd. Each entry allows defining the disabled flag and the probe settings.

startupProbe
Key-value parameter with startup probe settings for the defined daemon types. The daemon type can be mon, mgr, or osd. Each entry allows defining the disabled flag and the probe settings.

Example configuration:

  healthCheck:
    daemonHealth:
      mon:
        disabled: false
        interval: 45s
        timeout: 600s
      osd:
        disabled: false
        interval: 60s
      status:
        disabled: true
    livenessProbe:
      mon:
        disabled: false
        probe:
          timeoutSeconds: 10
          periodSeconds: 3
          successThreshold: 3
      mgr:
        disabled: false
        probe:
          timeoutSeconds: 5
          failureThreshold: 5
      osd:
        probe:
          initialDelaySeconds: 5
          timeoutSeconds: 10
          failureThreshold: 7
    startupProbe:
      mon:
        disabled: true
      mgr:
        probe:
          successThreshold: 3
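As a rough aid for tuning the probe values above, the time for a continuously failing daemon to be marked unhealthy follows standard Kubernetes probe arithmetic. The sketch below assumes the usual Kubernetes defaults for unset fields (periodSeconds=10, failureThreshold=3):

```python
def time_to_unhealthy(initial_delay=0, period=10, failure_threshold=3):
    """Approximate seconds before a continuously failing container is
    considered unhealthy: initial delay plus one probe period per
    allowed failure (standard Kubernetes liveness probe semantics)."""
    return initial_delay + period * failure_threshold

# osd liveness probe from the example above:
# initialDelaySeconds: 5, failureThreshold: 7, periodSeconds unset (default 10)
print(time_to_unhealthy(initial_delay=5, period=10, failure_threshold=7))  # 75
```

Raising failureThreshold or periodSeconds makes the check more tolerant of slow daemons but delays the restart of a genuinely stuck one.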