Enable Ceph RGW Object Storage
Warning
This procedure is valid for MOSK clusters that use the deprecated KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is available since MOSK 25.2 as the new Ceph configuration entry point. For the equivalent procedure based on the MiraCeph CR, refer to the corresponding MiraCeph-based section.
Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manage their resources, such as users and buckets. In MOSK, Ceph Object Storage integrates with OpenStack Object Storage (Swift).
To enable the RGW Object Storage:
Open the KaaSCephCluster CR of a managed cluster for editing:

   kubectl edit kaascephcluster -n <managedClusterProjectName>

Substitute <managedClusterProjectName> with a corresponding value.
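For instance, with a hypothetical managed cluster project named child-ns:

   kubectl edit kaascephcluster -n child-ns   # child-ns is an example project name; use your actual project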
Using the following table, update the cephClusterSpec.objectStorage.rgw section specification as required:

Caution

Since MCC 2.24.0 (Cluster releases 15.0.1 and 14.0.1), explicitly specify the deviceClass parameter for dataPool and metadataPool.

Warning

Since Container Cloud 2.6.0, the spec.rgw section is deprecated and its parameters are moved under objectStorage.rgw. If you continue using spec.rgw, it is automatically translated into objectStorage.rgw during the Container Cloud update to 2.6.0.

We strongly recommend changing spec.rgw to objectStorage.rgw in all KaaSCephCluster CRs before spec.rgw becomes unsupported and is deleted.
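For illustration, a minimal sketch of that change, assuming the deprecated rgw block sits directly under cephClusterSpec (the name value and the elided fields are placeholders):

   # Deprecated layout (spec.rgw), assumed for illustration
   cephClusterSpec:
     rgw:
       name: rgw-store
       ...

   # Recommended layout (objectStorage.rgw)
   cephClusterSpec:
     objectStorage:
       rgw:
         name: rgw-store
         ...

The parameters themselves stay the same; only the nesting changes.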
RADOS Gateway parameters
name
  Required. Ceph Object Storage instance name.

dataPool
  Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with metadataPool. Object storage data pool spec that must only contain replicated or erasureCoded and failureDomain parameters. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass must be explicitly defined. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

    rgw:
      dataPool:
        deviceClass: hdd
        failureDomain: host
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
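  Since the spec also permits a replicated data pool instead of an erasure-coded one, a minimal sketch of that variant, with illustrative values:

    rgw:
      dataPool:
        deviceClass: hdd
        failureDomain: host
        replicated:
          size: 3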
metadataPool
  Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with dataPool. Object storage metadata pool spec that must only contain replicated and failureDomain parameters. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass must be explicitly defined. Can use only replicated settings. For example:

    rgw:
      metadataPool:
        deviceClass: hdd
        failureDomain: host
        replicated:
          size: 3

  where replicated.size is the number of full copies of data on multiple nodes.

  Warning

  When using a non-recommended Ceph pool replicated.size of less than 3, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size.

  For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, then the minimal replica size is 2. A replica size of 1 allows Ceph to have PGs with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.
gateway
  Required. The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:

    port - the port on which the Ceph RGW service will be listening on HTTP.
    securePort - the port on which the Ceph RGW service will be listening on HTTPS.
    instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

    Note

    Mirantis recommends using 3 instances for Ceph Object Storage.

    allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

  For example:

    rgw:
      gateway:
        allNodes: false
        instances: 3
        port: 80
        securePort: 8443
preservePoolsOnDelete
  Optional. Defines whether to delete the data and metadata pools in the rgw section if the object storage is deleted. Set this parameter to true if you need to store data even if the object storage is deleted. However, Mirantis recommends setting this parameter to false.

objectUsers and buckets
  Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

  objectUsers - a list of user specifications to create for object storage. Contains the following fields:

    name - a user name to create.
    displayName - the Ceph user name to display.
    capabilities - user capabilities:

      user - admin capabilities to read/write Ceph Object Store users.
      bucket - admin capabilities to read/write Ceph Object Store buckets.
      metadata - admin capabilities to read/write Ceph Object Store metadata.
      usage - admin capabilities to read/write Ceph Object Store usage.
      zone - admin capabilities to read/write Ceph Object Store zones.

      The available options are *, read, write, and read, write. For details, see Ceph documentation: Add/remove admin capabilities.

    quotas - user quotas:

      maxBuckets - the maximum bucket limit for the Ceph user. Integer, for example, 10.
      maxSize - the maximum size limit of all objects across all the buckets of a user. String size, for example, 10G.
      maxObjects - the maximum number of objects across all buckets of a user. Integer, for example, 10.

    For example:

      objectUsers:
      - capabilities:
          bucket: '*'
          metadata: read
          user: read
        displayName: test-user
        name: test-user
        quotas:
          maxBuckets: 10
          maxSize: 10G

  buckets - a list of strings that contain bucket names to create for object storage.
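  For instance, a minimal sketch of the buckets field (the bucket names below are hypothetical):

      buckets:
      - test-bucket
      - archive-bucket   # names are illustrative only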
zone
  Required if dataPool and metadataPool are not specified. Mutually exclusive with these parameters. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see the Enable multisite for Ceph RGW Object Storage procedure depending on the Ceph custom resource being used: MiraCeph or KaaSCephCluster. For example:

    objectStorage:
      multisite:
        zones:
        - name: master-zone
        ...
      rgw:
        zone:
          name: master-zone
SSLCert
  Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated. For example:

    objectStorage:
      rgw:
        SSLCert:
          cacert: |
            -----BEGIN CERTIFICATE-----
            ca-certificate here
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            private TLS certificate here
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            private TLS key here
            -----END RSA PRIVATE KEY-----
SSLCertInRef
  Optional. Available since MOSK 25.1. Flag to determine that a TLS certificate for accessing the Ceph RGW endpoint is used but not exposed in spec. For example:

    objectStorage:
      rgw:
        SSLCertInRef: true

  The operator must manually provide the TLS configuration using the rgw-ssl-certificate secret in the rook-ceph namespace of the MOSK cluster. The secret object must have the following structure:

    data:
      cacert: <base64encodedCaCertificate>
      cert: <base64encodedCertificate>

  When removing an already existing SSLCert block, no additional actions are required, because this block uses the same rgw-ssl-certificate secret in the rook-ceph namespace.

  When adding a new secret directly without exposing it in spec, the following rules apply:

    cert - base64 representation of a file with the server TLS key, server TLS cert, and cacert.
    cacert - base64 representation of the cacert only.
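  As an illustration, a minimal sketch of such a secret as a Kubernetes manifest, assuming the certificate material has already been base64-encoded; the secret type is an assumption, since the text above only defines the data keys:

    apiVersion: v1
    kind: Secret
    metadata:
      name: rgw-ssl-certificate
      namespace: rook-ceph
    type: Opaque   # assumed; only the data keys are mandated above
    data:
      cacert: <base64encodedCaCertificate>
      cert: <base64encodedCertificate>   # server TLS key, server TLS cert, and cacert combined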
For example:

  cephClusterSpec:
    objectStorage:
      rgw:
        name: rgw-store
        dataPool:
          deviceClass: hdd
          erasureCoded:
            codingChunks: 1
            dataChunks: 2
          failureDomain: host
        metadataPool:
          deviceClass: hdd
          failureDomain: host
          replicated:
            size: 3
        gateway:
          allNodes: false
          instances: 3
          port: 80
          securePort: 8443
        preservePoolsOnDelete: false
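After saving the changes, one possible way to verify that the RGW pods were created is to check the rook-ceph namespace on the managed cluster. The label selector below assumes the standard Rook naming and may differ in your environment:

  kubectl -n rook-ceph get pods -l app=rook-ceph-rgw   # expect as many pods as gateway.instances, or one per node if allNodes is true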