Enable Ceph RGW Object Storage
Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 and replaces the deprecated KaaSCephCluster CR. For the equivalent procedure with the KaaSCephCluster CR, refer to the corresponding KaaSCephCluster section of the documentation.
Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manage their resources, such as users and buckets. Ceph Object Storage integrates with OpenStack Object Storage (Swift) in MOSK.
To enable the RGW Object Storage:
1. Open the MiraCeph CR on a MOSK cluster for editing:

      kubectl -n ceph-lcm-mirantis edit miraceph
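   Optionally, you can review the current specification before editing. This is only a convenience sketch that assumes the same resource name as in the edit command above:

      kubectl -n ceph-lcm-mirantis get miraceph -o yaml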
2. Using the following table, update the objectStorage.rgw section specification as required.

   Caution

   Explicitly specify the deviceClass parameter for dataPool and metadataPool.

RADOS Gateway parameters

name

Required. Ceph Object Storage instance name.
dataPool

Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with metadataPool.

Object storage data pool spec that must only contain replicated or erasureCoded and failureDomain parameters. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass must be explicitly defined. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

   rgw:
     dataPool:
       deviceClass: hdd
       failureDomain: host
       erasureCoded:
         codingChunks: 1
         dataChunks: 2
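The description above also allows a replicated data pool instead of the recommended erasure-coded one. The following snippet is only a sketch of that variant; the values are illustrative assumptions:

   rgw:
     dataPool:
       deviceClass: hdd
       failureDomain: host
       replicated:
         size: 3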
metadataPool

Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with dataPool. Object storage metadata pool spec that must only contain replicated and failureDomain parameters. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass must be explicitly defined. Can use only replicated settings. For example:

   rgw:
     metadataPool:
       deviceClass: hdd
       failureDomain: host
       replicated:
         size: 3

where replicated.size is the number of full copies of data on multiple nodes.

Warning

When a Ceph pool uses the non-recommended replicated.size of less than 3, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size.

For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, then the minimal replica size is 2. A replica size of 1 allows Ceph to have PGs with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.

gateway
Required. The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:

- port - the port on which the Ceph RGW service will be listening on HTTP.
- securePort - the port on which the Ceph RGW service will be listening on HTTPS.
- instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

  Note

  Mirantis recommends using 3 instances for Ceph Object Storage.

- allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

For example:

   rgw:
     gateway:
       allNodes: false
       instances: 3
       port: 80
       securePort: 8443
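If you prefer running one Ceph RGW pod on every node, the parameter descriptions above allow a DaemonSet-based layout. The following is only a sketch of that variant; instances is omitted because it is ignored when allNodes is set to true:

   rgw:
     gateway:
       allNodes: true
       port: 80
       securePort: 8443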
preservePoolsOnDelete

Optional. Defines whether to preserve the data and metadata pools of the rgw section if the object storage is deleted. Set this parameter to true if you need to keep the data even after the object storage is deleted. However, Mirantis recommends setting this parameter to false.

objectUsers and buckets
Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.
- objectUsers - a list of user specifications to create for object storage. Contains the following fields:

  - name - a user name to create.
  - displayName - the Ceph user name to display.
  - capabilities - user capabilities:

    - user - admin capabilities to read/write Ceph Object Store users.
    - bucket - admin capabilities to read/write Ceph Object Store buckets.
    - metadata - admin capabilities to read/write Ceph Object Store metadata.
    - usage - admin capabilities to read/write Ceph Object Store usage.
    - zone - admin capabilities to read/write Ceph Object Store zones.

    The available options are "*", "read", "write", and "read, write". For details, see Ceph documentation: Add/remove admin capabilities.

  - quotas - user quotas:

    - maxBuckets - the maximum bucket limit for the Ceph user. Integer, for example, 10.
    - maxSize - the maximum size limit of all objects across all the buckets of a user. String size, for example, 10G.
    - maxObjects - the maximum number of objects across all buckets of a user. Integer, for example, 10.

  For example:

     objectUsers:
     - capabilities:
         bucket: '*'
         metadata: read
         user: read
       displayName: test-user
       name: test-user
       quotas:
         maxBuckets: 10
         maxSize: 10G
- buckets - a list of strings that contain bucket names to create for object storage.
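  For example, a minimal sketch of the buckets key; the bucket names are illustrative assumptions:

     buckets:
     - test-bucket
     - backup-bucket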
zone

Required if dataPool and metadataPool are not specified. Mutually exclusive with these parameters. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see the Enable multisite for Ceph RGW Object Storage procedure depending on the Ceph custom resource being used: MiraCeph or KaaSCephCluster.

For example:

   objectStorage:
     multisite:
       zones:
       - name: master-zone
         ...
     rgw:
       zone:
         name: master-zone
SSLCert

Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated.

For example:

   objectStorage:
     rgw:
       SSLCert:
         cacert: |
           -----BEGIN CERTIFICATE-----
           ca-certificate here
           -----END CERTIFICATE-----
         tlsCert: |
           -----BEGIN CERTIFICATE-----
           private TLS certificate here
           -----END CERTIFICATE-----
         tlsKey: |
           -----BEGIN RSA PRIVATE KEY-----
           private TLS key here
           -----END RSA PRIVATE KEY-----
SSLCertInRef

Optional. Available since MOSK 25.1. Flag indicating that a TLS certificate for accessing the Ceph RGW endpoint is used but not exposed in spec. For example:

   objectStorage:
     rgw:
       SSLCertInRef: true

The operator must manually provide the TLS configuration using the rgw-ssl-certificate secret in the rook-ceph namespace of the managed cluster. The secret object must have the following structure:

   data:
     cacert: <base64encodedCaCertificate>
     cert: <base64encodedCertificate>

When removing an already existing SSLCert block, no additional actions are required, because this block uses the same rgw-ssl-certificate secret in the rook-ceph namespace.

When adding a new secret directly without exposing it in spec, the following rules apply:

- cert - base64 representation of a file with the server TLS key, server TLS cert, and cacert.
- cacert - base64 representation of a cacert only.
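As a sketch of how such a secret could be assembled under the rules above, the following commands concatenate the certificate material and create the secret manually. The file names (server.key, server.crt, ca.crt) and the use of kubectl create secret generic are assumptions for illustration; only the secret name, namespace, and the cert and cacert keys come from the description above. Note that kubectl base64-encodes file contents into the data field automatically:

   # Assumed input files: server.key, server.crt, ca.crt (illustrative names)
   # The cert key must contain the server TLS key, server TLS cert, and cacert
   cat server.key server.crt ca.crt > bundle.pem

   # Create the rgw-ssl-certificate secret in the rook-ceph namespace of the managed cluster
   kubectl -n rook-ceph create secret generic rgw-ssl-certificate \
     --from-file=cert=bundle.pem \
     --from-file=cacert=ca.crt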
For example:

   rgw:
     name: rgw-store
     dataPool:
       deviceClass: hdd
       erasureCoded:
         codingChunks: 1
         dataChunks: 2
       failureDomain: host
     metadataPool:
       deviceClass: hdd
       failureDomain: host
       replicated:
         size: 3
     gateway:
       allNodes: false
       instances: 3
       port: 80
       securePort: 8443
     preservePoolsOnDelete: false
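After saving the MiraCeph CR, you may want to confirm that the Ceph RGW pods are starting on the cluster. The following check is only a sketch; it assumes the standard Rook label app=rook-ceph-rgw, which is not stated in this section:

   kubectl -n rook-ceph get pods -l app=rook-ceph-rgw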