Enable Ceph RGW Object Storage
Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manage their resources, such as users and buckets. Ceph Object Storage integrates with OpenStack Object Storage (Swift) in Mirantis OpenStack for Kubernetes (MOSK).
To enable the RGW Object Storage:
1. Open the KaaSCephCluster CR of a managed cluster for editing:

     kubectl edit kaascephcluster -n <managedClusterProjectName>

   Substitute <managedClusterProjectName> with a corresponding value.

2. Using the parameter descriptions below, update the cephClusterSpec.objectStorage.rgw section specification as required:

Caution
Since Container Cloud 2.23.0, explicitly specify the deviceClass parameter for dataPool and metadataPool.

Warning
Since Container Cloud 2.6.0, the spec.rgw section is deprecated and its parameters are moved under objectStorage.rgw. If you continue using spec.rgw, it is automatically translated into objectStorage.rgw during the Container Cloud update to 2.6.0.

We strongly recommend changing spec.rgw to objectStorage.rgw in all KaaSCephCluster CRs before spec.rgw becomes unsupported and is deleted.
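For orientation, a minimal before-and-after sketch of this relocation, assuming the deprecated section resided directly under cephClusterSpec (as the spec.rgw shorthand implies); rgw-store is an illustrative name and all other fields are omitted:

    # Deprecated layout (spec.rgw), translated automatically since 2.6.0:
    cephClusterSpec:
      rgw:
        name: rgw-store
    ---
    # Recommended layout (objectStorage.rgw):
    cephClusterSpec:
      objectStorage:
        rgw:
          name: rgw-store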
name
  Ceph Object Storage instance name.
dataPool
  Mutually exclusive with the zone parameter. Object storage data pool spec that should only contain replicated or erasureCoded and failureDomain parameters. The failureDomain parameter may be set to osd or host, defining the failure domain across which the data will be spread. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          dataPool:
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
metadataPool
  Mutually exclusive with the zone parameter. Object storage metadata pool spec that should only contain replicated and failureDomain parameters. The failureDomain parameter may be set to osd or host, defining the failure domain across which the data will be spread. Can use only replicated settings. For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          metadataPool:
            replicated:
              size: 3
            failureDomain: host

  where replicated.size is the number of full copies of data on multiple nodes.

  Warning

  When using a non-recommended replicated.size of less than 3 for Ceph pools, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size.

  For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, the minimal replica size is 2. A replica size of 1 allows Ceph to have PGs with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.

gateway
  The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:

  - port - the port on which the Ceph RGW service listens for HTTP traffic.
  - securePort - the port on which the Ceph RGW service listens for HTTPS traffic.
  - instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

    Note

    Mirantis recommends using 2 instances for Ceph Object Storage.

  - allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

  For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          gateway:
            allNodes: false
            instances: 1
            port: 80
            securePort: 8443
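  A sketch of the same example with allNodes enabled, which runs the Ceph RGW pods as a DaemonSet on all nodes; the ports reuse the values above, and instances is kept only to illustrate that it is ignored in this mode:

    cephClusterSpec:
      objectStorage:
        rgw:
          gateway:
            allNodes: true
            # The instances value is ignored when allNodes is true.
            instances: 1
            port: 80
            securePort: 8443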
preservePoolsOnDelete
  Defines whether to keep or delete the data and metadata pools of the rgw section if the object storage is deleted. Set this parameter to true if you need to keep the stored data even if the object storage is deleted. However, Mirantis recommends setting this parameter to false.

objectUsers and buckets
  Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

  - objectUsers - a list of user specifications to create for object storage. Contains the following fields:

    - name - a user name to create.
    - displayName - the Ceph user name to display.
    - capabilities - user capabilities:

      - user - admin capabilities to read/write Ceph Object Store users.
      - bucket - admin capabilities to read/write Ceph Object Store buckets.
      - metadata - admin capabilities to read/write Ceph Object Store metadata.
      - usage - admin capabilities to read/write Ceph Object Store usage.
      - zone - admin capabilities to read/write Ceph Object Store zones.

      The available options are *, read, write, and "read, write". For details, see Ceph documentation: Add/remove admin capabilities.

    - quotas - user quotas:

      - maxBuckets - the maximum bucket limit for the Ceph user. Integer, for example, 10.
      - maxSize - the maximum size limit of all objects across all the buckets of a user. String size, for example, 10G.
      - maxObjects - the maximum number of objects across all buckets of a user. Integer, for example, 10.

    For example:

      objectUsers:
      - capabilities:
          bucket: '*'
          metadata: read
          user: read
        displayName: test-user
        name: test-user
        quotas:
          maxBuckets: 10
          maxSize: 10G

  - users - a list of strings that contain user names to create for object storage.

    Note

    This field is deprecated. Use objectUsers instead. If users is specified, it will be automatically transformed to the objectUsers section.

  - buckets - a list of strings that contain bucket names to create for object storage.
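    For example, a minimal sketch of the buckets key; test-bucket is an illustrative name:

      buckets:
      - test-bucket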
zone
  Optional. Mutually exclusive with metadataPool and dataPool. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see Enable multisite for Ceph RGW Object Storage. For example:

    cephClusterSpec:
      objectStorage:
        multisite:
          zones:
          - name: master-zone
            ...
        rgw:
          zone:
            name: master-zone
SSLCert
  Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated.

  For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          SSLCert:
            cacert: |
              -----BEGIN CERTIFICATE-----
              ca-certificate here
              -----END CERTIFICATE-----
            tlsCert: |
              -----BEGIN CERTIFICATE-----
              private TLS certificate here
              -----END CERTIFICATE-----
            tlsKey: |
              -----BEGIN RSA PRIVATE KEY-----
              private TLS key here
              -----END RSA PRIVATE KEY-----
For example, a complete objectStorage.rgw section:

    cephClusterSpec:
      objectStorage:
        rgw:
          name: rgw-store
          dataPool:
            deviceClass: hdd
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
            failureDomain: host
          metadataPool:
            deviceClass: hdd
            failureDomain: host
            replicated:
              size: 3
          gateway:
            allNodes: false
            instances: 1
            port: 80
            securePort: 8443
          preservePoolsOnDelete: false
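This section is edited in place within the KaaSCephCluster CR opened in step 1. As an orientation sketch, assuming the standard nesting of cephClusterSpec under the CR spec (all other KaaSCephCluster fields are omitted):

    spec:
      cephClusterSpec:
        objectStorage:
          rgw:
            name: rgw-store
            ...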