Enable Ceph RGW Object Storage
Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manages their resources, such as users and buckets. Ceph Object Storage integrates with OpenStack Object Storage (Swift) in Mirantis OpenStack for Kubernetes (MOSK).
To enable the RGW Object Storage:
1. Open the KaaSCephCluster CR of a managed cluster for editing:

   kubectl edit kaascephcluster -n <managedClusterProjectName>

   Substitute <managedClusterProjectName> with the corresponding value.

2. Using the following table, update the cephClusterSpec.objectStorage.rgw section specification as required:

Warning
Starting from Container Cloud 2.6.0, the spec.rgw section is deprecated and its parameters are moved under objectStorage.rgw. If you continue using spec.rgw, it is automatically translated into objectStorage.rgw during the Container Cloud update to 2.6.0.

We strongly recommend changing spec.rgw to objectStorage.rgw in all KaaSCephCluster CRs before spec.rgw becomes unsupported and is deleted.
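For illustration, a minimal before/after sketch of this relocation. The rgw body is abbreviated, and the exact nesting of the deprecated rgw section under cephClusterSpec is an assumption, not a definitive layout:

# Deprecated layout (before Container Cloud 2.6.0),
# assuming rgw sat directly under cephClusterSpec:
cephClusterSpec:
  rgw:
    name: rgw-store
    ...

# Current layout:
cephClusterSpec:
  objectStorage:
    rgw:
      name: rgw-store
      ...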
RADOS Gateway parameters

name
Ceph Object Storage instance name.
dataPool
Mutually exclusive with the zone parameter. Object storage data pool spec that should contain only the replicated or erasureCoded and failureDomain parameters. The failureDomain parameter may be set to osd or host, defining the failure domain across which the data will be spread. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

cephClusterSpec:
  objectStorage:
    rgw:
      dataPool:
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
metadataPool
Mutually exclusive with the zone parameter. Object storage metadata pool spec that should contain only the replicated and failureDomain parameters. The failureDomain parameter may be set to osd or host, defining the failure domain across which the data will be spread. Only replicated settings can be used. For example:

cephClusterSpec:
  objectStorage:
    rgw:
      metadataPool:
        replicated:
          size: 3
        failureDomain: host

where replicated.size is the number of full copies of data on multiple nodes.

gateway
The gateway settings corresponding to the rgw daemon. Includes the following parameters:

- port - the port on which the Ceph RGW service listens for HTTP.
- securePort - the port on which the Ceph RGW service listens for HTTPS.
- instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

  Note

  Mirantis recommends using 2 instances for Ceph Object Storage.

- allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

For example:

cephClusterSpec:
  objectStorage:
    rgw:
      gateway:
        allNodes: false
        instances: 1
        port: 80
        securePort: 8443
preservePoolsOnDelete
Defines whether to preserve the data and metadata pools of the rgw section if the object storage is deleted. Set this parameter to true if you need to keep the stored data after the object storage is deleted. However, Mirantis recommends setting this parameter to false.
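For example, following this recommendation:

cephClusterSpec:
  objectStorage:
    rgw:
      preservePoolsOnDelete: false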
users and buckets
Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.
- users - a list of strings that contain user names to create for object storage.
- buckets - a list of strings that contain bucket names to create for object storage.
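For example, a minimal sketch with hypothetical user and bucket names:

cephClusterSpec:
  objectStorage:
    rgw:
      users:
      - rgw-user-1      # hypothetical user name
      - rgw-user-2      # hypothetical user name
      buckets:
      - rgw-bucket-1    # hypothetical bucket name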
zone
Optional. Mutually exclusive with metadataPool and dataPool. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see Enable Multisite for Ceph RGW Object Storage. For example:

cephClusterSpec:
  objectStorage:
    multisite:
      zones:
      - name: master-zone
        ...
    rgw:
      zone:
        name: master-zone
SSLCert
Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated.
For example:
cephClusterSpec:
  objectStorage:
    rgw:
      SSLCert:
        cacert: |
          -----BEGIN CERTIFICATE-----
          ca-certificate here
          -----END CERTIFICATE-----
        tlsCert: |
          -----BEGIN CERTIFICATE-----
          private TLS certificate here
          -----END CERTIFICATE-----
        tlsKey: |
          -----BEGIN RSA PRIVATE KEY-----
          private TLS key here
          -----END RSA PRIVATE KEY-----
For example:

cephClusterSpec:
  objectStorage:
    rgw:
      name: rgw-store
      dataPool:
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
        failureDomain: host
      metadataPool:
        failureDomain: host
        replicated:
          size: 3
      gateway:
        allNodes: false
        instances: 1
        port: 80
        securePort: 8443
      preservePoolsOnDelete: false
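After saving the changes, a quick way to confirm that the RGW daemons started is to list the RGW pods. This is a sketch only, assuming the standard Rook namespace rook-ceph and the standard Rook RGW pod label:

# List Ceph RGW pods (assumes the default Rook namespace and labels)
kubectl -n rook-ceph get pods -l app=rook-ceph-rgw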