Enable Ceph RGW Object Storage

Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manages their resources, such as users and buckets. Ceph Object Storage integrates with OpenStack Object Storage (Swift) in Mirantis OpenStack for Kubernetes (MOSK).

To enable the RGW Object Storage:

  1. Open the KaasCephCluster CR of a managed cluster for editing:

    kubectl edit kaascephcluster -n <managedClusterProjectName>
    

    Substitute <managedClusterProjectName> with a corresponding value.
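
    For example, if the managed cluster project is named managed-ns (a hypothetical name used only for illustration):

    kubectl edit kaascephcluster -n managed-ns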

  2. Using the following table, update the cephClusterSpec.objectStorage.rgw section specification as required:

    Caution

    Since Container Cloud 2.23.0, explicitly specify the deviceClass parameter for dataPool and metadataPool.

    Warning

    Since Container Cloud 2.6.0, the spec.rgw section is deprecated and its parameters are moved under objectStorage.rgw. If you continue using spec.rgw, it is automatically translated into objectStorage.rgw during the Container Cloud update to 2.6.0.

    We strongly recommend changing spec.rgw to objectStorage.rgw in all KaaSCephCluster CRs before spec.rgw becomes unsupported and is deleted.

    RADOS Gateway parameters

    name

    Ceph Object Storage instance name.

    dataPool

    Mutually exclusive with the zone parameter. The object storage data pool spec, which must contain only the replicated or erasureCoded parameter and the failureDomain parameter. The failureDomain parameter can be set to osd or host, defining the failure domain across which the data will be spread. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          dataPool:
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
    
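
    As a rough sizing note for the example above, each object is split into dataChunks data chunks plus codingChunks coding chunks. With dataChunks: 2 and codingChunks: 1, the raw capacity used is (2 + 1) / 2 = 1.5 times the stored data, and the pool tolerates the loss of one chunk, for example, one host when failureDomain is host.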

    metadataPool

    Mutually exclusive with the zone parameter. The object storage metadata pool spec, which must contain only the replicated and failureDomain parameters. The failureDomain parameter can be set to osd or host, defining the failure domain across which the data will be spread. Only replicated settings can be used for this pool. For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          metadataPool:
            replicated:
              size: 3
            failureDomain: host
    

    where replicated.size is the number of full copies of data on multiple nodes.

    Warning

    When using the non-recommended replicated.size of less than 3 for Ceph pools, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size.

    For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, the minimal replica size is 2. A replica size of 1 allows Ceph to have placement groups (PGs) with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.

    gateway

    The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:

    • port - the port on which the Ceph RGW service listens for HTTP connections.

    • securePort - the port on which the Ceph RGW service listens for HTTPS connections.

    • instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

      Note

      Mirantis recommends using 2 instances for Ceph Object Storage.

    • allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

    For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          gateway:
            allNodes: false
            instances: 1
            port: 80
            securePort: 8443
    
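
    For comparison, a sketch of a gateway configuration that deploys the Ceph RGW pods as a DaemonSet on all nodes; the instances value is omitted because it is ignored when allNodes is true:

    cephClusterSpec:
      objectStorage:
        rgw:
          gateway:
            allNodes: true
            port: 80
            securePort: 8443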

    preservePoolsOnDelete

    Defines whether to preserve the data and metadata pools of the rgw section when the object storage is deleted. Set this parameter to true if the stored data must be kept after the object storage is deleted. However, Mirantis recommends setting this parameter to false.

    objectUsers and buckets

    Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

    • objectUsers - a list of user specifications to create for object storage. Contains the following fields:

      • name - a user name to create.

      • displayName - the Ceph user name to display.

      • capabilities - user capabilities:

        • user - admin capabilities to read/write Ceph Object Store users.

        • bucket - admin capabilities to read/write Ceph Object Store buckets.

        • metadata - admin capabilities to read/write Ceph Object Store metadata.

        • usage - admin capabilities to read/write Ceph Object Store usage.

        • zone - admin capabilities to read/write Ceph Object Store zones.

        The available options are *, read, write, and the combined read, write. For details, see Ceph documentation: Add/remove admin capabilities.

      • quotas - user quotas:

        • maxBuckets - the maximum bucket limit for the Ceph user. Integer, for example, 10.

        • maxSize - the maximum size limit of all objects across all the buckets of a user. String size, for example, 10G.

        • maxObjects - the maximum number of objects across all buckets of a user. Integer, for example, 10.

        For example:

        objectUsers:
        - capabilities:
            bucket: '*'
            metadata: read
            user: read
          displayName: test-user
          name: test-user
          quotas:
            maxBuckets: 10
            maxSize: 10G
        
    • users - a list of strings that contain user names to create for object storage.

      Note

      This field is deprecated. Use objectUsers instead. If users is specified, it will be automatically transformed to the objectUsers section.

    • buckets - a list of strings that contain bucket names to create for object storage.
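
    After Ceph Controller creates the users listed above, Rook typically stores each user's credentials in a Kubernetes secret in the rook-ceph namespace, conventionally named rook-ceph-object-user-<objectStorageName>-<userName>. A minimal sketch of reading the credentials for the test-user example, assuming the object storage name is rgw-store as in the example at the end of this table:

    kubectl -n rook-ceph get secret rook-ceph-object-user-rgw-store-test-user -o yaml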

    zone

    Optional. Mutually exclusive with metadataPool and dataPool. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see Enable Multisite for Ceph RGW Object Storage.

    For example:

    cephClusterSpec:
      objectStorage:
        multisite:
          zones:
          - name: master-zone
          ...
        rgw:
          zone:
            name: master-zone
    

    SSLCert

    Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated.

    For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          SSLCert:
            cacert: |
              -----BEGIN CERTIFICATE-----
              ca-certificate here
              -----END CERTIFICATE-----
            tlsCert: |
              -----BEGIN CERTIFICATE-----
              private TLS certificate here
              -----END CERTIFICATE-----
            tlsKey: |
              -----BEGIN RSA PRIVATE KEY-----
              private TLS key here
              -----END RSA PRIVATE KEY-----
    
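
    If you only need placeholder certificates for testing, the following is a minimal sketch that generates a self-signed certificate and key with openssl; the file names and the rgw.example.com subject are illustrative assumptions:

    # For testing only: generate a self-signed certificate and key
    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
      -keyout tls.key -out tls.crt -subj "/CN=rgw.example.com"

    For a self-signed certificate, the same tls.crt content can be used for both cacert and tlsCert, and tls.key for tlsKey. Use certificates issued by a trusted CA in production.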

    For example:

    cephClusterSpec:
      objectStorage:
        rgw:
          name: rgw-store
          dataPool:
            deviceClass: hdd
            erasureCoded:
              codingChunks: 1
              dataChunks: 2
            failureDomain: host
          metadataPool:
            deviceClass: hdd
            failureDomain: host
            replicated:
              size: 3
          gateway:
            allNodes: false
            instances: 1
            port: 80
            securePort: 8443
          preservePoolsOnDelete: false
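
    After the updated specification is applied, you can verify that the object storage appears in the Ceph cluster. A sketch assuming the default rook-ceph namespace and the rgw-store name from the example above:

    # Verify that the CephObjectStore resource was created
    kubectl -n rook-ceph get cephobjectstore
    # Verify that the Ceph RGW pods are running
    kubectl -n rook-ceph get pods -l app=rook-ceph-rgw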