
Enable Ceph RGW Object Storage

Warning

This procedure is valid for MOSK clusters that use the MiraCeph custom resource (CR), which is available since MOSK 25.2 to replace the deprecated KaaSCephCluster. For the equivalent procedure with the KaaSCephCluster CR, refer to the following section:

Enable Ceph RGW Object Storage

Ceph Controller enables you to deploy RADOS Gateway (RGW) Object Storage instances and automatically manage their resources, such as users and buckets. In MOSK, Ceph Object Storage integrates with OpenStack Object Storage (Swift).

To enable the RGW Object Storage:

  1. Open the MiraCeph CR on a MOSK cluster for editing:

    kubectl -n ceph-lcm-mirantis edit miraceph
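
    If you only want to review the current configuration without editing it, you can print the resource instead:

    kubectl -n ceph-lcm-mirantis get miraceph -o yaml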
    
  2. Using the following table, update the objectStorage.rgw section specification as required:

    Caution

    Explicitly specify the deviceClass parameter for dataPool and metadataPool.

    RADOS Gateway parameters

    name

    Required. Ceph Object Storage instance name.

    dataPool

    Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with metadataPool.

    Object storage data pool spec that must contain only the deviceClass, failureDomain, and either replicated or erasureCoded parameters. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass parameter must be explicitly defined. For dataPool, Mirantis recommends using an erasureCoded pool. For details, see Rook documentation: Erasure coding. For example:

    rgw:
      dataPool:
        deviceClass: hdd
        failureDomain: host
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
    

    metadataPool

    Required if zone:name is not specified. Mutually exclusive with zone. Must be used together with dataPool. Object storage metadata pool spec that must contain only the deviceClass, failureDomain, and replicated parameters, as the metadata pool supports only replicated settings. The failureDomain parameter may be set to host, rack, room, or datacenter, defining the failure domain across which the data will be spread. The deviceClass parameter must be explicitly defined. For example:

    rgw:
      metadataPool:
        deviceClass: hdd
        failureDomain: host
        replicated:
          size: 3
    

    where replicated.size is the number of full copies of data on multiple nodes.

    Warning

    When using the non-recommended replicated.size of less than 3 for Ceph pools, Ceph OSD removal cannot be performed. The minimal replica size equals a rounded-up half of the specified replicated.size.

    For example, if replicated.size is 2, the minimal replica size is 1, and if replicated.size is 3, the minimal replica size is 2. A replica size of 1 allows Ceph to have placement groups (PGs) with only one Ceph OSD in the acting state, which may cause a PG_TOO_DEGRADED health warning that blocks Ceph OSD removal. Mirantis recommends setting replicated.size to 3 for each Ceph pool.

    gateway

    Required. The gateway settings corresponding to the rgw daemon settings. Includes the following parameters:

    • port - the port on which the Ceph RGW service listens for HTTP connections.

    • securePort - the port on which the Ceph RGW service listens for HTTPS connections.

    • instances - the number of pods in the Ceph RGW ReplicaSet. If allNodes is set to true, a DaemonSet is created instead.

      Note

      Mirantis recommends using 3 instances for Ceph Object Storage.

    • allNodes - defines whether to start the Ceph RGW pods as a DaemonSet on all nodes. The instances parameter is ignored if allNodes is set to true.

    For example:

    rgw:
      gateway:
        allNodes: false
        instances: 3
        port: 80
        securePort: 8443
    

    preservePoolsOnDelete

    Optional. Defines whether to preserve the data and metadata pools if the object storage is deleted. Set this parameter to true if you need to keep the stored data after the object storage is deleted. However, Mirantis recommends setting this parameter to false.
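
    For example:

    rgw:
      preservePoolsOnDelete: false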

    objectUsers and buckets

    Optional. To create new Ceph RGW resources, such as buckets or users, specify the following keys. Ceph Controller will automatically create the specified object storage users and buckets in the Ceph cluster.

    • objectUsers - a list of user specifications to create for object storage. Contains the following fields:

      • name - a user name to create.

      • displayName - the Ceph user name to display.

      • capabilities - user capabilities:

        • user - admin capabilities to read/write Ceph Object Store users.

        • bucket - admin capabilities to read/write Ceph Object Store buckets.

        • metadata - admin capabilities to read/write Ceph Object Store metadata.

        • usage - admin capabilities to read/write Ceph Object Store usage.

        • zone - admin capabilities to read/write Ceph Object Store zones.

        The available options are *, read, write, and the combined read, write. For details, see Ceph documentation: Add/remove admin capabilities.

      • quotas - user quotas:

        • maxBuckets - the maximum bucket limit for the Ceph user. Integer, for example, 10.

        • maxSize - the maximum size limit of all objects across all the buckets of a user. String size, for example, 10G.

        • maxObjects - the maximum number of objects across all buckets of a user. Integer, for example, 10.

        For example:

        objectUsers:
        - capabilities:
            bucket: '*'
            metadata: read
            user: read
          displayName: test-user
          name: test-user
          quotas:
            maxBuckets: 10
            maxSize: 10G
        
    • buckets - a list of strings that contain bucket names to create for object storage.
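
      For example, with a placeholder bucket name:

        buckets:
        - test-bucket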

    zone

    Required if dataPool and metadataPool are not specified. Mutually exclusive with these parameters. Defines the Ceph Multisite zone where the object storage must be placed. Includes the name parameter that must be set to one of the zones items. For details, see the Enable multisite for Ceph RGW Object Storage procedure depending on the Ceph custom resource being used: MiraCeph or KaaSCephCluster.

    For example:

    objectStorage:
      multisite:
        zones:
        - name: master-zone
        ...
      rgw:
        zone:
          name: master-zone
    

    SSLCert

    Optional. Custom TLS certificate parameters used to access the Ceph RGW endpoint. If not specified, a self-signed certificate will be generated.

    For example:

    objectStorage:
      rgw:
        SSLCert:
          cacert: |
            -----BEGIN CERTIFICATE-----
            ca-certificate here
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            private TLS certificate here
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            private TLS key here
            -----END RSA PRIVATE KEY-----
    

    SSLCertInRef

    Optional. Available since MOSK 25.1. A flag indicating that a TLS certificate for accessing the Ceph RGW endpoint is used but not exposed in the spec. For example:

    objectStorage:
      rgw:
        SSLCertInRef: true
    

    The operator must manually provide TLS configuration using the rgw-ssl-certificate secret in the rook-ceph namespace of the managed cluster. The secret object must have the following structure:

    data:
      cacert: <base64encodedCaCertificate>
      cert: <base64encodedCertificate>
    

    When removing an already existing SSLCert block, no additional actions are required, because this block uses the same rgw-ssl-certificate secret in the rook-ceph namespace.

    When adding a new secret directly without exposing it in spec, the following rules apply:

    • cert - the base64 representation of a file that contains the server TLS key, the server TLS certificate, and the CA certificate.

    • cacert - the base64 representation of the CA certificate only.
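
    As an illustration, assuming the server TLS key, server TLS certificate, and CA certificate exist as local files (the file names below are placeholders), you can create such a secret with kubectl, which base64-encodes the values automatically:

    # Combine the server TLS key, server TLS certificate, and CA
    # certificate into a single file for the "cert" key.
    cat rgw.key rgw.crt ca.crt > rgw-cert.pem

    # Create the rgw-ssl-certificate secret in the rook-ceph
    # namespace of the managed cluster.
    kubectl -n rook-ceph create secret generic rgw-ssl-certificate \
      --from-file=cert=rgw-cert.pem \
      --from-file=cacert=ca.crt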

    The following example summarizes a complete rgw section:

    rgw:
      name: rgw-store
      dataPool:
        deviceClass: hdd
        erasureCoded:
          codingChunks: 1
          dataChunks: 2
        failureDomain: host
      metadataPool:
        deviceClass: hdd
        failureDomain: host
        replicated:
          size: 3
      gateway:
        allNodes: false
        instances: 3
        port: 80
        securePort: 8443
      preservePoolsOnDelete: false
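
  3. Optionally, verify that the Ceph RGW pods are up and running on the MOSK cluster. This quick check assumes the default Rook label for RGW pods:

    kubectl -n rook-ceph get pods -l app=rook-ceph-rgw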