Configure Ceph RGW TLS

Once you enable Ceph RGW as described in Mirantis Container Cloud: Enable Ceph RGW Object Storage, you can configure the Transport Layer Security (TLS) protocol for a Ceph RGW public endpoint using the following options:

  • Using MOSK TLS, if it is enabled and exposes its certificates and domain for Ceph. In this case, Ceph RGW will automatically create an ingress rule with MOSK certificates and domain to access the Ceph RGW public endpoint. Therefore, you only need to reach the Ceph RGW public and internal endpoints and set the CA certificates for a trusted TLS connection.

  • Using a custom ingress specified in the KaaSCephCluster CR. In this case, the Ceph RGW public endpoint uses the public domain specified in the ingress parameters.


  • Starting from MOSK 21.3, the external Ceph RGW service is not supported and will be deleted during the update. If your system already uses endpoints of an external RGW service, reconfigure them to use the ingress endpoints.

  • When using a custom or OpenStack ingress, configure the DNS name for RGW to point to the external IP address of that ingress. If you do not have an OpenStack or custom ingress, point the DNS to the external load balancer of RGW.

This section also describes how to specify a custom public endpoint for Ceph Object Storage.

Configure Ceph RGW TLS

  1. Verify whether MOSK TLS is enabled. The spec.features.ssl.public_endpoints section should be specified in the OpenStackDeployment CR.

  2. To generate an SSL certificate for internal usage, verify that the gateway securePort parameter is specified in the KaaSCephCluster CR. For details, see Mirantis Container Cloud: Enable Ceph RGW Object Storage.
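     For reference, the securePort setting can be sketched as the following KaaSCephCluster fragment; 8443 is an illustrative value, and the nesting follows the spec.rgw.gateway.securePort path referenced later in this section and may differ between MOSK releases:

     ```yaml
     # Illustrative KaaSCephCluster fragment; 8443 is a placeholder port.
     spec:
       rgw:
         gateway:
           securePort: 8443
     ```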

  3. Select from the following options:

    • If MOSK TLS is enabled, obtain the MOSK CA certificate for a trusted connection:

      kubectl -n openstack-ceph-shared get secret openstack-rgw-creds -o jsonpath="{.data.ca_cert}" | base64 -d
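      The command prints the decoded CA bundle to stdout. To use it for a trusted connection, save it to a file and point your client at it. A minimal sketch, in which a locally generated base64 value stands in for the secret data returned by kubectl:

      ```shell
      # The base64 string below is a local stand-in for the value returned by:
      #   kubectl -n openstack-ceph-shared get secret openstack-rgw-creds \
      #     -o jsonpath="{.data.ca_cert}"
      ca_b64="$(printf -- '-----BEGIN CERTIFICATE-----\nMIIB...snip...\n-----END CERTIFICATE-----\n' | base64 | tr -d '\n')"

      # Decode the secret data and save the CA bundle for client use.
      printf '%s' "$ca_b64" | base64 -d > rgw-ca.crt

      # A client can then trust the RGW endpoint, for example:
      #   curl --cacert rgw-ca.crt https://<rgw-public-endpoint>/
      head -n 1 rgw-ca.crt   # prints -----BEGIN CERTIFICATE-----
      ```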
    • Configure Ceph RGW TLS using a custom ingress:


      Starting from MOSK 21.2, the rgw section is deprecated and the ingress parameters are moved under cephClusterSpec.ingress. If you continue using rgw.ingress, it will be automatically translated into cephClusterSpec.ingress during the MOSK managed cluster release update.

      1. Open the KaasCephCluster CR for editing.

      2. Specify the ingress parameters:

        • publicDomain - domain name to use for the external service.

        • cacert - Certificate Authority (CA) certificate, used for the ingress rule TLS support.

        • tlsCert - TLS certificate, used for the ingress rule TLS support.

        • tlsKey - TLS private key, used for the ingress rule TLS support.

        • customIngress - Optional. Available since MOSK 21.3. Includes the following custom Ingress Controller parameters:

          • className - the custom Ingress Controller class name. If not specified, the openstack-ingress-nginx class name is used by default.

          • annotations - extra annotations for the ingress proxy. For details, see NGINX Ingress Controller: Annotations.

            By default, the following annotations are set:

            • nginx.ingress.kubernetes.io/rewrite-target is set to /

            • nginx.ingress.kubernetes.io/upstream-vhost is set to <rgwName>.rook-ceph.svc.

              The value for <rgwName> is the Ceph Object Storage name specified in the KaaSCephCluster CR.

            Optional annotations:

            • nginx.ingress.kubernetes.io/proxy-request-buffering set to "off" disables buffering for ingress to prevent the 413 (Request Entity Too Large) error when uploading large files using RGW.

            • nginx.ingress.kubernetes.io/proxy-body-size set to <size> increases the default uploading size limit to prevent the 413 (Request Entity Too Large) error when uploading large files using RGW. Set the value in MB (m) or KB (k). For example, 100m.

          For example:

            customIngress:
              className: openstack-ingress-nginx


          Starting from MOSK 21.3, an ingress rule is by default created with an internal Ceph RGW service endpoint as a back end. Also, rgw dns name is specified in the Ceph configuration and is set to <rgwName>.rook-ceph.svc by default. You can override this option using the spec.cephClusterSpec.rookConfig key-value parameter. In this case, also change the corresponding ingress annotation.

        For example:

            spec:
              cephClusterSpec:
                objectStorage:
                  rgw:
                    name: rgw-store
                ingress:
                  publicDomain: public.domain.name
                  cacert: |
                    -----BEGIN CERTIFICATE-----
                    ...
                    -----END CERTIFICATE-----
                  tlsCert: |
                    -----BEGIN CERTIFICATE-----
                    ...
                    -----END CERTIFICATE-----
                  tlsKey: |
                    -----BEGIN RSA PRIVATE KEY-----
                    ...
                    -----END RSA PRIVATE KEY-----
                rookConfig:
                  "rgw dns name": rgw-store.public.domain.name


        • For clouds with the publicDomain parameter specified, align the upstream-vhost ingress annotation with the name of the Ceph Object Storage and the specified public domain.

        • Ceph Object Storage requires the upstream-vhost and rgw dns name parameters to be equal. Therefore, override the default rgw dns name to the corresponding ingress annotation value.
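        The two equality requirements above can be illustrated with the following hedged KaaSCephCluster sketch, where obj-store and public.domain.name are placeholder values:

        ```yaml
        # Illustrative fragment: the upstream-vhost annotation and the
        # "rgw dns name" option carry the same value, the Ceph Object Storage
        # name under the public domain. "obj-store" and "public.domain.name"
        # are placeholders.
        spec:
          cephClusterSpec:
            ingress:
              publicDomain: public.domain.name
              customIngress:
                annotations:
                  nginx.ingress.kubernetes.io/upstream-vhost: obj-store.public.domain.name
            rookConfig:
              "rgw dns name": obj-store.public.domain.name
        ```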

  4. To access internal and public Ceph RGW endpoints:

    1. Obtain the Ceph RGW public endpoint:

      kubectl -n rook-ceph get ingress
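      On a live cluster, the public host can also be read directly with jsonpath, for example kubectl -n rook-ceph get ingress -o jsonpath='{.items[0].spec.rules[0].host}'. The jq pipeline below replicates that extraction on a stand-in Ingress object with a placeholder host:

      ```shell
      # Stand-in for `kubectl -n rook-ceph get ingress -o json`; the host
      # value is a placeholder.
      cat > ingress-sample.json <<'EOF'
      {
        "items": [
          { "spec": { "rules": [ { "host": "rgw-store.public.domain.name" } ] } }
        ]
      }
      EOF

      # Extract the RGW public host, as jsonpath would on a live cluster.
      jq -r '.items[0].spec.rules[0].host' ingress-sample.json
      ```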
    2. To use the Ceph RGW internal endpoint with TLS, configure a trusted connection with the required CA certificate:

      kubectl -n rook-ceph get secret <rgwCacertSecretName> -o jsonpath="{.data.cacert}" | base64 -d

      Substitute <rgwCacertSecretName> with the following value:

      • Starting from MOSK 21.2, rgw-ssl-certificate

      • Prior to MOSK 21.2, rgw-ssl-local-certificate

    3. Obtain the internal endpoint name for Ceph RGW:

      kubectl -n rook-ceph get svc -l app=rook-ceph-rgw

      The internal endpoint for Ceph RGW has the https://<internal-svc-name>.rook-ceph.svc:<rgw-secure-port>/ format, where <rgw-secure-port> is spec.rgw.gateway.securePort specified in the KaaSCephCluster CR.
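      The internal endpoint can be composed from these two values; a sketch in which the service name and port are placeholders taken from the commands above:

      ```shell
      # Placeholder values: take the real service name from
      # `kubectl -n rook-ceph get svc -l app=rook-ceph-rgw` and the port from
      # spec.rgw.gateway.securePort in the KaaSCephCluster CR.
      svc_name="rook-ceph-rgw-rgw-store"
      secure_port="8443"

      internal_endpoint="https://${svc_name}.rook-ceph.svc:${secure_port}/"
      echo "${internal_endpoint}"

      # From inside the cluster, the endpoint can then be checked against the
      # CA obtained earlier, for example:
      #   curl --cacert rgw-ca.crt "${internal_endpoint}"
      ```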

Specify a custom public endpoint for Ceph Object Storage

If you need to specify a custom public endpoint with spec.cephClusterSpec.ingress.publicDomain as a domain name, use the following steps to configure access to Ceph Object Storage:

  1. Verify that spec.cephClusterSpec.ingress.publicDomain is specified in the KaaSCephCluster CR.

  2. Obtain the Ceph Object Storage name from the KaaSCephCluster CR. For example, obj-store.

    kubectl -n <managedClusterProject> get kaascephcluster -o jsonpath='{range .items[*]}{.spec.cephClusterSpec.objectStorage.rgw.name}{"\n"}{end}'

    Substitute <managedClusterProject> with the project of the required managed cluster.

  3. Open the KaaSCephCluster CR for editing.

  4. Specify a custom upstream-vhost ingress rule annotation:

      ingress:
        customIngress:
          annotations:
            nginx.ingress.kubernetes.io/upstream-vhost: <customPublicEndpoint>

    Substitute <customPublicEndpoint> with a public endpoint that uses the domain specified in spec.cephClusterSpec.ingress.publicDomain.

  5. Verify Ceph Object Storage host names:

    1. Enter the rook-ceph-tools pod:

      kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- bash
    2. Obtain Ceph Object Storage default zone group configuration:

      radosgw-admin zonegroup get --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> | tee zonegroup.json

      Substitute <objectStorageName> with the Ceph Object Storage name obtained earlier.

    3. Verify that the hostnames key is a list that contains two endpoints: an internal endpoint and a custom public endpoint:

      "hostnames": ["rook-ceph-rgw-<objectStorageName>.rook-ceph.svc", "<customPublicEndpoint>"]

      Substitute <objectStorageName> with the Ceph Object Storage name and <customPublicEndpoint> with the public endpoint with a custom public domain.

    4. If one or both endpoints are omitted in the list, add the missing endpoints to the hostnames list in the zonegroup.json file and update Ceph Object Storage zone group configuration:

      radosgw-admin zonegroup set --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> --infile zonegroup.json
      radosgw-admin period update --commit
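      The check-and-add in the two previous steps can be scripted as an idempotent jq edit. A sketch with a stand-in zonegroup.json; the object storage name and endpoints are placeholder values:

      ```shell
      # Stand-in for the output of `radosgw-admin zonegroup get`; the name
      # and hostname are placeholder values.
      cat > zonegroup.json <<'EOF'
      { "name": "obj-store", "hostnames": [ "rook-ceph-rgw-obj-store.rook-ceph.svc" ] }
      EOF

      internal="rook-ceph-rgw-obj-store.rook-ceph.svc"
      public="obj-store.public.domain.name"

      # Append each endpoint only if missing (unique keeps the list deduplicated).
      jq --arg a "$internal" --arg b "$public" \
        '.hostnames |= (. + [$a, $b] | unique)' zonegroup.json > zonegroup.patched.json

      jq -r '.hostnames[]' zonegroup.patched.json

      # The patched file can then be applied on the cluster:
      #   radosgw-admin zonegroup set --rgw-zonegroup=obj-store --rgw-zone=obj-store \
      #     --infile zonegroup.patched.json
      #   radosgw-admin period update --commit
      ```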
    5. Verify that the hostnames list contains both the internal and custom public endpoint:

      radosgw-admin zonegroup get | jq -r ".hostnames"

      Example of system response:

        [
          "rook-ceph-rgw-<objectStorageName>.rook-ceph.svc",
          "<customPublicEndpoint>"
        ]

    6. Exit the rook-ceph-tools pod:

      exit


Once done, Ceph Object Storage becomes available at the custom public endpoint through an S3 API client, the OpenStack Swift CLI, and the OpenStack Horizon Containers plugin.