Configure Ceph RGW TLS¶
Once you enable Ceph RGW as described in Mirantis Container Cloud: Enable Ceph RGW Object Storage, you can configure the Transport Layer Security (TLS) protocol for a Ceph RGW public endpoint using the following options:
- Using MOSK TLS, if it is enabled and exposes its certificates and domain for Ceph. In this case, Ceph RGW automatically creates an ingress rule with the MOSK certificates and domain to access the Ceph RGW public endpoint. Therefore, you only need to reach the Ceph RGW public and internal endpoints and set the CA certificates for a trusted TLS connection.
- Using a custom ingress specified in the KaaSCephCluster CR. In this case, the Ceph RGW public endpoint uses the public domain specified in the ingress parameters.
Caution
Starting from MOSK 21.3, external Ceph RGW service is not supported and will be deleted during update. If your system already uses endpoints of an external RGW service, reconfigure them to the ingress endpoints.
When using a custom or OpenStack ingress, configure the DNS name for RGW to point to the external IP address of that ingress. If you do not have an OpenStack or custom ingress, point the DNS name to the external load balancer of RGW.
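For example, you can check that the DNS record points at the external IP address of the ingress. The ingress service name in the openstack namespace and the rgw-store.public.domain.name domain below are illustrative placeholders, adjust them to your deployment:

    # Obtain the external IP address of the ingress load balancer service
    kubectl -n openstack get svc ingress -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

    # Verify that the RGW DNS name resolves to that IP address
    dig +short rgw-store.public.domain.name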
This section also describes how to specify a custom public endpoint for Ceph Object Storage.
Configure Ceph RGW TLS¶
1. Verify whether MOSK TLS is enabled: the spec.features.ssl.public_endpoints section must be specified in the OpenStackDeployment CR.
2. To generate an SSL certificate for internal usage, verify that the gateway securePort parameter is specified in the KaaSCephCluster CR. For details, see Mirantis Container Cloud: Enable Ceph RGW Object Storage.
3. Select from the following options:
- If MOSK TLS is enabled, obtain the MOSK CA certificate for a trusted connection:
kubectl -n openstack-ceph-shared get secret openstack-rgw-creds -o jsonpath="{.data.ca_cert}" | base64 -d
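For example, to save the certificate to a file and verify a trusted connection to the public endpoint (the URL below is illustrative, substitute your actual Ceph RGW public endpoint):

    kubectl -n openstack-ceph-shared get secret openstack-rgw-creds -o jsonpath="{.data.ca_cert}" | base64 -d > ca.crt
    curl --cacert ca.crt https://openstack-store.public.domain.name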
- Configure Ceph RGW TLS using a custom ingress:

  Warning

  Starting from MOSK 21.2, the rgw section is deprecated and the ingress parameters are moved under cephClusterSpec.ingress. If you continue using rgw.ingress, it will be automatically translated into cephClusterSpec.ingress during the MOSK managed cluster release update.

  1. Open the KaaSCephCluster CR for editing.
  2. Specify the ingress parameters:

     - publicDomain - domain name to use for the external service.
     - cacert - Certificate Authority (CA) certificate, used for the ingress rule TLS support.
     - tlsCert - TLS certificate, used for the ingress rule TLS support.
     - tlsKey - TLS private key, used for the ingress rule TLS support.
     - customIngress - Optional. Available since MOSK 21.3. Includes the following custom Ingress Controller parameters:

       - className - the custom Ingress Controller class name. If not specified, the openstack-ingress-nginx class name is used by default.
       - annotations - extra annotations for the ingress proxy. For details, see NGINX Ingress Controller: Annotations.

         By default, the following annotations are set:

         - nginx.ingress.kubernetes.io/rewrite-target is set to /.
         - nginx.ingress.kubernetes.io/upstream-vhost is set to <rgwName>.rook-ceph.svc, where <rgwName> is the value of spec.cephClusterSpec.objectStorage.rgw.name.

         Optional annotations:

         - nginx.ingress.kubernetes.io/proxy-request-buffering: "off" - disables request buffering for the ingress to prevent the 413 (Request Entity Too Large) error when uploading large files using RGW.
         - nginx.ingress.kubernetes.io/proxy-body-size: <size> - increases the default upload size limit to prevent the 413 (Request Entity Too Large) error when uploading large files using RGW. Set the value in megabytes (m) or kilobytes (k). For example, 100m.
For example:
    customIngress:
      className: openstack-ingress-nginx
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
        nginx.ingress.kubernetes.io/upstream-vhost: openstack-store.rook-ceph.svc
        nginx.ingress.kubernetes.io/proxy-body-size: 100m
Note
Starting from MOSK 21.3, an ingress rule is by default created with an internal Ceph RGW service endpoint as a back end. Also, rgw dns name is specified in the Ceph configuration and is set to <rgwName>.rook-ceph.svc by default. You can override this option using the spec.cephClusterSpec.rookConfig key-value parameter. In this case, also change the corresponding ingress annotation.
For example:
    spec:
      cephClusterSpec:
        objectStorage:
          rgw:
            name: rgw-store
        ingress:
          publicDomain: public.domain.name
          cacert: |
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          tlsCert: |
            -----BEGIN CERTIFICATE-----
            ...
            -----END CERTIFICATE-----
          tlsKey: |
            -----BEGIN RSA PRIVATE KEY-----
            ...
            -----END RSA PRIVATE KEY-----
          customIngress:
            annotations:
              "nginx.ingress.kubernetes.io/upstream-vhost": rgw-store.public.domain.name
        rookConfig:
          "rgw dns name": rgw-store.public.domain.name
Warning
For clouds with the publicDomain parameter specified, align the upstream-vhost ingress annotation with the name of the Ceph Object Storage and the specified public domain. Ceph Object Storage requires the upstream-vhost and rgw dns name parameters to be equal. Therefore, override the default rgw dns name with the corresponding ingress annotation value.
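As a quick spot check after the change is applied, you can list the annotations of the created ingress rule and confirm that upstream-vhost matches the rgw dns name value:

    kubectl -n rook-ceph get ingress -o yaml | grep upstream-vhost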
To access internal and public Ceph RGW endpoints:
1. Obtain the Ceph RGW public endpoint:
kubectl -n rook-ceph get ingress
2. To use the Ceph RGW internal endpoint with TLS, configure a trusted connection for the required CA certificate:
kubectl -n rook-ceph get secret <rgwCacertSecretName> -o jsonpath="{.data.cacert}" | base64 -d
   Substitute <rgwCacertSecretName> with one of the following values:

   - Starting from MOSK 21.2: rgw-ssl-certificate
   - Prior to MOSK 21.2: rgw-ssl-local-certificate
3. Obtain the internal endpoint name for Ceph RGW:
kubectl -n rook-ceph get svc -l app=rook-ceph-rgw
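   Example of system response, where the service name derives from spec.cephClusterSpec.objectStorage.rgw.name (rgw-store in the examples above) and the port and other values are illustrative:

       NAME                      TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
       rook-ceph-rgw-rgw-store   ClusterIP   10.96.12.34   <none>        8443/TCP   5d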
   The internal endpoint for Ceph RGW has the https://<internal-svc-name>.rook-ceph.svc:<rgw-secure-port>/ format, where <rgw-secure-port> is spec.rgw.gateway.securePort specified in the KaaSCephCluster CR.
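   For example, assuming the rook-ceph-rgw-rgw-store service name and the 8443 secure port (both deployment-specific), save the CA certificate and then, from a pod inside the cluster where the .svc name resolves, verify the trusted connection:

       kubectl -n rook-ceph get secret rgw-ssl-certificate -o jsonpath="{.data.cacert}" | base64 -d > rgw-ca.crt
       curl --cacert rgw-ca.crt https://rook-ceph-rgw-rgw-store.rook-ceph.svc:8443/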
Specify a custom public endpoint for Ceph Object Storage¶
If you need to specify a custom public endpoint with spec.cephClusterSpec.ingress.publicDomain as a domain name, use the following steps to configure access to Ceph Object Storage:
1. Verify that spec.cephClusterSpec.ingress.publicDomain is specified in the KaaSCephCluster CR. For example, publicDomain is mcc1.cluster1.example.com.
2. Obtain the Ceph Object Storage name from spec.cephClusterSpec.objectStorage.rgw.name. For example, obj-store:

       kubectl -n <managedClusterProject> get kaascephcluster -o jsonpath='{range .items[*]}{@.spec.cephClusterSpec.objectStorage.rgw.name}{"\n"}{end}'
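   Example of system response:

       obj-store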
   Substitute <managedClusterProject> with the project of the required managed cluster.

3. Open the KaaSCephCluster CR for editing.
4. Specify a custom upstream-vhost ingress rule annotation:

       spec:
         cephClusterSpec:
           ingress:
             customIngress:
               annotations:
                 nginx.ingress.kubernetes.io/upstream-vhost: <customPublicEndpoint>
   Substitute <customPublicEndpoint> with a public endpoint with the domain specified in spec.cephClusterSpec.ingress.publicDomain. For example, obj-store.mcc1.cluster1.example.com.

5. Verify the Ceph Object Storage host names:
   1. Enter the rook-ceph-tools pod:

          kubectl -n rook-ceph exec -it deployment/rook-ceph-tools -- bash
   2. Obtain the Ceph Object Storage default zone group configuration:

          radosgw-admin zonegroup get --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> | tee zonegroup.json
      Substitute <objectStorageName> with the Ceph Object Storage name from spec.cephClusterSpec.objectStorage.rgw.name.
   3. Verify that the hostnames key is a list that contains two endpoints: an internal endpoint and a custom public endpoint:

          "hostnames": ["rook-ceph-rgw-<objectStorageName>.rook-ceph.svc", "<customPublicEndpoint>"]
      Substitute <objectStorageName> with the Ceph Object Storage name and <customPublicEndpoint> with the public endpoint with a custom public domain.

   4. If one or both endpoints are omitted in the list, add the missing endpoints to the hostnames list in the zonegroup.json file and update the Ceph Object Storage zone group configuration:

          radosgw-admin zonegroup set --rgw-zonegroup=<objectStorageName> --rgw-zone=<objectStorageName> --infile zonegroup.json
          radosgw-admin period update --commit
   5. Verify that the hostnames list contains both the internal and custom public endpoints:

          radosgw-admin zonegroup get | jq -r ".hostnames"
Example of system response:
[ "rook-ceph-rgw-obj-store.rook-ceph.svc", "obj-store.mcc1.cluster1.example.com" ]
   6. Exit the rook-ceph-tools pod:

          exit
Once done, Ceph Object Storage becomes available at the custom public endpoint through an S3 API client, the OpenStack Swift CLI, and the OpenStack Horizon Containers plugin.
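For example, a minimal S3 smoke test with the AWS CLI; the endpoint URL and the CA bundle path are illustrative, and the access keys of an existing Ceph Object Storage user are assumed to be configured in the environment:

    # List buckets through the custom public endpoint using the trusted CA bundle
    aws --endpoint-url https://obj-store.mcc1.cluster1.example.com --ca-bundle ca.crt s3 ls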