Deploy an MSR cache with Kubernetes¶
Note
The deployment procedure detailed herein assumes that you already have an MSR deployment up and running.
Deploying the MSR cache as a Kubernetes deployment ensures that Kubernetes automatically handles scheduling and restarting the service in the event that something goes wrong.
You will manage the cache configuration with a Kubernetes ConfigMap and the TLS certificates with Kubernetes secrets. This setup allows you to manage the configuration securely and independently of the Node on which the cache is actually running.
Prepare the cache deployment¶
At the end of the cache preparation phase you should have the following file structure on your workstation:
├── msrcache.yaml # The YAML file used to deploy the cache with a single command
├── config.yaml # The cache configuration file
└── certs
    ├── cache.cert.pem # The cache public key certificate, including any intermediaries
    ├── cache.key.pem # The cache private key
    └── msr.cert.pem # The MSR CA certificate
Create the MSR cache certificates¶
You can deploy the MSR cache with a TLS endpoint, for which you must generate a TLS certificate and key from a certificate authority.
The manner in which you expose the MSR cache determines the SANs (subject alternative names) that are required for the certificate. For example:
To deploy the MSR cache with an ingress object you must use an external MSR cache address that resolves to your ingress controller as part of your certificate.
To expose the MSR cache through a Kubernetes Cloud Provider, you must have the external Loadbalancer address as part of your certificate.
To expose the MSR cache through a Node port or a host port you must use a Node FQDN (Fully Qualified Domain Name) as a SAN in your certificate.
Create the MSR cache certificates:
Create a cache certificate:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com" -m PEM
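The ssh-keygen command above generates only an RSA key pair in PEM format; the certificate itself, containing the required SAN, must still be issued by your certificate authority. As a minimal sketch for testing purposes only, assuming a self-signed certificate and the hypothetical <external-msr-cache-fqdn> placeholder, you could instead produce both files with openssl:
# Testing-only sketch: issue a self-signed certificate whose SAN matches how
# you plan to expose the cache (ingress, load balancer, or worker node FQDN).
# Production deployments should use a CA-issued certificate instead.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout cache.key.pem -out cache.cert.pem \
  -subj "/CN=<external-msr-cache-fqdn>" \
  -addext "subjectAltName=DNS:<external-msr-cache-fqdn>"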
On your workstation, create a directory called certs.
In the certs directory, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache.
Place the certificate authority in the directory, including any intermediate certificate authorities of the certificate from your MSR deployment. If your MSR deployment uses cert-manager, you can source this from the main MSR deployment using kubectl:
kubectl get secret msr-nginx-ca-cert -o go-template='{{ index .data "ca.crt" | base64decode }}' > certs/msr.cert.pem
Note
If cert-manager is not in use, you must instead provide your custom nginx.webtls certificate.
Create the MSR Config¶
The MSR cache takes its configuration from a configuration file that is mounted into the container.
Below is an example configuration file for the MSR cache. This YAML file should be customized for your environment with the relevant external MSR cache, worker node, or external loadbalancer FQDN. With this configuration, the cache fetches image layers from MSR and keeps a local copy for 24 hours. After that, if a user requests that image layer, the cache fetches it again from MSR.
The cache, by default, is configured to store image data inside its container. Thus, if something goes wrong with the cache service and Kubernetes deploys a new Pod, cached data is not persisted. No data is lost, however, as it remains stored in the primary MSR. If you want the cached images to be backed by persistent storage, you can customize the storage parameters.
Note
Kubernetes persistent volumes or persistent volume claims must be in use to provide persistent back end storage capabilities for the cache.
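The following is a minimal sketch of such a claim, assuming a hypothetical msr-cache-storage name, a default StorageClass in your cluster, and a 10Gi size; if you use it, the deployment defined later also needs a corresponding volume mounted at /var/lib/registry.
cat > msrcachepvc.yaml <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: msr-cache-storage   # hypothetical name, not part of the standard cache deployment
  namespace: msr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi          # assumed size; adjust for your image volume
EOF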
cat > config.yaml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-msrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the Main MSR Deployment
        cas:
          - /certs/msr.cert.pem
EOF
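The secret value above is a placeholder. As an assumption about how you might fill it in, any sufficiently random string works; for example, you could generate one with openssl and substitute it into config.yaml:
# Generate a random value for the http.secret field
openssl rand -hex 16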
Define Kubernetes resources¶
The Kubernetes manifest file you use to deploy the MSR cache is independent of how you choose to expose the MSR cache within your environment. The following example should work on any Kubernetes cluster running version 1.8 or later.
cat > msrcache.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msr-cache
  namespace: msr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: msr-cache
  template:
    metadata:
      labels:
        app: msr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: msr-cache
          image: registry.mirantis.com/msr/msr-content-cache:3.0.6
          command: ["bin/sh"]
          args:
            - start.sh
            - /config/config.yaml
          ports:
            - name: https
              containerPort: 443
          volumeMounts:
            - name: msr-certs
              readOnly: true
              mountPath: /certs/
            - name: msr-cache-config
              readOnly: true
              mountPath: /config
      volumes:
        - name: msr-certs
          secret:
            secretName: msr-certs
        - name: msr-cache-config
          configMap:
            defaultMode: 0666
            name: msr-cache-config
EOF
Create Kubernetes resources¶
At this point you should have the following file structure on your workstation:
├── msrcache.yaml # The YAML file used to deploy the cache with a single command
├── config.yaml # The cache configuration file
└── certs
    ├── cache.cert.pem # The cache public key certificate
    ├── cache.key.pem # The cache private key
    └── msr.cert.pem # The MSR CA certificate
In addition, you must have the kubectl command line tool configured to communicate with your Kubernetes cluster, either through a Kubernetes config file or an MKE client bundle.
Create a Kubernetes namespace to logically separate all of the MSR cache components.
kubectl create namespace msr
Create the Kubernetes Secrets that contain the MSR cache TLS certificates, and a Kubernetes ConfigMap that contains the MSR cache configuration file.
kubectl -n msr create secret generic msr-certs \
  --from-file=certs/msr.cert.pem \
  --from-file=certs/cache.cert.pem \
  --from-file=certs/cache.key.pem

kubectl -n msr create configmap msr-cache-config \
  --from-file=config.yaml
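If you want to confirm that both resources were created before deploying the cache, you can list them:
kubectl -n msr get secret msr-certs
kubectl -n msr get configmap msr-cache-config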
Create the Kubernetes deployment.
kubectl create -f msrcache.yaml
Confirm successful deployment by reviewing the running Pods in your cluster:
kubectl -n msr get pods
Optional. To troubleshoot your deployment, run one or both of the following commands:
kubectl -n msr describe pods <pods>
kubectl -n msr logs <pods>
Expose the MSR Cache¶
For external access to the MSR cache you must expose the cache Pods to the outside world. There are multiple ways for you to expose a service with Kubernetes, depending on your infrastructure and your environment. For more information, refer to the official Kubernetes documentation Publishing services - service types.
Note
You must expose the cache through the same interface for which you previously created the certificate; otherwise, the TLS certificate will not be valid for the alternative interface.
MSR Cache Exposure
Expose your MSR cache through only one external interface.
NodePort¶
In the NodePort scenario, a worker Node FQDN is added to the TLS certificate at the start, and you access the MSR cache through an exposed port on a worker node FQDN.
cat > msrcacheservice.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: msr-cache
  namespace: msr
spec:
  type: NodePort
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: msr-cache
EOF
kubectl create -f msrcacheservice.yaml
To determine the port on which the MSR cache is exposed, run:
kubectl -n msr get services
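Alternatively, assuming the msr-cache service name shown above, you can extract just the assigned NodePort with a jsonpath query:
kubectl -n msr get service msr-cache \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'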
You can test the external reachability of your MSR cache by using curl to hit the API endpoint, using both the external address of a worker node and the NodePort.
curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog
{"repositories":[]}
Ingress Controller¶
In the ingress controller scenario, you expose the MSR cache through an ingress object. Here, you must create a DNS rule in your environment that resolves an MSR cache external FQDN to the address of your ingress controller. In addition, you must specify the same MSR cache external FQDN within the MSR cache certificate created at the start.
Note
An ingress controller is a prerequisite for this example. If you have not deployed an ingress controller on your cluster, refer to Layer 7 Routing for MKE. In addition, the ingress controller must support SSL passthrough.
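Note that with the community ingress-nginx controller, for example, SSL passthrough is disabled by default and must be enabled on the controller itself; this applies only if that is the controller you use:
# ingress-nginx honors the ssl-passthrough annotation only when the controller
# is started with the following argument:
--enable-ssl-passthrough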
cat > msrcacheingress.yaml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: msr-cache
  namespace: msr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      http:
        paths:
          - pathType: Prefix
            path: "/cache"
            backend:
              service:
                name: msr-cache
                port:
                  number: 443
EOF
kubectl create -f msrcacheingress.yaml
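Before testing, you may want to verify that the DNS rule mentioned above is in place. As an example, assuming the community ingress-nginx controller installed in the ingress-nginx namespace (names vary by installation):
# Find the external address of your ingress controller
kubectl -n ingress-nginx get service ingress-nginx-controller
# Confirm that the cache FQDN resolves to that address
nslookup <external-msr-cache-fqdn>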
You can test the external reachability of your MSR cache by using curl to hit the API endpoint. The address should be the one you previously defined in the ingress definition file.
curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog
{"repositories":[]}