Deploy an MSR cache with Kubernetes¶
This example guides you through deploying an MSR cache, assuming that you already have an MSR deployment up and running.
The MSR cache is going to be deployed as a Kubernetes Deployment, so that Kubernetes automatically takes care of scheduling and restarting the service if something goes wrong.
We’ll manage the cache configuration using a Kubernetes ConfigMap, and the TLS certificates using Kubernetes Secrets. This allows you to manage the configuration securely and independently of the node where the cache is actually running.
Prepare the cache deployment¶
At the end of this exercise you should have the following file structure on your workstation:
├── dtrcache.yaml    # YAML file to deploy the cache with a single command
├── config.yaml      # The cache configuration file
└── certs
    ├── cache.cert.pem    # The cache public key certificate, including any intermediaries
    ├── cache.key.pem     # The cache private key
    └── dtr.cert.pem      # MSR CA certificate
Create the MSR Cache certificates¶
The MSR cache will be deployed with a TLS endpoint. For this you will need to generate a TLS certificate and key from a certificate authority. The way you expose the MSR cache determines the SANs required for this certificate.
For example:
- If you are deploying the MSR cache with an Ingress object, you will need to include an external MSR cache address, which resolves to your ingress controller, as part of your certificate.
- If you are exposing the MSR cache through a Kubernetes cloud provider load balancer, you will need to include the external load balancer address as part of your certificate.
- If you are exposing the MSR cache through a NodePort or a host port, you will need to use a node’s FQDN as a SAN in your certificate.
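If you do not yet have a suitable certificate, one way to issue it is with openssl. The sketch below uses a throwaway CA and the hypothetical hostname cache.example.com; substitute your organization's CA and the real external FQDN that matches how you plan to expose the cache.

```shell
# Sketch only: throwaway CA and the hypothetical hostname cache.example.com.
mkdir -p certs

# Create a throwaway certificate authority (for testing only).
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
  -subj "/CN=example-test-ca" \
  -keyout certs/ca.key.pem -out certs/ca.cert.pem

# Create the cache key and a CSR that carries the required SAN.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=cache.example.com" \
  -addext "subjectAltName=DNS:cache.example.com" \
  -keyout certs/cache.key.pem -out certs/cache.csr.pem

# Sign the CSR, re-stating the SAN via an extensions file.
printf 'subjectAltName=DNS:cache.example.com\n' > certs/san.ext
openssl x509 -req -in certs/cache.csr.pem \
  -CA certs/ca.cert.pem -CAkey certs/ca.key.pem -CAcreateserial \
  -days 365 -extfile certs/san.ext -out certs/cache.cert.pem
```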
On your workstation, create a directory called certs. Within it, place the newly created certificate cache.cert.pem and key cache.key.pem for your MSR cache. Also place the certificate authority (including any intermediate certificate authorities) of the certificate from your MSR deployment. This can be sourced from the main MSR deployment using curl:
$ curl -s https://<dtr-fqdn>/ca -o certs/dtr.cert.pem
Create the MSR Cache Config¶
The MSR cache will take its configuration from a file mounted into the container. Below is an example configuration file for the MSR cache. This YAML should be customized for your environment with the relevant external MSR cache, worker node, or external load balancer FQDN.
With this configuration, the cache fetches image layers from MSR and keeps a local copy for 24 hours. After that, if a user requests that image layer, the cache fetches it again from MSR.
By default, the cache is configured to store image data inside its container. Therefore, if something goes wrong with the cache service and Kubernetes deploys a new pod, cached data is not persisted. Data will not be lost, as it is still stored in the primary MSR. You can customize the storage parameters if you want the cached images to be backed by persistent storage.
Note
Kubernetes Persistent Volumes or Persistent Volume Claims would have to be used to provide persistent backend storage capabilities for the cache.
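As an illustrative sketch only, a PersistentVolumeClaim for the cache might look like the following; the claim name dtr-cache-storage and the requested size are assumptions, not part of the standard deployment. You would then add a corresponding volume to the Deployment and mount it at /var/lib/registry.

```yaml
# Hypothetical PVC for cache storage; the name and size are examples only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dtr-cache-storage
  namespace: dtr
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```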
cat > config.yaml <<EOF
version: 0.1
log:
  level: info
storage:
  delete:
    enabled: true
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: 0.0.0.0:443
  secret: generate-random-secret
  host: https://<external-fqdn-dtrcache> # Could be MSR Cache / Loadbalancer / Worker Node external FQDN
  tls:
    certificate: /certs/cache.cert.pem
    key: /certs/cache.key.pem
middleware:
  registry:
    - name: downstream
      options:
        blobttl: 24h
        upstreams:
          - https://<msr-url> # URL of the Main MSR Deployment
        cas:
          - /certs/dtr.cert.pem
EOF
Define Kubernetes Resources¶
The Kubernetes manifest file to deploy the MSR cache is independent of how you choose to expose the MSR cache within your environment. The example below has been tested to work on Universal Control Plane 3.1; however, it should work on any Kubernetes cluster running version 1.8 or higher.
cat > dtrcache.yaml <<EOF
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dtr-cache
  template:
    metadata:
      labels:
        app: dtr-cache
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      containers:
        - name: dtr-cache
          image: mirantis/dtr-content-cache:2.8.13
          command: ["bin/sh"]
          args:
            - start.sh
            - /config/config.yaml
          ports:
            - name: https
              containerPort: 443
          volumeMounts:
            - name: dtr-certs
              readOnly: true
              mountPath: /certs/
            - name: dtr-cache-config
              readOnly: true
              mountPath: /config
      volumes:
        - name: dtr-certs
          secret:
            secretName: dtr-certs
        - name: dtr-cache-config
          configMap:
            defaultMode: 0666
            name: dtr-cache-config
EOF
Create Kubernetes Resources¶
At this point you should have a file structure on your workstation which looks like this:
├── dtrcache.yaml    # YAML file to deploy the cache with a single command
├── config.yaml      # The cache configuration file
└── certs
    ├── cache.cert.pem    # The cache public key certificate
    ├── cache.key.pem     # The cache private key
    └── dtr.cert.pem      # MSR CA certificate
You will also need the kubectl command line tool configured to talk to your Kubernetes cluster, either through a Kubernetes config file or a Mirantis Kubernetes Engine client bundle.
First we will create a Kubernetes namespace to logically separate all of our MSR cache components.
$ kubectl create namespace dtr
Create the Kubernetes Secrets, containing the MSR cache TLS certificates, and a Kubernetes ConfigMap containing the MSR cache configuration file.
$ kubectl -n dtr create secret generic dtr-certs \
--from-file=certs/dtr.cert.pem \
--from-file=certs/cache.cert.pem \
--from-file=certs/cache.key.pem
$ kubectl -n dtr create configmap dtr-cache-config \
--from-file=config.yaml
Finally create the Kubernetes Deployment.
$ kubectl create -f dtrcache.yaml
You can check whether the deployment has been successful by checking the running pods in your cluster: kubectl -n dtr get pods
If you need to troubleshoot your deployment, you can use kubectl -n dtr describe pods <pods> and/or kubectl -n dtr logs <pods>.
Exposing the MSR Cache¶
For external access to the MSR cache, we need to expose the cache pods to the outside world. In Kubernetes there are multiple ways to expose a service, depending on your infrastructure and your environment. For more information, see Publishing services - service types in the Kubernetes docs. It is important, though, to be consistent in exposing the cache through the same interface for which you previously created a certificate. Otherwise the TLS certificate may not be valid through this alternative interface.
MSR Cache Exposure
You only need to expose your MSR cache through one external interface.
NodePort¶
The first example exposes the MSR cache through a NodePort service. In this example, you would have added a worker node’s FQDN to the TLS certificate in step 1. Here you will access the MSR cache through an exposed port on a worker node’s FQDN.
cat > dtrcacheservice.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  type: NodePort
  ports:
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: dtr-cache
EOF
kubectl create -f dtrcacheservice.yaml
To find out which port the MSR cache has been exposed on, run:
$ kubectl -n dtr get services
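If you prefer to script this step, the assigned port can also be read directly with a JSONPath query (assuming the dtr-cache service from the manifest above exists in a live cluster):

```shell
# Read the NodePort assigned to the https port of the dtr-cache service.
NODE_PORT=$(kubectl -n dtr get service dtr-cache \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
echo "MSR cache is exposed on port ${NODE_PORT}"
```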
You can test that your MSR cache is externally reachable by using curl to hit the API endpoint, using both a worker node’s external address and the NodePort.
curl -X GET https://<workernodefqdn>:<nodeport>/v2/_catalog
{"repositories":[]}
Ingress Controller¶
This second example will expose the MSR cache through an ingress object. In this example, you will need to create a DNS rule in your environment that resolves an MSR cache external FQDN to the address of your ingress controller. You should also have specified the same MSR cache external FQDN within the MSR cache certificate in step 1.
Note
An ingress controller is a prerequisite for this example. If you have not deployed an ingress controller on your cluster, refer to Layer 7 Routing for MKE. This ingress controller will also need to support SSL passthrough.
cat > dtrcacheservice.yaml <<EOF
kind: Service
apiVersion: v1
metadata:
  name: dtr-cache
  namespace: dtr
spec:
  selector:
    app: dtr-cache
  ports:
    - protocol: TCP
      port: 443
      targetPort: 443
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dtr-cache
  namespace: dtr
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  tls:
    - hosts:
        - <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
  rules:
    - host: <external-msr-cache-fqdn> # Replace this value with your external MSR Cache address
      http:
        paths:
          - backend:
              serviceName: dtr-cache
              servicePort: 443
EOF
kubectl create -f dtrcacheservice.yaml
You can test that your MSR cache is externally reachable by using curl to hit the API endpoint. The address should be the one you defined above in the service definition file.
curl -X GET https://<external-msr-cache-fqdn>/v2/_catalog
{"repositories":[]}