Warm up the Container Cloud cache

TechPreview Available since 2.24.0 and 23.2 for MOSK clusters

This section describes how to speed up the deployment and update of managed clusters, which usually do not have Internet access and consume artifacts from the management cluster through the mcc-cache service.

By default, after an auto-upgrade of the management cluster, mcc-cache downloads the required set of images before each managed cluster deployment or update, which slows down these operations.

Using the CacheWarmupRequest resource, you can predownload (warm up) a list of images included in a given set of Cluster releases into the mcc-cache service only once per release for further usage on all managed clusters.

After a successful cache warm-up, the CacheWarmupRequest object is automatically deleted from the cluster, and the cache remains available for managed cluster deployment and update until the next Container Cloud auto-upgrade of the management cluster.

Caution

If the cache runs out of disk space, the oldest cached objects are evicted. To avoid running out of cache space, verify and adjust the cache size before each cache warm-up.

Requirements

Cache warm-up requires a significant amount of disk storage and may consume up to 100% of the available disk space. Therefore, make sure that each node of the management cluster has enough space for storing cached objects before you create the CacheWarmupRequest resource. The following list contains the minimal required cache sizes for the management cluster:

Minimal cache size:

• Non-MOSK Cluster release: 20 GiB

• MOSK Cluster release with one OpenStack version: 50 GiB

• MOSK Cluster release with an OpenStack version upgrade from Victoria to Yoga: 120 GiB
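
Before creating the CacheWarmupRequest resource, you can roughly check how much storage each management cluster node reports. The following command is a minimal sketch that lists the ephemeral-storage capacity reported by each node; the volume that actually backs the mcc-cache storage may differ depending on your deployment, so treat the output as an approximation only:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> get nodes \
  -o custom-columns=NAME:.metadata.name,STORAGE:.status.capacity.ephemeral-storage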

Increase cache size for ‘mcc-cache’

After you calculate the required disk size based on your cluster settings and the minimal cache warm-up requirements, configure the cache size in the Cluster object of the management cluster.

Open the management Cluster object for editing:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> edit cluster <clusterName>

In the spec:providerSpec:value:kaas:regionalHelmReleases: section, add the following snippet to the mcc-cache entry with the required size value in GiB:

nginx:
  cacheSize: 100

Configuration example:

spec:
  providerSpec:
    value:
      kaas:
        regionalHelmReleases:
        - name: mcc-cache
          values:
            nginx:
              cacheSize: 100

Note

The cacheSize parameter is set in GiB.
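
To verify that the new value is present in the Cluster object, you can query the field directly. The following command is a minimal sketch based on the field path shown in the example above; an empty result means that the cache size has not been customized yet:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> get cluster <clusterName> \
  -o jsonpath='{.spec.providerSpec.value.kaas.regionalHelmReleases[?(@.name=="mcc-cache")].values.nginx.cacheSize}'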

Warm up cache using CLI

After you increase the cache size on the cluster as described in Increase cache size for ‘mcc-cache’, create the CacheWarmupRequest object in the Kubernetes API.

Caution

For any cluster type, create CacheWarmupRequest objects only on the management cluster.

To warm up cache using CLI:

  1. Identify the latest available Cluster releases to use for deployment of new clusters and update of existing clusters:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> get kaasreleases -l=kaas.mirantis.com/active="true" -o=json | jq -r '.items[].spec.supportedClusterReleases[] | select(.availableUpgrades | length == 0) | .name'
    

    Example of system response:

    mke-14-0-1-3-6-5
    mosk-15-0-1
    
  2. On the management cluster, create a .yaml file for the CacheWarmupRequest object using the following example:

    apiVersion: kaas.mirantis.com/v1alpha1
    kind: CacheWarmupRequest
    metadata:
      name: example-cluster-name
      namespace: default
    spec:
      clusterReleases:
      - mke-14-0-1
      - mosk-15-0-1
      openstackReleases:
      - yoga
      fetchRequestTimeout: 30m
      clientsPerEndpoint: 2
      openstackOnly: false
    

    In this example:

    • The CacheWarmupRequest object is created for a management cluster named example-cluster-name.

    • The CacheWarmupRequest object is created in the default Container Cloud project, which is the only project allowed for this object.

    • Two Cluster releases mosk-15-0-1 and mke-14-0-1 will be predownloaded.

    • For mosk-15-0-1, only images related to the OpenStack version Yoga will be predownloaded.

    • The maximum timeout for a single request to download a single artifact is 30 minutes.

    • Two parallel workers will fetch artifacts for each mcc-cache service endpoint.

    • All artifacts will be fetched, not only those related to OpenStack.

    For details about the CacheWarmupRequest object, see CacheWarmupRequest resource.

  3. Apply the object to the cluster:

    kubectl --kubeconfig <pathToManagementClusterKubeconfig> apply -f <pathToFile>
    

    Once done, during deployment and update of managed clusters, Container Cloud uses cached artifacts from the mcc-cache service to facilitate and speed up the procedure.
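
The warm-up can take a significant amount of time depending on the number of artifacts to download. Because the CacheWarmupRequest object is automatically deleted after a successful warm-up, you can track progress by watching the object until it disappears. The following command is a minimal sketch that assumes the plural resource name cachewarmuprequests:

kubectl --kubeconfig <pathToManagementClusterKubeconfig> get cachewarmuprequests -n default -w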

When a new Container Cloud release becomes available and the management cluster auto-upgrades to it, repeat the above steps to predownload the new set of artifacts for managed clusters.
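
The CacheWarmupRequest example in the procedure above fetches all artifacts for the listed Cluster releases. If you only need to refresh OpenStack-related artifacts, you can limit the warm-up scope with the openstackOnly parameter. The following object definition is a hypothetical sketch that reuses the field semantics from the example above:

apiVersion: kaas.mirantis.com/v1alpha1
kind: CacheWarmupRequest
metadata:
  name: example-cluster-name
  namespace: default
spec:
  clusterReleases:
  - mosk-15-0-1
  openstackReleases:
  - yoga
  fetchRequestTimeout: 30m
  clientsPerEndpoint: 2
  # Fetch only OpenStack-related artifacts
  openstackOnly: true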