Volume configuration

The MOSK Block Storage service (OpenStack Cinder) uses Ceph as the default backend for Cinder Volume. MOSK also enables its clients to define their own volume backends through the OpenStackDeployment custom resource. This section provides the details required to properly configure a custom Cinder Volume backend as a StatefulSet or a DaemonSet.

Disabling the Ceph backend for Cinder Volume

MOSK stores the configuration for the default Ceph backend in the spec:features:cinder:volume structure in the OpenStackDeployment custom resource.

To disable the Ceph backend for Cinder Volume, modify the spec:features:cinder:volume structure as follows:

spec:
  features:
    cinder:
      volume:
        enabled: false
  services:
    block-storage:
      cinder:
        values:
          conf:
            DEFAULT:
              default_volume_type: <NEW-DEFAULT-VOLUME-TYPE-NAME>

When disabling the Ceph backend for Cinder Volume, you must explicitly specify a new value for the default_volume_type parameter. Refer to the sections below to learn how to configure it.

Considerations for configuring a custom Cinder Volume backend

Before you start deploying your custom Cinder Volume backend, decide on key backend parameters and understand how they affect other services.

Note

Make sure to navigate to the documentation for the specific OpenStack version used to deploy your environment when referring to the official OpenStack documentation.

In addition, you may need to build your own Cinder image as described in Customize OpenStack container images.

Next, review the following key considerations. An illustrative configuration snippet that combines several of these options follows the list:

StatefulSet or DaemonSet

If your Cinder Volume backend must run on all nodes with a specific label and scale automatically as nodes are added or removed, use a DaemonSet. Such a backend typically requires that its data remain on the same node where its pod runs. A common example is the LVM backend.

Otherwise, Mirantis recommends using a StatefulSet, which offers more flexibility than a DaemonSet.

Support for Active/Active High Availability

If the driver does not support Active/Active High Availability, ensure that only a single copy of the backend runs and that the cluster parameter is left empty in the cinder.conf file for this backend.

When deploying the backend using a StatefulSet, set pod.replicas.volume to 1 for this backend configuration. Additionally, enable hostNetwork to ensure that the service endpoint’s IP address remains stable when the backend pod restarts.

Support for Multi-Attach

If the driver supports Multi-Attach, it allows multiple connections to the same volume. This capability is important for certain services, such as Glance. If the driver does not support Multi-Attach, the backend cannot be used for services that require this functionality.

Support for iSCSI and access to the /run directory

Some drivers require access to the /run directory on the host system for storing their PID or lock files. Additionally, they may need access to iSCSI and multipath services on the host. To enable this capability, set the conf:enable_iscsi parameter to true. In some cases, you might also need to run the backend container as privileged.

Privileged access for the container

For security reasons, Mirantis recommends running the Cinder Volume backend container with the minimum required privileges. However, if the driver requires privileged access, you can enable it for the StatefulSet by setting the pod:security_context:cinder_volume:container:cinder_volume:privileged parameter to true.

Access to the host network namespace

If the driver requires access to the host network namespace, or if you need to ensure that the Cinder Volume backend’s IP address remains unchanged after pod recreation or restart, set hostNetwork to true using the following parameters:

  • For a DaemonSet, use pod:useHostNetwork:volume_daemonset. This parameter is set to true by default.

  • For a StatefulSet, use pod:useHostNetwork:volume. Mirantis recommends avoiding hostNetwork with StatefulSets: because StatefulSet pods are not tied to a specific node, multiple pods can end up on the same node, which may cause issues.

Access to the host IPC namespace

If the driver requires access to the host’s IPC namespace, set hostIPC to true using the following parameters:

  • For a DaemonSet, use pod:useHostIPC:volume_daemonset. This parameter is set to true by default.

  • For a StatefulSet, use pod:useHostIPC:volume.

Access to the host PID namespace

If the driver requires access to the host’s PID namespace, set hostPID to true using the following parameters:

  • For a DaemonSet, use pod:useHostPID:volume_daemonset.

  • For a StatefulSet, use pod:useHostPID:volume.
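
The following snippet is an illustrative sketch of how several of these options can be combined for a hypothetical StatefulSet backend named example-backend. Which of these settings you actually need depends entirely on the driver; treat the snippet as a starting point and verify each parameter against the Cinder chart values used in your environment:

spec:
  features:
    cinder:
      volume:
        backends:
          example-backend:           # hypothetical backend name, for illustration only
            type: statefulset
            values:
              conf:
                enable_iscsi: true   # grant access to host iSCSI, multipath, and the /run directory
              pod:
                replicas:
                  volume: 1          # single replica for a driver without Active/Active HA support
                useHostNetwork:
                  volume: true       # keep the backend IP address stable across pod restarts
                security_context:
                  cinder_volume:
                    container:
                      cinder_volume:
                        privileged: true   # only if the driver requires privileged access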

Configuring a custom StatefulSet backend

Available since MOSK 24.3 TechPreview

MOSK enables its clients to define volume backends as a StatefulSet.

To configure a custom StatefulSet backend for the MOSK Block Storage service (OpenStack Cinder), use the spec:features:cinder:volume:backends structure in the OpenStackDeployment custom resource:

spec:
  features:
    cinder:
      volume:
        backends:
          <UNIQUE_BACKEND_NAME>:
            enabled: true
            type: statefulset
            create_volume_type: true
            values:
              conf:
              images:
              labels:
              pod:

The enabled and create_volume_type parameters are optional. With create_volume_type set to true (default), the new backend will be added to the Cinder bootstrap job. Once this job is completed, the volume type for the custom backend will be created in OpenStack.

The supported value for type is statefulset.

The list of keys you can override in the values.yaml file of the Cinder chart includes conf, images, labels, and pod.

When you define the custom backend for the Block Storage service, MOSK deploys individual pods for it. These pods have separate Secrets for configuration files and ConfigMaps for scripts.

Example configuration of a custom StatefulSet backend for Cinder:

This configuration example deploys a StatefulSet for the Cinder Volume backend that uses the NFS driver, running a single replica on a node labeled kubernetes.io/hostname=service-node. Whether the Cinder Volume pod requires privilege escalation is driver-specific; in this example, it is enabled.

spec:
  features:
    cinder:
      volume:
        enabled: false
        backends:
          nfs-volume:
            type: statefulset
            values:
              conf:
                cinder:
                  DEFAULT:
                    cluster: ""
                    enabled_backends: volumes-nfs
                  volumes-nfs:
                    nas_host: 1.2.3.4
                    nas_share_path: /cinder_volume
                    nas_secure_file_operations: false
                    nfs_mount_point_base: /tmp/mountpoints
                    nfs_snapshot_support: true
                    volume_backend_name: volumes-nfs
                    volume_driver: cinder.volume.drivers.nfs.NfsDriver
              pod:
                replicas:
                  volume: 1
                security_context:
                  cinder_volume:
                    container:
                      cinder_volume:
                        privileged: true
              labels:
                volume:
                  node_selector_key: kubernetes.io/hostname
                  node_selector_value: service-node
  services:
    block-storage:
      cinder:
        values:
          conf:
            DEFAULT:
              default_volume_type: volumes-nfs

Configuring a custom DaemonSet backend

TechPreview

MOSK enables its clients to define volume backends as a DaemonSet, the LVM backend in particular.

To configure a custom DaemonSet backend for the MOSK Block Storage service (OpenStack Cinder), use the spec:nodes structure in the OpenStackDeployment custom resource:

spec:
  nodes:
    <NODE-LABEL>::<NODE-LABEL-VALUE>:
      features:
        cinder:
          volume:
            backends:
              <BACKEND-NAME>:
                lvm:
                  <CINDER-LVM-DRIVER-PARAMETERS>

Example configuration of a custom DaemonSet backend for Cinder:

The configuration example deploys a DaemonSet for the Cinder volume backend that uses the LVM driver and runs on nodes with the openstack-compute-node=enabled label:

Caution

For data storage, this backend uses the cinder-vol LVM volume group, which must be present on the nodes before the new backend is applied. For the procedure on how to deploy an LVM backend, refer to Enable LVM block storage.

spec:
  features:
    cinder:
      volume:
        enabled: false
  nodes:
    openstack-compute-node::enabled:
      features:
        cinder:
          volume:
            backends:
              volumes-lvm:
                lvm:
                  volume_group: "cinder-vol"
  services:
    block-storage:
      cinder:
        values:
          conf:
            DEFAULT:
              default_volume_type: volumes-lvm

Disabling stale volume services cleaning

MOSK provides the cinder-service-cleaner CronJob by default. This CronJob periodically checks whether all Cinder services in OpenStack are up to date and removes any stale ones.

This CronJob is tested only with backends supported by MOSK. If cinder-service-cleaner does not work properly with your custom Cinder Volume backend, you can disable it through the services section of the OpenStackDeployment custom resource:

spec:
  services:
    block-storage:
      cinder:
        values:
          manifests:
            cron_service_cleaner: false
