Cephless cloud

Persistent storage is a key component of any MOSK deployment. Out of the box, MOSK includes an open-source software-defined storage solution (Ceph), which hosts various kinds of cloud application data, such as root and ephemeral disks for virtual machines, virtual machine images, attachable virtual block storage, and object data. In addition, a Ceph cluster usually acts as storage for the internal MOSK components, such as Kubernetes, OpenStack, and StackLight.

Being distributed and redundant by design, Ceph requires a certain minimum number of servers, also known as OSD or storage nodes, to operate. A production-grade Ceph cluster typically consists of at least nine storage nodes, while a development or test environment may include four to six servers. For details, refer to MOSK cluster hardware requirements.

You can reduce the overall footprint of a MOSK cluster by collocating the Ceph components with hypervisors on the same physical servers, a design also known as hyper-converged. However, even this architecture may not satisfy the requirements of certain use cases for the cloud.

Standalone telco-edge MOSK clouds typically consist of three to seven servers hosted in a single rack, where every bit of CPU, memory, and disk capacity is strictly accounted for and is better dedicated to cloud workloads than to the control plane. For such clouds, where the cluster footprint matters more than the resiliency of the application data storage, it makes sense either to not deploy a Ceph cluster at all or to replace it with a simple non-redundant solution.

Enterprise virtualization infrastructure with third-party storage is a common strategy among large companies that rely on proprietary storage appliances provided by NetApp, Dell, HPE, Pure Storage, and other major players in the data storage sector. These vendors offer a variety of storage solutions meticulously designed to suit diverse enterprise demands. Many companies, having already invested substantially in proprietary storage infrastructure, prefer integrating MOSK with their existing storage systems. This approach allows them to leverage that investment rather than incur the costs and logistical complexity of migrating to Ceph.

Architecture

[Diagram: Cephless architecture]

The list below describes, for each kind of cloud data, the MOSK component that owns it, where the data resides in the Cephless architecture, and the related configuration procedure.

Root and ephemeral disks of instances

MOSK component: Compute service (OpenStack Nova)

Data storage in Cephless architecture:

  • Compute node local file system (QCOW2 images).

  • Compute node local storage devices (LVM volumes).

    You can select the QCOW2 or LVM backend per compute node.

  • Volumes, through the “boot from volume” feature of the Compute service.

    As a cloud user, you can select the Boot from volume option when spinning up a new instance, for example, as shown in the sketch after this list.
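For illustration, here is a minimal openstacksdk sketch of booting an instance from a volume. The cloud name, image, flavor, and network below are placeholders; the boot_from_volume and volume_size parameters of the cloud-layer create_server call instruct the SDK to build the root volume from the image.

    import openstack

    # Connect using a clouds.yaml entry; "mosk" is a placeholder cloud name.
    conn = openstack.connect(cloud="mosk")

    # Boot from volume: the SDK creates a bootable volume of the requested
    # size from the image instead of using the hypervisor local disk.
    server = conn.create_server(
        name="demo-bfv",
        image="ubuntu-22.04",   # placeholder image name
        flavor="m1.small",      # placeholder flavor name
        network="demo-net",     # placeholder network name
        boot_from_volume=True,
        volume_size=20,         # root volume size in GiB
        wait=True,
    )
    print(server.id, server.status)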

Volumes

MOSK component: Block Storage service (OpenStack Cinder)

Data storage in Cephless architecture:

  • MOSK standard LVM+iSCSI backend for the Block Storage service. This aligns seamlessly with the hyper-converged design, where the LVM volumes are collocated with workloads on the compute nodes.

  • Third-party storage.

Configuration: Enable LVM block storage
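As an illustration of the cloud user experience with the LVM backend, below is a minimal openstacksdk sketch that creates a volume. The "lvm" volume type is a hypothetical name that the cloud operator would have mapped to the LVM+iSCSI backend; your deployment may use a different type name.

    import openstack

    conn = openstack.connect(cloud="mosk")  # placeholder cloud name

    # Create a 10 GiB volume; "lvm" is a hypothetical volume type that
    # the operator has mapped to the LVM+iSCSI backend.
    volume = conn.block_storage.create_volume(
        name="demo-vol",
        size=10,
        volume_type="lvm",
    )
    conn.block_storage.wait_for_status(volume, status="available")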

Volume backups

MOSK component: Block Storage service (OpenStack Cinder)

Data storage in Cephless architecture:

  • External NFS share (TechPreview)

  • External S3 endpoint (TechPreview)

Alternatively, you can disable the volume backup functionality altogether.

Configuration: Backup configuration
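The following minimal openstacksdk sketch triggers a volume backup; the volume ID is a placeholder. Note that the backup destination (NFS share or S3 endpoint) is determined by the cloud-side backup driver configuration, not by the API call itself.

    import openstack

    conn = openstack.connect(cloud="mosk")  # placeholder cloud name

    # Create a backup of an existing volume; the ID is a placeholder.
    backup = conn.block_storage.create_backup(
        volume_id="11111111-2222-3333-4444-555555555555",
        name="demo-vol-backup",
    )
    conn.block_storage.wait_for_status(backup, status="available")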

Tungsten Fabric database backups

MOSK component: Tungsten Fabric (Cassandra, ZooKeeper)

Data storage in Cephless architecture: External NFS share (TechPreview)

Alternatively, you can disable the Tungsten Fabric database backup functionality.

Configuration: Tungsten Fabric database

OpenStack database backups

MOSK component: OpenStack (MariaDB)

Data storage in Cephless architecture:

  • External NFS share (TechPreview)

  • External S3-compatible storage (TechPreview)

  • Local file system of one of the MOSK controller nodes. By default, database backups are stored on the local file system of the node where the MariaDB service is running. This poses a risk to cloud security and resiliency, which is why, for enterprise environments, it is a common requirement to store all backup data externally.

Alternatively, you can disable the database backup functionality.
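If you need to inspect how database backups are scheduled on the underlying Kubernetes cluster, a sketch along the following lines can help. The "openstack" namespace and the presence of a backup CronJob are assumptions about the deployment and may differ in your cluster.

    from kubernetes import client, config

    # Load kubeconfig for the MOSK underlying Kubernetes cluster.
    config.load_kube_config()
    batch = client.BatchV1Api()

    # List CronJobs in the assumed "openstack" namespace and print any
    # backup-related schedules and their suspend state.
    for cj in batch.list_namespaced_cron_job("openstack").items:
        if "backup" in cj.metadata.name:
            print(cj.metadata.name, cj.spec.schedule, cj.spec.suspend)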

Results of functional testing

MOSK component: OpenStack Tempest

Data storage in Cephless architecture: Local file system of MOSK controller nodes.

The openstack-tempest-run-tests job, which runs the Tempest suite, stores the results of its execution in a volume requested through the pvc-tempest PersistentVolumeClaim (PVC). The volume can be created by the local volume provisioner on the same Kubernetes worker node where the job runs, usually a MOSK controller node.

Configuration: Run Tempest tests
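To locate the node that actually holds the Tempest results, you can inspect the PVC and its bound PersistentVolume with the Kubernetes Python client. The "openstack" namespace below is an assumption and may differ in your deployment.

    from kubernetes import client, config

    config.load_kube_config()
    core = client.CoreV1Api()

    # Look up the PVC named in the documentation and its bound PV;
    # the "openstack" namespace is an assumption.
    pvc = core.read_namespaced_persistent_volume_claim("pvc-tempest", "openstack")
    pv = core.read_persistent_volume(pvc.spec.volume_name)

    # For a local volume, the path and node affinity reveal which worker
    # node (usually a MOSK controller node) holds the results on disk.
    print(pv.spec.local.path if pv.spec.local else None)
    print(pv.spec.node_affinity)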

Instance images and snapshots

MOSK component: Image service (OpenStack Glance)

Data storage in Cephless architecture: You can configure the Block Storage service (OpenStack Cinder) as the storage backend for images and snapshots. In this case, each image is represented as a volume.

Important

Representing images as volumes imposes a hard requirement on the selected block storage backend: it must support the multi-attach capability, that is, concurrent reads and writes to and from a single volume.

Configuration: Enable Cinder backend for Glance
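As an admin-side illustration of the multi-attach requirement, the sketch below creates a volume type that carries the standard Cinder multiattach extra spec. The type name and cloud entry are placeholders; whether multi-attach actually works still depends on the backend driver.

    import openstack

    conn = openstack.connect(cloud="mosk-admin")  # placeholder admin cloud

    # Create a volume type advertising multi-attach; "<is> True" is the
    # standard Cinder extra-spec value for this capability.
    vt = conn.block_storage.create_type(
        name="glance-multiattach",  # hypothetical type name
        extra_specs={"multiattach": "<is> True"},
    )
    print(vt.id, vt.extra_specs)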

Application object data

MOSK component: Object Storage service (Ceph RADOS Gateway)

Data storage in Cephless architecture: External S3, Swift, or any other third-party storage solution compatible with object access protocols.

Note

An external object storage solution will not be integrated into the MOSK Identity service (OpenStack Keystone); cloud applications will need to manage access to their object data themselves.

If no Ceph is deployed as part of the cluster, the built-in MOSK Object Storage service API endpoints are disabled automatically.
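Because access is managed outside of Keystone, applications talk to the external endpoint directly, for example with boto3 against an S3-compatible endpoint. The endpoint URL, credentials, and bucket below are placeholders.

    import boto3

    # Direct access to an external S3-compatible endpoint; credentials
    # are issued by the storage solution itself, not by Keystone.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example.com",   # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",          # placeholder credentials
        aws_secret_access_key="SECRET_KEY",
    )
    s3.put_object(Bucket="app-data", Key="hello.txt", Body=b"hello")
    print(s3.get_object(Bucket="app-data", Key="hello.txt")["Body"].read())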

Logs, metrics, alerts

MOSK component: Mirantis StackLight (Prometheus, Alertmanager, Patroni, OpenSearch)

Data storage in Cephless architecture: Local file system of MOSK controller nodes.

StackLight must be deployed in HA mode, in which all of its data is stored on the local file systems of the nodes running StackLight services. In this mode, StackLight components are configured to handle data replication themselves.

Configuration: StackLight deployment architecture

Limitations

  • Decide whether a MOSK cloud will include Ceph during its planning and design phase. Once the deployment is complete, it is impossible to reconfigure the cloud to switch between the Ceph and Cephless architectures.

  • For production environments, Mirantis recommends against substituting Ceph-backed persistent volumes in the MOSK underlying Kubernetes cluster with local volumes (local volume provisioner). MOSK does not support such a configuration unless the components that rely on these volumes can replicate their data themselves, for example, StackLight. Volumes provided by the local volume provisioner are not redundant: they are bound to a single node and can only be mounted from Kubernetes pods running on that same node.