Cephless cloud
Persistent storage is a key component of any MOSK deployment. Out of the box, MOSK includes an open-source software-defined storage solution (Ceph), which hosts various kinds of cloud application data, such as root and ephemeral disks of virtual machines, virtual machine images, attachable virtual block storage, and object data. In addition, a Ceph cluster usually acts as storage for internal MOSK components, such as Kubernetes, OpenStack, and StackLight.
Being distributed and redundant by design, Ceph requires a certain minimum number of servers, known as OSD or storage nodes, to operate. A production-grade Ceph cluster typically consists of at least nine storage nodes, while a development or test environment may include four to six servers. For details, refer to MOSK cluster hardware requirements.
It is possible to reduce the overall footprint of a MOSK cluster by collocating the Ceph components with hypervisors on the same physical servers, a design also known as hyper-converged. However, even this architecture may not satisfy the requirements of certain cloud use cases.
Standalone telco-edge MOSK clouds typically consist of three to seven servers hosted in a single rack, where every bit of CPU, memory, and disk capacity is strictly accounted for and is better dedicated to cloud workloads than to the control plane. For such clouds, where the cluster footprint matters more than the resiliency of the application data storage, it makes sense either to have no Ceph cluster at all or to replace it with a simple non-redundant solution.
Enterprise virtualization infrastructure with third-party storage is not a rare strategy among large companies that rely on proprietary storage appliances provided by NetApp, Dell, HPE, Pure Storage, and other major players in the data storage sector. These industry leaders offer a variety of storage solutions meticulously designed to suit various enterprise demands. Many companies, having already invested substantially in proprietary storage infrastructure, prefer integrating MOSK with their existing storage systems. This approach allows them to leverage that investment rather than incur the new costs and logistical complexities associated with migrating to Ceph.
Architecture
| Kind of data | MOSK component | Data storage in Cephless architecture | Configuration |
|---|---|---|---|
| Root and ephemeral disks of instances | Compute service (OpenStack Nova) | | |
| Volume backups | Block Storage service (OpenStack Cinder) | Alternatively, you can disable the volume backup functionality. | |
| Volumes | Block Storage service (OpenStack Cinder) | | |
| Tungsten Fabric database backups | Tungsten Fabric (Cassandra, ZooKeeper) | External NFS share (TechPreview). Alternatively, you can disable the Tungsten Fabric database backup functionality. | |
| OpenStack database backups | OpenStack (MariaDB) | Alternatively, you can disable the database backup functionality. | |
| Results of functional testing | OpenStack Tempest | Local file system of MOSK controller nodes. | |
| Instance images and snapshots | Image service (OpenStack Glance) | You can configure the Block Storage service (OpenStack Cinder) as the storage backend for images and snapshots; in this case, each image is represented as a volume. Important: Representing images as volumes imposes a hard requirement on the selected block storage backend to support the multi-attach capability, that is, concurrent reads and writes to and from a single volume. | |
| Application object data | Object storage service (Ceph RADOS Gateway) | External S3, Swift, or any other third-party storage solution compatible with object access protocols (see the sketch after this table). Note: An external object storage solution is not integrated with the MOSK Identity service (OpenStack Keystone); cloud applications need to manage access to their object data themselves. If no Ceph is deployed as part of a cluster, the MOSK built-in Object Storage service API endpoints are disabled automatically. | |
| Logs, metrics, alerts | Mirantis StackLight (Prometheus, Alertmanager, Patroni, OpenSearch) | Local file system of MOSK controller nodes. StackLight must be deployed in HA mode, in which all of its data is stored on the local file systems of the nodes running StackLight services. In this mode, StackLight components handle data replication themselves. | |
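Because an external object storage solution is not integrated with the MOSK Identity service, an application talks to it directly with credentials issued by the storage solution itself. The following minimal sketch illustrates this pattern with boto3 against an S3-compatible endpoint; the endpoint URL, bucket name, object key, and environment variable names are placeholders for illustration only, not values provided by MOSK.

```python
# Illustrative sketch: a cloud application accessing an external S3-compatible
# object store directly, since it is not integrated with Keystone.
# The endpoint, bucket, key, and credential variables are placeholders issued
# by the external storage solution, not by MOSK.
import os

import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example.com",  # external S3-compatible endpoint (placeholder)
    aws_access_key_id=os.environ["APP_S3_ACCESS_KEY"],
    aws_secret_access_key=os.environ["APP_S3_SECRET_KEY"],
)

# Upload a small object and read it back on behalf of the application.
s3.put_object(Bucket="app-data", Key="reports/example.json", Body=b'{"status": "ok"}')
obj = s3.get_object(Bucket="app-data", Key="reports/example.json")
print(obj["Body"].read())
```

The same approach applies to Swift or any other object protocol the external solution exposes; the key point is that access control lives entirely in the external storage system.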
Limitations
Whether a MOSK cloud will include Ceph must be determined during its planning and design phase. Once the deployment is complete, it is impossible to reconfigure the cloud to switch between the Ceph and Cephless architectures.
Mirantis recommends against substituting Ceph-backed persistent volumes in the MOSK underlying Kubernetes cluster with local volumes (local volume provisioner) in production environments. MOSK does not support such a configuration unless the components that rely on these volumes can replicate their data themselves, for example, StackLight. Volumes provided by the local volume provisioner are not redundant: each one is bound to a single node and can only be mounted by Kubernetes pods running on that same node.
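As a purely illustrative sketch of why such volumes are not redundant, the following Python snippet (using the kubernetes client library) defines a local PersistentVolume of the kind a local volume provisioner manages: the nodeAffinity section pins the volume to one node, so only pods scheduled on that node can mount it. The node name, disk path, and storage class are hypothetical examples, not values taken from a MOSK deployment.

```python
# Illustrative sketch: a local PersistentVolume pinned to a single node.
# Node name, path, and storage class below are hypothetical.
from kubernetes import client, config

config.load_kube_config()

pv = client.V1PersistentVolume(
    metadata=client.V1ObjectMeta(name="local-pv-example"),
    spec=client.V1PersistentVolumeSpec(
        capacity={"storage": "100Gi"},
        access_modes=["ReadWriteOnce"],
        persistent_volume_reclaim_policy="Retain",
        storage_class_name="local-storage",
        local=client.V1LocalVolumeSource(path="/mnt/disks/disk0"),
        # The volume exists only on this one node; pods that mount it must be
        # scheduled there, which is why local volumes provide no redundancy.
        node_affinity=client.V1VolumeNodeAffinity(
            required=client.V1NodeSelector(
                node_selector_terms=[
                    client.V1NodeSelectorTerm(
                        match_expressions=[
                            client.V1NodeSelectorRequirement(
                                key="kubernetes.io/hostname",
                                operator="In",
                                values=["worker-node-1"],
                            )
                        ]
                    )
                ]
            )
        ),
    ),
)

client.CoreV1Api().create_persistent_volume(body=pv)
```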