Add a Ceph cluster using web UI
Warning
Mirantis highly recommends adding a Ceph cluster using the CLI instead of the web UI. For the CLI procedure, refer to Add a Ceph cluster using CLI.
The web UI capabilities for adding a Ceph cluster are limited and lack flexibility in defining Ceph cluster specifications. For example, if an error occurs while adding a Ceph cluster through the web UI, you can usually address it only through the CLI.
The web UI functionality for managing Ceph clusters will be deprecated in one of the following releases.
This section explains how to create a Ceph cluster on top of a managed cluster using the Mirantis Container Cloud web UI. As a result, you will deploy a Ceph cluster with a minimum of three Ceph nodes that provide persistent volumes to the Kubernetes workloads of your managed cluster.
Note
For advanced configuration through the KaaSCephCluster custom resource, see Ceph advanced configuration. An example of how to inspect this resource is provided after this note.
For the configuration of the Ceph Controller through Kubernetes templates to manage Ceph node resources, see Enable Ceph tolerations and resources management.
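For example, to review the KaaSCephCluster resource that backs a Ceph cluster, you can query it from the management cluster. The following is a minimal sketch only: the kubeconfig path, project namespace, and Ceph cluster name are placeholders, and the resource name available in your environment may differ.
# List KaaSCephCluster resources in the managed cluster project (placeholder values).
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> get kaascephcluster
# Print the full specification of a particular Ceph cluster.
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> get kaascephcluster <cephClusterName> -o yaml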
To create a Ceph cluster in the managed cluster:
Log in to the Container Cloud web UI with the m:kaas:namespace@operator or m:kaas:namespace@writer permissions.
Switch to the required project using the Switch Project action icon located on top of the main left-side navigation panel.
In the Clusters tab, click the required cluster name. The Cluster page with the Machines and Ceph clusters lists opens.
In the Ceph Clusters block, click Create Cluster.
Configure the Ceph cluster in the Create New Ceph Cluster wizard that opens:
General settings

Name
The Ceph cluster name.

Cluster Network
Replication network for Ceph OSDs. Must contain the CIDR definition and match the corresponding values of the cluster Subnet object or the environment network values. For configuration examples, see the descriptions of the managed-ns_Subnet_storage YAML files in e2example1.

Public Network
Public network for Ceph data. Must contain the CIDR definition and match the corresponding values of the cluster Subnet object or the environment network values. For configuration examples, see the descriptions of the managed-ns_Subnet_storage YAML files in e2example1.

Enable OSDs LCM
Select to enable LCM for Ceph OSDs.

Machines / Machine #1-3

Select machine
Select the name of the Kubernetes machine that will host the corresponding Ceph node in the Ceph cluster.

Manager, Monitor
Select the required Ceph services to install on the Ceph node.

Devices
Select the disk that Ceph will use.
Warning
Do not select the device used for system services, for example, sda. To identify candidate devices on a machine, see the example commands after this parameter list.
Warning
A Ceph cluster does not support removable devices, that is, devices with the hotplug functionality enabled. To use such devices as Ceph OSD data devices, make them non-removable or disable the hotplug functionality in the BIOS settings for the disks that are configured to be used as Ceph OSD data devices.

Enable Object Storage
Select to enable the single-instance RGW Object Storage.
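To identify which disks on a machine are suitable as Ceph OSD data devices, you can list the block devices directly on the node. This is a generic Linux example, not a Container Cloud command: device names such as sda vary between machines, and <deviceName> is a placeholder.
# List block devices with their size, type, rotational flag, and mount points.
# Skip devices that host the operating system or are already mounted.
lsblk -d -o NAME,SIZE,TYPE,ROTA,MOUNTPOINT
# Check whether a device is removable (1 means removable, 0 means non-removable).
cat /sys/block/<deviceName>/removable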
To add more Ceph nodes to the new Ceph cluster, click + next to any Ceph Machine title in the Machines tab. Configure a Ceph node as required.
Warning
Do not add more than 3 Manager and/or Monitor services to the Ceph cluster.
After you add and configure all nodes in your Ceph cluster, click Create.
Verify your Ceph cluster as described in Verify Ceph.
Verify that the network addresses used on your clusters do not overlap with the following default MKE network addresses for Swarm and MCR:
10.0.0.0/16 is used for Swarm networks. IP addresses from this network are virtual.
10.99.0.0/16 is used for MCR networks. IP addresses from this network are allocated on hosts.
To compare these ranges with the subnets configured for your clusters, see the example command after this list.
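To list the CIDRs that your cluster Subnet objects use, you can query them from the management cluster. This is a hedged sketch: the kubeconfig path and project namespace are placeholders, and it assumes that the Subnet object exposes its CIDR in spec.cidr.
# Print the name and CIDR of every Subnet object in the project (placeholder values).
kubectl --kubeconfig <pathToManagementClusterKubeconfig> -n <managedClusterProjectName> get subnet -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr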
Verification of Swarm and MCR network addresses
To verify Swarm and MCR network addresses, run on any master node:
docker info
Example of system response:
Server:
 ...
 Swarm:
  ...
  Default Address Pool: 10.0.0.0/16
  SubnetSize: 24
  ...
 Default Address Pools:
  Base: 10.99.0.0/16, Size: 20
 ...
Typically, not all Swarm and MCR addresses are in use. One Swarm Ingress network is created by default and occupies the 10.0.0.0/24 address block. Also, three MCR networks are created by default and occupy three address blocks: 10.99.0.0/20, 10.99.16.0/20, and 10.99.32.0/20.
To verify the actual state of the networks and the addresses in use, run:
docker network ls
docker network inspect <networkName>
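For example, to print only the subnets that a specific network occupies, you can use the format option of docker network inspect; <networkName> is a placeholder:
# Print the subnet of each IPAM configuration entry of the network.
docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{"\n"}}{{end}}' <networkName>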