KaaSCephCluster.status description
Warning

This procedure is valid for MOSK clusters that use the unsupported
KaaSCephCluster custom resource (CR) instead of the MiraCeph CR, which is
available since MOSK 25.2 as the new Ceph configuration entrypoint. For the
equivalent procedure with the MiraCeph CR, refer to the corresponding
MiraCeph section of this documentation.
KaaSCephCluster.status allows you to learn the current health of a Ceph
cluster and identify potentially problematic components. This section describes
KaaSCephCluster.status and its fields. To view KaaSCephCluster.status,
perform the steps described in Verify Ceph cluster state through CLI.
| Field | Description |
|---|---|
|  | Describes the current state of the cluster. |
|  | Describes the current phase of Ceph spec reconciliation and the spec validation result. |
|  | Represents a short version of the full cluster information. |
|  | Contains complete information about the Ceph cluster, including the health of the cluster itself, Ceph resources, and daemons. It helps reveal potentially problematic components. |
|  | Contains information about secrets of the MOSK cluster that are used in the Ceph cluster, such as keyrings, Ceph clients, and RADOS Gateway user credentials. |
The following tables describe all sections of KaaSCephCluster.status.
| Field | Description |
|---|---|
|  | Contains the current phase of handling of the applied Ceph cluster spec. |
|  | Contains a detailed description of the current phase, or an error message if the phase failed. |
|  | Contains the spec validation result: `result` is `Succeed` or `Failed`, and `messages` is a list of validation error messages, for example, `["error", "messages", "list"]`. |
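Once the status has been fetched, the validation result described above can be checked programmatically. The following is a minimal Python sketch, not part of the product; the helper name and sample data are hypothetical, while the `result` and `messages` keys follow the validation format shown above:

```python
# Sketch: evaluate the spec validation result from a status dict.
# The helper name and sample data are hypothetical; the "result" and
# "messages" keys follow the validation format described above.

def validation_passed(validation: dict) -> bool:
    """Return True when spec validation succeeded with no error messages."""
    return validation.get("result") == "Succeed" and not validation.get("messages")

print(validation_passed({"result": "Succeed", "messages": []}))   # True
print(validation_passed({"result": "Failed",
                         "messages": ["spec error"]}))            # False
```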
| Field | Description |
|---|---|
|  | Current Ceph cluster collector status. |
|  | List of error or warning messages found when gathering the facts about the Ceph cluster. |
General information from Rook about the Ceph cluster health and current
state. The clusterStatus field has the following format:

    clusterStatus:
      state: <rook ceph cluster common status>
      phase: <rook ceph cluster spec reconcile phase>
      message: <rook ceph cluster phase details>
      conditions: <history of rook ceph cluster reconcile steps>
      ceph: <ceph cluster health>
      storage:
        deviceClasses: <list of used device classes in ceph cluster>
      version:
        image: <ceph image used in ceph cluster>
        version: <ceph version of ceph cluster>

Status of the Rook Ceph Operator pod.

Map of statuses for each Ceph cluster daemon type. Indicates the expected
and actual number of Ceph daemons on the cluster. Available daemon types
are mgr, mon, osd, and rgw. The daemonsStatus field has the following
format:

    daemonsStatus:
      <daemonType>:
        status: <daemons status>
        running: <number of running daemons with details>

For example:

    daemonsStatus:
      mgr:
        running: a is active mgr ([] standBy)
        status: Ok
      mon:
        running: '3/3 mons running: [a c d] in quorum'
        status: Ok
      osd:
        running: '4/4 running: 4 up, 4 in'
        status: Ok
      rgw:
        running: 2/2 running ([openstack.store.a openstack.store.b])
        status: Ok

State of the Ceph cluster block storage resources.

State of the Ceph cluster object storage resources.

Verbose details of the Ceph cluster state. The cephDetails field has the
following format:

    cephDetails:
      diskUsage:
        deviceClass:
          <deviceClass>:
            # The amount of raw storage consumed by user data
            # (excluding the BlueStore database).
            bytesUsed: "<number>"
            # The amount of free space available in the cluster.
            bytesAvailable: "<number>"
            # The amount of storage capacity managed by the cluster.
            bytesTotal: "<number>"
        pools:
          <poolName>:
            # The space allocated for a pool over all OSDs. This includes
            # replication, allocation granularity, and erasure-coding
            # overhead. Compression savings and object content gaps are
            # also taken into account. The BlueStore database is not
            # included in this amount.
            bytesUsed: "<number>"
            # The notional percentage of storage used per pool.
            usedPercentage: "<number>"
            # Calculated as bytesTotal - bytesUsed.
            bytesAvailable: "<number>"
            # An estimate of the notional amount of data that can be
            # written to this pool.
            bytesTotal: "<number>"
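The relationship between the byte counters in diskUsage can be illustrated with a short calculation. This is a hedged Python sketch, not a product API; the function name and sample numbers are invented, while the bytesUsed, bytesAvailable, and bytesTotal keys and the bytesTotal - bytesUsed relation follow the format above:

```python
# Sketch: derive pool usage from cephDetails.diskUsage-style counters.
# Sample numbers are invented. Note that the status reports the byte
# counters as strings, so they are converted before the arithmetic.

def pool_usage_percent(pool: dict) -> float:
    """Percentage of the pool capacity that is used (bytesUsed / bytesTotal)."""
    used = int(pool["bytesUsed"])
    total = int(pool["bytesTotal"])
    return 100.0 * used / total if total else 0.0

pool = {"bytesUsed": "250", "bytesAvailable": "750", "bytesTotal": "1000"}
# bytesAvailable is calculated as bytesTotal - bytesUsed:
assert int(pool["bytesTotal"]) - int(pool["bytesUsed"]) == int(pool["bytesAvailable"])
print(pool_usage_percent(pool))  # 25.0
```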
Contains information, in the same format as daemonsStatus, about the Ceph
CSI plugin daemons:

    cephCSIPluginDaemonsStatus:
      <csiPlugin>:
        running: <number of running daemons with details>
        status: <csi plugin status>

For example:

    cephCSIPluginDaemonsStatus:
      csi-rbdplugin:
        running: 1/3 running
        status: Some csi-rbdplugin daemons are not ready
      csi-cephfsplugin:
        running: 3/3 running
        status: Ok
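Because daemonsStatus and cephCSIPluginDaemonsStatus share the same status/running shape, one loop can surface every daemon type that is not Ok. A minimal Python sketch, with a hypothetical function name and sample data mirroring the examples above:

```python
# Sketch: collect every daemon type whose status is not "Ok" from a
# daemonsStatus-style map (the same shape is used by
# cephCSIPluginDaemonsStatus). Sample data mirrors the examples above.

def unhealthy_daemons(daemons: dict) -> dict:
    """Map of daemon type -> details for entries whose status is not Ok."""
    return {name: info for name, info in daemons.items()
            if info.get("status") != "Ok"}

status = {
    "csi-rbdplugin": {"running": "1/3 running",
                      "status": "Some csi-rbdplugin daemons are not ready"},
    "csi-cephfsplugin": {"running": "3/3 running", "status": "Ok"},
}
for name, info in unhealthy_daemons(status).items():
    print(f"{name}: {info['status']} ({info['running']})")
```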
Current state of the secret collector on the Ceph cluster.

List of secrets for Ceph clients and RADOS Gateway users. For example:

    lastSecretCheck: "2022-09-05T07:05:35Z"
    lastSecretUpdate: "2022-09-05T06:02:00Z"
    secretInfo:
      clientSecrets:
      - name: client.admin
        secretName: rook-ceph-admin-keyring
        secretNamespace: rook-ceph
        state: Ready

List of error or warning messages, if any, found when collecting
information about the Ceph cluster.
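The clientSecrets list shown above can likewise be filtered for entries that are not yet Ready. A hedged Python sketch; the helper name and the second sample entry are made up, while the name and state keys follow the secretInfo example:

```python
# Sketch: list client secrets from a secretInfo-style dict that are not
# in the Ready state. The "client.glance" entry below is hypothetical;
# the field names follow the clientSecrets example above.

def not_ready_secrets(secret_info: dict) -> list:
    """Names of client secrets whose state is not Ready."""
    return [s["name"] for s in secret_info.get("clientSecrets", [])
            if s.get("state") != "Ready"]

secret_info = {
    "clientSecrets": [
        {"name": "client.admin", "secretName": "rook-ceph-admin-keyring",
         "secretNamespace": "rook-ceph", "state": "Ready"},
        {"name": "client.glance", "secretName": "glance-keyring",
         "secretNamespace": "rook-ceph", "state": "Pending"},
    ]
}
print(not_ready_secrets(secret_info))  # ['client.glance']
```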