Configure iSCSI¶
Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access to storage devices. iSCSI receives requests from clients and fulfills them on remote SCSI devices. iSCSI support in MKE enables Kubernetes workloads to consume persistent storage from iSCSI targets.
Note
MKE does not support using iSCSI with Windows clusters.
Note
Challenge-Handshake Authentication Protocol (CHAP) secrets are supported for both iSCSI discovery and session management.
iSCSI components¶
The iSCSI initiator is any client that consumes storage and sends iSCSI commands. In an MKE cluster, the iSCSI initiator must be installed and running on any node where Pods can be scheduled. Configuration, target discovery, and logging in and out of a target are performed primarily by two software components: iscsid (service) and iscsiadm (CLI tool). These two components are typically packaged as part of open-iscsi on Debian systems and iscsi-initiator-utils on RHEL, CentOS, and Fedora systems.

iscsid is the iSCSI initiator daemon and implements the control path of the iSCSI protocol. It communicates with iscsiadm and the kernel modules.

iscsiadm is a CLI tool that provides target discovery, login to iSCSI targets, session management, and access to and management of the open-iscsi database.
The iSCSI target is any server that shares storage and receives iSCSI commands from an initiator.
Note
iSCSI kernel modules implement the data path. The most common modules used across Linux distributions are scsi_transport_iscsi.ko, libiscsi.ko, and iscsi_tcp.ko. These modules must be loaded on the host for the iSCSI initiator to function properly.
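For reference, the following commands show how to check that the modules are loaded and how iscsiadm is typically used to discover and log in to a target by hand. The portal address and IQN are placeholders taken from the example later in this topic; in an MKE cluster, Kubernetes normally invokes iscsiadm for you when it attaches iSCSI volumes, so these commands are mainly useful for verification and troubleshooting.

# Confirm that the iSCSI kernel modules are loaded.
lsmod | grep iscsi

# Discover targets exposed by a portal (placeholder address).
sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.100:3260

# Log in to a discovered target and list active sessions.
sudo iscsiadm -m node -T iqn.2017-10.local.example.server:disk1 -p 192.0.2.100:3260 --login
sudo iscsiadm -m session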
Prerequisites¶
Complete hardware and software configuration of the iSCSI storage provider. Running external provisioners in MKE clusters places no significant demand on RAM or disk. For setup information specific to a storage vendor, refer to the vendor documentation.
Configure kubectl on your clients.
Make sure that the iSCSI server is accessible to MKE worker nodes.
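To confirm the last prerequisite, you can check from each worker node that the iSCSI portal is reachable. The address below is a placeholder, 3260 is the default iSCSI port, and the check assumes the nc (netcat) utility is installed on the node.

nc -zv 192.0.2.100 3260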
Configure an iSCSI target¶
An iSCSI target can run on dedicated, stand-alone hardware, or can be configured in a hyper-converged manner to run alongside container workloads on MKE nodes. To provide access to the storage device, configure each target with one or more logical unit numbers (LUNs).
iSCSI targets are specific to the storage vendor. Refer to the vendor documentation for setup instructions, including applicable RAM and disk space requirements, and then expose the targets to the MKE cluster.
To expose iSCSI targets to the MKE cluster:

If necessary for access control, configure the target with client iSCSI Qualified Names (IQNs) and with CHAP secrets for authentication.

Make sure that each iSCSI LUN is accessible to all nodes in the cluster. Configure the iSCSI service to expose storage as an iSCSI LUN to all nodes in the cluster, for example by adding the IQNs of all MKE nodes to the target ACL, as illustrated in the sketch that follows.
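The exact commands depend on the storage vendor. As an illustration only, the following sketch uses targetcli on a Linux LIO target to create a block-backed LUN and add two MKE worker IQNs to the target ACL; the backing device, target IQN, and initiator IQNs are placeholders that mirror the example later in this topic.

# Create a block backstore from an existing block device (placeholder path).
sudo targetcli /backstores/block create name=disk1 dev=/dev/vg-targetd/lun1

# Create the iSCSI target and attach the LUN.
sudo targetcli /iscsi create iqn.2019-01.org.iscsi.docker:targetd
sudo targetcli /iscsi/iqn.2019-01.org.iscsi.docker:targetd/tpg1/luns create /backstores/block/disk1

# Add the initiator IQN of each MKE worker node to the target ACL.
sudo targetcli /iscsi/iqn.2019-01.org.iscsi.docker:targetd/tpg1/acls create iqn.2019-01.com.example:node1
sudo targetcli /iscsi/iqn.2019-01.org.iscsi.docker:targetd/tpg1/acls create iqn.2019-01.com.example:node2

# Persist the configuration.
sudo targetcli saveconfig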
Configure a generic iSCSI initiator¶
Every Linux distribution packages the iSCSI initiator software in a particular way. Follow the instructions specific to the storage provider, using the following steps as a guideline.
Prepare all MKE nodes by installing OS-specific iSCSI packages and loading the necessary iSCSI kernel modules. In the following examples, scsi_transport_iscsi.ko and libiscsi.ko are pre-loaded by the Linux distribution. The iscsi_tcp kernel module must be loaded with a separate command.

For CentOS or Red Hat:

sudo yum install -y iscsi-initiator-utils
sudo modprobe iscsi_tcp

For Ubuntu:

sudo apt install open-iscsi
sudo modprobe iscsi_tcp
Set up MKE nodes as iSCSI initiators. Configure a unique initiator name for each node, using the format InitiatorName=iqn.<YYYY-MM.reverse.domain.name:OptionalIdentifier>:

sudo sh -c 'echo "InitiatorName=iqn.<YYYY-MM.reverse.domain.name:OptionalIdentifier>" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid
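To verify the configuration on each node, read back the initiator name and confirm that the iscsid service is active:

cat /etc/iscsi/initiatorname.iscsi
sudo systemctl status iscsid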
Configure MKE¶
Update the MKE configuration file with the following options:
Configure --storage-iscsi=true to enable iSCSI-based PersistentVolumes (PVs) in Kubernetes.

Configure --iscsiadm-path=<path> to specify the absolute path of the iscsiadm binary on the host. The default value is /usr/sbin/iscsiadm.

Configure --iscsidb-path=<path> to specify the path of the iSCSI database on the host. The default value is /etc/iscsi.
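As a sketch of the workflow, the MKE configuration file is TOML and can be retrieved and re-applied through the MKE API. The key names shown in the comments below are assumptions derived from the option names above; verify them against the MKE configuration file reference before applying.

# Download the current MKE configuration (host and credentials are placeholders).
curl --silent --insecure --user admin:<password> \
  https://<mke-host>/api/ucp/config-toml > mke-config.toml

# Edit mke-config.toml; assumed keys under [cluster_config]:
#   storage_iscsi = true
#   iscsiadm_path = "/usr/sbin/iscsiadm"
#   iscsidb_path  = "/etc/iscsi"

# Upload the edited configuration.
curl --silent --insecure --user admin:<password> \
  --upload-file mke-config.toml https://<mke-host>/api/ucp/config-toml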
Configure in-tree iSCSI volumes¶
The Kubernetes in-tree iSCSI plugin only supports static provisioning, for which you must:
Verify that the desired iSCSI LUNs are pre-provisioned in the iSCSI targets.
Create iSCSI PV objects, which correspond to the pre-provisioned LUNs with the appropriate iSCSI configuration. As PersistentVolumeClaims (PVCs) are created to consume storage, the iSCSI PVs bind to the PVCs and satisfy the request for persistent storage.
To configure in-tree iSCSI volumes:
Create a YAML file for the PersistentVolume object based on the following example:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.0.2.100:3260
    iqn: iqn.2017-10.local.example.server:disk1
    lun: 0
    fsType: 'ext4'
    readOnly: false
Make the following changes using information appropriate for your environment:

Replace 12Gi with the size of the storage available.

Replace 192.0.2.100:3260 with the IP address and port number of the iSCSI target in your environment. Refer to the storage provider documentation for port information.

Replace iqn.2017-10.local.example.server:disk1 with a unique identifier for the iSCSI target that exposes the LUN. More than one iqn can be specified, but each must use the format iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier. Each MKE worker node must likewise have its own unique initiator IQN, as configured in the initiator setup above.
Create the PersistentVolume:

kubectl create -f pv-iscsi.yml
Expected output:
persistentvolume/iscsi-pv created
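With the PersistentVolume in place, a workload claims it through a PersistentVolumeClaim whose access mode and requested size match the volume. The following is a minimal sketch; the claim name is a placeholder, and the empty storageClassName keeps a default StorageClass from dynamically provisioning a different volume.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: iscsi-static-claim      # placeholder name
spec:
  storageClassName: ""          # bind to statically provisioned PVs only
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 12Gi             # must not exceed the PV capacity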
External provisioner and Kubernetes objects¶
An external provisioner is a piece of software running out of process from Kubernetes that is responsible for creating and deleting PVs. External provisioners monitor the Kubernetes API server for PV claims and create PVs accordingly.
When using an external provisioner, you must perform the following additional steps:
Configure external provisioning based on your storage provider. Refer to your storage provider documentation for deployment information.
Define storage classes. Refer to your storage provider dynamic provisioning documentation for configuration information.
Define a PVC and a Pod. When you define a PVC to use the storage class, a PV is created and bound.
Start a Pod using the PVC that you defined.
Note
In some cases, on-premises storage providers use external provisioners to connect PV provisioning to the backend storage.
Troubleshooting¶
The following issues occur frequently in iSCSI integrations:
The host might not have the iSCSI kernel modules loaded. To avoid this, always prepare your MKE worker nodes by installing the iSCSI packages and loading the iSCSI kernel modules prior to installing MKE. If the worker nodes were not prepared correctly before the MKE installation:

Prepare the nodes.

Restart the ucp-kubelet container for the changes to take effect.
Some hosts run into depmod confusion: on some Linux distributions, the kernel modules cannot be loaded until the kernel sources are installed and depmod is run. If you experience problems loading the kernel modules, verify that you run depmod after installing the kernel modules.
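A minimal recovery sequence on an affected worker node might look like the following. It assumes the iSCSI packages are already installed and that you can run Docker CLI commands on the node.

# Rebuild module dependencies and load the iSCSI transport module.
sudo depmod -a
sudo modprobe iscsi_tcp
lsmod | grep iscsi

# Restart the kubelet container so that MKE picks up the change.
docker restart ucp-kubelet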
Example¶
Create a YAML file with the following StorageClass object:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd
parameters:
  targetPortal: 172.31.8.88
  iqn: iqn.2019-01.org.iscsi.docker:targetd
  iscsiInterface: default
  volumeGroup: vg-targetd
  initiators: iqn.2019-01.com.example:node1, iqn.2019-01.com.example:node2
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
Apply the StorageClass YAML file:

kubectl apply -f iscsi-storageclass.yaml
Expected output:
storageclass "iscsi-targetd-vg-targetd" created
Verify the successful creation of the StorageClass object:

kubectl get sc
Example output:
NAME                       PROVISIONER     AGE
iscsi-targetd-vg-targetd   iscsi-targetd   30s
Create a YAML file with the following PersistentVolumeClaim object:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iscsi-claim
spec:
  storageClassName: "iscsi-targetd-vg-targetd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

The valid accessModes values for iSCSI are ReadWriteOnce and ReadOnlyMany. Change the value of storage as required.
Note
The scheduler automatically ensures that Pods with the same PVC run on the same worker node.
Apply the PersistentVolumeClaim YAML file:

kubectl apply -f pvc-iscsi.yml
Expected output:
persistentvolumeclaim "iscsi-claim" created
Verify the successful creation of the PersistentVolume and PersistentVolumeClaim, and confirm that the PersistentVolumeClaim is bound to the correct volume:

kubectl get pv,pvc
Example output:
NAME          STATUS   VOLUME                                      CAPACITY   ACCESS MODES   STORAGECLASS               AGE
iscsi-claim   Bound    pvc-b9560992-24df-11e9-9f09-0242ac11000e    100Mi      RWO            iscsi-targetd-vg-targetd   1m

NAME                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS               REASON   AGE
pvc-b9560992-24df-11e9-9f09-0242ac11000e    100Mi      RWO            Delete           Bound    default/iscsi-claim   iscsi-targetd-vg-targetd            36s
Configure Pods to use the PersistentVolumeClaim when binding to the PersistentVolume.

Create a YAML file with the following ReplicationController object. The ReplicationController is used to set up two replica Pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mount path containing shared resources.

apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-iscsi-test
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: iscsi
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: iscsi
        persistentVolumeClaim:
          claimName: iscsi-claim
Create the ReplicationController object:

kubectl create -f rc-iscsi.yml
Expected output:
replicationcontroller "rc-iscsi-test" created
Verify successful creation of the Pods:
kubectl get pods
Example output:
NAME                  READY   STATUS    RESTARTS   AGE
rc-iscsi-test-05kdr   1/1     Running   0          9m
rc-iscsi-test-wv4p5   1/1     Running   0          9m
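Optionally, confirm that the iSCSI-backed volume is mounted inside the Pods. The Pod name below comes from the sample output above and will differ in your environment:

kubectl exec rc-iscsi-test-05kdr -- df -h /usr/share/nginx/html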
See also
Refer to iSCSI-targetd provisioner for detailed information on a target-based external provisioner implementation.