Internet Small Computer System Interface (iSCSI) is an IP-based standard that provides block-level access to storage devices. iSCSI takes requests from clients and fulfills these requests on remote SCSI devices. iSCSI support in MKE enables Kubernetes workloads to consume persistent storage from iSCSI targets.
The iSCSI Initiator is any client that consumes storage and sends iSCSI commands. In an MKE cluster, the iSCSI initiator must be installed and running on any node where pods can be scheduled. Configuration, target discovery, and login/logout to a target are primarily performed by two software components: iscsid (service) and iscsiadm (CLI tool).
These two components are typically packaged as part of open-iscsi on Debian systems and iscsi-initiator-utils on RHEL/CentOS/Fedora systems.
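To check whether the initiator package is already present on a node, you can query the package manager (a quick sketch; package names as above):

rpm -q iscsi-initiator-utils    # RHEL/CentOS/Fedora
dpkg -s open-iscsi              # Debian/Ubuntu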
iscsid is the iSCSI initiator daemon and implements the control path of the iSCSI protocol. It communicates with iscsiadm and kernel modules.
iscsiadm is a CLI tool that allows discovery, login to iSCSI targets, session management, and access and management of the open-iscsi database.
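For illustration, a typical manual discovery-and-login sequence with iscsiadm looks like the following; the portal address and IQN are placeholders matching the examples later in this section:

sudo iscsiadm -m discovery -t sendtargets -p 192.0.2.100:3260    # list targets offered by the portal
sudo iscsiadm -m node -T iqn.2017-10.local.example.server:disk1 -p 192.0.2.100:3260 --login
sudo iscsiadm -m session                                          # show active iSCSI sessions

In an MKE cluster you normally do not run these commands by hand: the kubelet invokes iscsiadm on your behalf when a pod mounts an iSCSI volume.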
The iSCSI Target is any server that shares storage and receives iSCSI commands from an initiator.
Note

iSCSI kernel modules implement the data path. The most common modules used across Linux distributions are scsi_transport_iscsi.ko, libiscsi.ko, and iscsi_tcp.ko. These modules need to be loaded on the host for proper functioning of the iSCSI initiator.
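A quick way to confirm that the modules are loaded, using standard tooling:

lsmod | grep iscsi          # list currently loaded iSCSI modules
sudo modprobe iscsi_tcp     # load the TCP transport module if it is missing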
The following steps are required for configuring iSCSI in Kubernetes via MKE:

1. Configure an iSCSI target and expose its LUNs to the cluster.
2. Set up MKE nodes as iSCSI initiators.
3. Update the MKE configuration file to enable iSCSI.
4. Create iSCSI-backed PersistentVolume objects, either statically or through an external provisioner.
An iSCSI target can run on dedicated/stand-alone hardware, or can be configured in a hyper-converged manner to run alongside container workloads on MKE nodes. To provide access to the storage device, each target is configured with one or more logical unit numbers (LUNs).
iSCSI targets are specific to the storage vendor. Refer to the vendor documentation for setup instructions, including applicable RAM and disk space requirements, and for how to expose the targets to the MKE cluster.
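As an illustrative example only, on a generic Linux host the LIO target stack can expose a block device as a LUN through targetcli; the device path and IQN below are placeholders, and a vendor appliance will have its own procedure:

sudo targetcli backstores/block create name=disk1 dev=/dev/sdb
sudo targetcli iscsi/ create iqn.2019-01.org.iscsi.docker:targetd
sudo targetcli iscsi/iqn.2019-01.org.iscsi.docker:targetd/tpg1/luns create /backstores/block/disk1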
Exposing iSCSI targets to the MKE cluster involves the following steps. Every Linux distribution packages the iSCSI initiator software in a particular way, so follow the instructions specific to your storage provider and distribution, using the steps below as a guideline.
First, prepare all MKE nodes by installing OS-specific iSCSI packages and loading the necessary iSCSI kernel modules. In the following example, scsi_transport_iscsi.ko and libiscsi.ko are pre-loaded by the Linux distribution. The iscsi_tcp kernel module must be loaded with a separate command.
For CentOS/Red Hat systems:
sudo yum install -y iscsi-initiator-utils
sudo modprobe iscsi_tcp
For Ubuntu systems:
sudo apt install open-iscsi
sudo modprobe iscsi_tcp
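On both families of distributions, it is worth confirming that the initiator daemon is enabled and running before proceeding. A sketch assuming a systemd setup where the unit is named iscsid (on Ubuntu, the open-iscsi package provides this service):

sudo systemctl enable --now iscsid
systemctl is-active iscsid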
Next, set up MKE nodes as iSCSI initiators. Configure initiator names for each node as follows:
sudo sh -c 'echo "InitiatorName=iqn.<2019-01.com.example>:<uniqueID>" > /etc/iscsi/initiatorname.iscsi'
sudo systemctl restart iscsid
The iqn must be in the following format:

iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier
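For example, after configuration a worker node might report an initiator name like the following; the hostname-based identifier is an illustrative choice, not a requirement:

cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2019-01.com.example:worker-01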
Update the MKE configuration file with the following options:

- --storage-iscsi=true to enable iSCSI-based PersistentVolumes in Kubernetes.
- --iscsiadm-path=<path> to specify the absolute path of the iscsiadm binary on the host. The default value is "/usr/sbin/iscsiadm".
- --iscsidb-path=<path> to specify the path of the iSCSI database on the host. The default value is "/etc/iscsi".
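For illustration only, these options can also be passed as flags at install time. The following sketch assumes a docker/ucp-style installer image; the image name, version tag, and any additional flags are placeholders, so consult the MKE install reference for the exact command:

docker container run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  mirantis/ucp:<version> install \
  --storage-iscsi=true \
  --iscsiadm-path=/usr/sbin/iscsiadm \
  --iscsidb-path=/etc/iscsi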
The Kubernetes in-tree iSCSI plugin only supports static provisioning. For static provisioning, configure and create a PersistentVolume object. First, create a YAML file for the PersistentVolume object:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-pv
spec:
  capacity:
    storage: 12Gi
  accessModes:
    - ReadWriteOnce
  iscsi:
    targetPortal: 192.0.2.100:3260
    iqn: iqn.2017-10.local.example.server:disk1
    lun: 0
    fsType: 'ext4'
    readOnly: false
Replace the following values with information appropriate for your environment:

- 12Gi: the size of the storage available.
- 192.0.2.100:3260: the IP address and port number of the iSCSI target in your environment. Refer to the storage provider documentation for port information.
- iqn.2017-10.local.example.server:disk1: the IQN of the iSCSI target that serves the volume. Replace it with a unique name for the identifier. More than one iqn can be specified, but each must be in the following format: iqn.YYYY-MM.reverse.domain.name:OptionalIdentifier.

Create the PersistentVolume using your YAML file by running the following command on the master node:
kubectl create -f pv-iscsi.yml
persistentvolume/iscsi-pv created
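You can then confirm that the volume is registered and Available. The output below is indicative only; capacity, reclaim policy, and age will reflect your environment:

$ kubectl get pv iscsi-pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
iscsi-pv   12Gi       RWO            Retain           Available                                   10s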
An external provisioner is a piece of software running out of process from Kubernetes that is responsible for creating and deleting PVs. External provisioners monitor the Kubernetes API server for PV claims and create PVs accordingly.
When using an external provisioner, you must perform the following additional steps:

Note

Some on-premises storage providers have external provisioners for PV provisioning to backend storage. See iSCSI-targetd provisioner for a reference implementation of a target-based external provisioner.

CHAP secrets are supported for both iSCSI discovery and session management.
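If you enable CHAP (for example, by setting chapAuthDiscovery or chapAuthSession to "true" in the StorageClass shown later in this section), the in-tree iSCSI plugin reads the credentials from a Kubernetes Secret of type kubernetes.io/iscsi-chap. A minimal sketch, with placeholder names and credentials:

apiVersion: v1
kind: Secret
metadata:
  name: chap-secret
type: "kubernetes.io/iscsi-chap"
stringData:
  discovery.sendtargets.auth.username: demouser
  discovery.sendtargets.auth.password: demopassword
  node.session.auth.username: demouser
  node.session.auth.password: demopassword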
Frequently encountered issues are highlighted in the following list:

- depmod confusion. On some Linux distributions, kernel modules cannot be loaded until the kernel sources are installed and depmod is run. If you experience problems loading kernel modules, make sure you run depmod after kernel module installation.
On your client machine, with kubectl installed and a configuration that specifies the IP address of a master node, perform the following steps:

Create a StorageClass object in a YAML file named iscsi-storageclass.yaml, as shown in the following example:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi-targetd-vg-targetd
provisioner: iscsi-targetd
parameters:
  targetPortal: 172.31.8.88
  iqn: iqn.2019-01.org.iscsi.docker:targetd
  iscsiInterface: default
  volumeGroup: vg-targetd
  initiators: iqn.2019-01.com.example:node1, iqn.2019-01.com.example:node2
  chapAuthDiscovery: "false"
  chapAuthSession: "false"
Use the StorageClass YAML file and run the following command:
$ kubectl apply -f iscsi-storageclass.yaml
storageclass "iscsi-targetd-vg-targetd" created
$ kubectl get sc
NAME PROVISIONER AGE
iscsi-targetd-vg-targetd iscsi-targetd 30s
Create a PersistentVolumeClaim object in a YAML file named pvc-iscsi.yml on the master node, open it in an editor, and include the following content:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: iscsi-claim
spec:
  storageClassName: "iscsi-targetd-vg-targetd"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
Supported accessModes values for iSCSI include ReadWriteOnce and ReadOnlyMany. You can change the requested storage size by modifying the storage value. Note that the scheduler automatically ensures that pods with the same PVC run on the same worker node.
Apply the PersistentVolumeClaim YAML file by running the following command on the master node:
kubectl apply -f pvc-iscsi.yml -n $NS
persistentvolumeclaim "iscsi-claim" created
Verify that the PersistentVolume and PersistentVolumeClaim were created successfully and that the PersistentVolumeClaim is bound to the correct volume:
$ kubectl get pv,pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
iscsi-claim Bound pvc-b9560992-24df-11e9-9f09-0242ac11000e 100Mi RWO iscsi-targetd-vg-targetd 1m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-b9560992-24df-11e9-9f09-0242ac11000e   100Mi   RWO   Delete   Bound   default/iscsi-claim   iscsi-targetd-vg-targetd   36s
Set up pods to use the PersistentVolumeClaim when binding to the PersistentVolume. Here, a ReplicationController is created and used to set up two replica pods running web servers that use the PersistentVolumeClaim to mount the PersistentVolume onto a mount path containing shared resources.
Create a ReplicationController object in a YAML file named rc-iscsi.yml and open it in an editor to include the following content:
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-iscsi-test
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - name: nginx
          containerPort: 80
        volumeMounts:
        - name: iscsi
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: iscsi
        persistentVolumeClaim:
          claimName: iscsi-claim
Use the ReplicationController YAML file and run the following command on the master node:
$ kubectl create -f rc-iscsi.yml
replicationcontroller "rc-iscsi-test" created
Verify that the pods were created:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rc-iscsi-test-05kdr 1/1 Running 0 9m
rc-iscsi-test-wv4p5 1/1 Running 0 9m
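To confirm that the iSCSI-backed volume is actually mounted inside a pod, you can check the mount point from one of the replicas. The pod name below is taken from the sample output above; yours will differ:

kubectl exec rc-iscsi-test-05kdr -- df -h /usr/share/nginx/html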