When moving VCP VMs that run cloud services, migrate one VM at a time: stop the VM, move its disk to another host, and start the VM again on the new host. The services running on the VM should remain available throughout the process because high availability is ensured by Keepalived and HAProxy.
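As a quick sanity check before and after each move, you can verify which node currently holds the Keepalived VIP. The helper below is a minimal sketch that parses `ip -o -4 addr show` output piped into it; the VIP address used in the usage example is a hypothetical placeholder.

```shell
# Sketch: report whether a given VIP is currently assigned on this node.
# The VIP value in the usage example is an assumption; use your cluster's VIP.
has_vip() {
  # expects 'ip -o -4 addr show' output on stdin
  vip="$1"
  if grep -q " ${vip}/"; then
    echo "VIP ${vip} is held here"
  else
    echo "VIP ${vip} is elsewhere"
  fi
}

# Example usage:
#   ip -o -4 addr show | has_vip 10.0.0.10
```

If the VIP stays on a surviving node while the VM is stopped, Keepalived failover is working as the procedure assumes.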
To move a VCP node to another host:
To synchronize your deployment model with the new setup, update the /classes/cluster/<cluster_name>/infra/kvm.yml file:
salt:
  control:
    cluster:
      internal:
        node:
          <nodename>:
            name: <nodename>
            provider: ${_param:infra_kvm_node03_hostname}.${_param:cluster_domain}
            # Replace the 'infra_kvm_node03_hostname' parameter with the
            # hostname parameter of the new KVM node
Apply the salt.control state on the new KVM node:
salt-call state.sls salt.control
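To confirm the state run succeeded, inspect the summary that salt-call prints at the end. The helper below is a minimal sketch that checks the `Failed:` line of such a summary; the exact summary format is an assumption based on typical salt-call output.

```shell
# Sketch: succeed only if a salt-call summary on stdin reports zero failed states.
# Assumes a summary line of the form 'Failed:     0'.
state_run_ok() {
  awk '/^Failed:/ {found = 1; if ($2 != 0) bad = 1} END {exit (found && !bad) ? 0 : 1}'
}

# Example usage:
#   salt-call state.sls salt.control | state_run_ok && echo "state applied cleanly"
```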
Destroy the newly spawned VM on the new KVM node:
virsh list
virsh destroy <nodename><nodenum>.<domainname>
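If you are unsure which name to pass to virsh destroy, you can reduce the `virsh list` output to domain names only. The helper below is a minimal sketch; the two-header-line table layout it assumes matches virsh's usual output format.

```shell
# Sketch: print only the Name column from 'virsh list' output supplied on stdin.
# Assumes the usual two header lines followed by one row per domain.
list_domain_names() {
  awk 'NR > 2 && NF >= 2 {print $2}'
}

# Example usage:
#   virsh list | list_domain_names
```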
Log in to the KVM node originally hosting the VM.
Stop the VM:
virsh list
virsh destroy <nodename><nodenum>.<domainname>
Move the disk to the new KVM node using, for example, the scp utility, replacing the empty disk spawned by the salt.control state with the correct one:
scp /var/lib/libvirt/images/<nodename><nodenum>.<domainname>/system.qcow2 \
<diff_kvm_nodename>:/var/lib/libvirt/images/<nodename><nodenum>.<domainname>/system.qcow2
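Because the disk image is replaced wholesale, it is worth verifying that the copy arrived intact. The snippet below is a minimal sketch using sha256sum; the path-building helper and node name are hypothetical, and in practice you would compare the local digest with one computed on the remote host over ssh.

```shell
# Sketch: build the conventional disk path and compare two images by checksum.
# vm_disk_path is a hypothetical helper; adjust it to your image layout.
vm_disk_path() {
  echo "/var/lib/libvirt/images/$1/system.qcow2"
}

# Compares the sha256 digests of two local files; succeeds if they match.
checksums_match() {
  a=$(sha256sum "$1" | awk '{print $1}')
  b=$(sha256sum "$2" | awk '{print $1}')
  [ "$a" = "$b" ]
}

# Example: after scp, run on the source host:
#   sha256sum "$(vm_disk_path ctl01.example.local)"
# and compare with the same command run on the destination host.
```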
Start the VM on the new KVM host:
virsh start <nodename><nodenum>.<domainname>
Verify that the services on the moved VM work correctly.
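One way to verify is to check for failed systemd units on the moved VM. The helper below is a minimal sketch that counts units in `systemctl list-units --state=failed --no-legend` style output; the usage line running it through salt is an assumption about your setup.

```shell
# Sketch: count failed units from 'systemctl list-units --state=failed --no-legend'
# output supplied on stdin; prints 0 when nothing has failed.
failed_unit_count() {
  grep -c . || true
}

# Example usage (hypothetical target pattern):
#   ssh <nodename><nodenum>.<domainname> \
#     'systemctl list-units --state=failed --no-legend' | failed_unit_count
```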
Log in to the KVM node that was hosting the VM originally and undefine it:
virsh list --all
virsh undefine <nodename><nodenum>.<domainname>
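The whole procedure above can be sketched as a dry-run script that only prints the commands to execute, with a note on where each runs. All names here are hypothetical placeholders; replace the echoes with real calls once the plan looks right.

```shell
#!/bin/sh
# Dry-run sketch of the VM move; every name below is a hypothetical placeholder.
NODE=ctl01.example.local      # <nodename><nodenum>.<domainname>
NEW_KVM=kvm03.example.local   # destination KVM host
DISK="/var/lib/libvirt/images/${NODE}/system.qcow2"

plan() {
  echo "salt-call state.sls salt.control   # on ${NEW_KVM}, spawns empty VM"
  echo "virsh destroy ${NODE}              # on ${NEW_KVM}, removes the empty VM"
  echo "virsh destroy ${NODE}              # on the original host, stops the VM"
  echo "scp ${DISK} ${NEW_KVM}:${DISK}     # on the original host"
  echo "virsh start ${NODE}                # on ${NEW_KVM}"
  echo "virsh undefine ${NODE}             # on the original host, after verification"
}
plan
```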