After you provision physical nodes as described in Provision physical nodes using MAAS, follow the instructions below to deploy the physical nodes intended for an OpenStack-based MCP cluster. If you plan to deploy a Kubernetes-based MCP cluster, proceed with steps 1-2 of the Kubernetes Prerequisites procedure.
Caution
To prevent the network driver from running out of memory and to ensure its proper operation, specify the minimum reserved kernel memory in your Reclass model on the cluster level for the particular hardware nodes. For example, use /cluster/<cluster_name>/openstack/compute/init.yml for the OpenStack compute nodes and /cluster/<cluster_name>/infra/kvm.yml for the KVM nodes.
linux:
  system:
    kernel:
      sysctl:
        vm.min_free_kbytes: <min_reserved_memory>
Set the vm.min_free_kbytes value to 4194304 for a node with more than 96 GB of RAM. Otherwise, set it to no more than 5% of the total RAM on the node.
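For example, a cluster-level override for the OpenStack compute nodes with more than 96 GB of RAM may look as follows. This is a minimal sketch: the parameters wrapper reflects a typical Reclass cluster file layout, and the exact structure may differ in your model.
parameters:
  linux:
    system:
      kernel:
        sysctl:
          vm.min_free_kbytes: 4194304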
Note
To change the default kernel version, perform the steps described in Manage kernel version.
To deploy physical servers:
Log in to the Salt Master node.
Verify that the cfg01
key has been added to Salt and your host FQDN
is shown properly in the Accepted Keys
field in the output of the
following command:
salt-key
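To narrow the output to the accepted keys only, you can also run the command below. The host name in the example output is illustrative:
salt-key -l acc
Accepted Keys:
cfg01.example.local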
Verify that all pillars and Salt data are refreshed:
salt "*" saltutil.refresh_pillar
salt "*" saltutil.sync_all
Verify that the Reclass model is configured correctly. The following command output should show top states for all nodes:
python -m reclass.cli --inventory
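If the inventory looks suspicious for a particular node, reclass can also render the full parameters of a single node using the --nodeinfo option:
python -m reclass.cli --nodeinfo <node_FQDN>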
To be able to verify later that the nodes have actually rebooted, create a trigger file. The /run directory resides on tmpfs, so the file does not survive a reboot:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \
cmd.run "touch /run/is_rebooted"
To prepare the physical nodes for the VCP deployment, apply the basic Salt states that set up network interfaces and SSH access. The nodes will be rebooted.
Warning
If you use kvm01
as a Foundation node, the execution of
the commands below will also reboot the Salt Master node.
Caution
All hardware nodes must reboot after you execute the commands below. If a node does not reboot for a long time, execute the commands again or reboot the node manually.
Verify that you can log in to the nodes through IPMI in case of emergency.
For KVM nodes:
salt --async -C 'I@salt:control' cmd.run 'salt-call state.sls \
linux.system.repo,linux.system.user,openssh,linux.network;reboot'
For compute nodes:
salt --async -C 'I@nova:compute' pkg.install bridge-utils,vlan
salt --async -C 'I@nova:compute' cmd.run 'salt-call state.sls \
linux.system.repo,linux.system.user,openssh,linux.network;reboot'
For gateway nodes, execute the following command only for deployments that use an OVS setup with physical gateway nodes:
salt --async -C 'I@neutron:gateway' cmd.run 'salt-call state.sls \
linux.system.repo,linux.system.user,openssh,linux.network;reboot'
The targeted KVM, compute, and gateway nodes will stop responding after a couple of minutes. Wait until all of the nodes reboot.
Verify that the targeted nodes are up and running:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \
test.ping
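If some of the nodes do not respond, you can list the unresponsive minions from the Salt Master node using the manage.down runner:
salt-run manage.down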
Check the previously created trigger file to verify that the targeted nodes have actually rebooted:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway' \
cmd.run 'if [ -f "/run/is_rebooted" ];then echo "Has not been rebooted!";else echo "Rebooted";fi'
All nodes should be in the Rebooted
state.
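If a node still reports Has not been rebooted!, re-run the corresponding command for that node only or reboot it manually through IPMI. A sketch for a single KVM node, where the minion ID kvm02* is illustrative:
salt --async 'kvm02*' cmd.run 'salt-call state.sls \
linux.system.repo,linux.system.user,openssh,linux.network;reboot'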
Verify that the hardware nodes have the required network configuration. For example, verify the output of the ip a command:
salt -C 'I@salt:control or I@nova:compute or I@neutron:gateway or I@ceph:osd' \
cmd.run "ip a"