The Salt Master node acts as a central control point for the clients, which are called Salt minion nodes. The minions, in turn, connect back to the Salt Master node.
This section describes how to set up a virtual machine with Salt Master, MAAS provisioner, Jenkins server, and local Git server. The procedure is applicable to both online and offline MCP deployments.
To deploy the Salt Master node:
Log in to the Foundation node.
Note
Root privileges are required for the following steps. Execute the commands as the root user or use sudo.
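For example, a minimal sketch of logging in and switching to root, assuming a hypothetical Foundation node host name foundation01 and the ubuntu user:
ssh ubuntu@foundation01
sudo -i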
In case of an offline deployment, replace the content of the /etc/apt/sources.list file with the following lines:
deb [arch=amd64] http://<local_mirror_url>/ubuntu xenial-security main universe restricted
deb [arch=amd64] http://<local_mirror_url>/ubuntu xenial-updates main universe restricted
deb [arch=amd64] http://<local_mirror_url>/ubuntu xenial main universe restricted
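After replacing the file, you can refresh the package index so that packages are fetched from the local mirror:
apt-get update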
Create a directory for the VM system disk:
Note
You can create and use a different subdirectory in /var/lib/libvirt/images/. If that is the case, verify that you specify the correct directory for the VM_*DISK variables described in the next steps.
mkdir -p /var/lib/libvirt/images/cfg01/
Download the day01 image for the cfg01 node:
wget http://images.mirantis.com/cfg01-day01-<BUILD_ID>.qcow2 -O \
/var/lib/libvirt/images/cfg01/system.qcow2
Substitute <BUILD_ID> with the required MCP Build ID, for example, 2019.2.0.
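With 2019.2.0 substituted, the command reads:
wget http://images.mirantis.com/cfg01-day01-2019.2.0.qcow2 -O \
/var/lib/libvirt/images/cfg01/system.qcow2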
Copy the configuration ISO drive for the cfg01 VM, which is provided with the metadata model for the offline image, to the target location, for example, /var/lib/libvirt/images/cfg01/cfg01-config.iso.
Note
If you are using an already existing model that does not have configuration drives, or you want to generate updated configuration drives, for example, with an unlocked root login for debugging purposes, proceed with Generate configuration drives manually.
Caution
Make sure to securely back up the configuration ISO drive image. This image contains critical information required to re-install your cfg01 node in case of a storage failure, including the master key for all encrypted secrets in the cluster metadata model. Failure to back up the configuration ISO image may result in losing the ability to manage MCP in certain hardware failure scenarios.
cp /path/to/prepared-drive/cfg01-config.iso /var/lib/libvirt/images/cfg01/cfg01-config.iso
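As the caution above notes, keep a secure copy of the ISO image off the Foundation node. A minimal sketch, assuming a hypothetical backup host backup01 with a /srv/backup directory:
scp /var/lib/libvirt/images/cfg01/cfg01-config.iso backup01:/srv/backup/cfg01-config.iso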
Create the Salt Master VM domain definition using the example script:
Download the shell scripts from GitHub with the required MCP release version. For example:
export MCP_VERSION="2019.2.0"
git clone https://github.com/Mirantis/mcp-common-scripts -b release/${MCP_VERSION}
Make the script executable and export the required variables:
cd mcp-common-scripts/predefine-vm/
chmod +x define-vm.sh
export VM_NAME="cfg01.[CLUSTER_DOMAIN]"
export VM_SOURCE_DISK="/var/lib/libvirt/images/cfg01/system.qcow2"
export VM_CONFIG_DISK="/var/lib/libvirt/images/cfg01/cfg01-config.iso"
The CLUSTER_DOMAIN value is the cluster domain name used for the model. See Basic deployment parameters for details.
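For example, with a hypothetical cluster domain mycluster.local, the exports would look as follows:
export VM_NAME="cfg01.mycluster.local"
export VM_SOURCE_DISK="/var/lib/libvirt/images/cfg01/system.qcow2"
export VM_CONFIG_DISK="/var/lib/libvirt/images/cfg01/cfg01-config.iso"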
Note
You may add other optional variables that have default values and change them depending on your deployment configuration. These variables include:
VM_MGM_BRIDGE_NAME="br-mgm"
VM_CTL_BRIDGE_NAME="br-ctl"
VM_MEM_KB="12589056"
VM_CPUS="4"
The recommended VM_MEM_KB for the Salt Master node is 12589056 (12 GB of RAM) or more, depending on your cluster size. For large clusters, you should also increase VM_CPUS. The recommended VM_MEM_KB for the local mirror node is 8388608 (8 GB of RAM) or more.
The br-mgm and br-ctl values are the names of the Linux bridges. See Prerequisites for MCP DriveTrain deployment for details. Custom names can be passed to a VM definition using the VM_MGM_BRIDGE_NAME and VM_CTL_BRIDGE_NAME variables, respectively, as in the example below.
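For example, to override the defaults with hypothetical bridge names and a larger VM, export the optional variables before running the script:
export VM_MGM_BRIDGE_NAME="br-mgm-custom"
export VM_CTL_BRIDGE_NAME="br-ctl-custom"
export VM_MEM_KB="16777216"
export VM_CPUS="8"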
Run the shell script:
./define-vm.sh
Start the Salt Master node VM:
virsh start cfg01.[CLUSTER_DOMAIN]
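You can confirm that the domain is running before connecting to its console:
virsh list --all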
Log in to the Salt Master virsh console with the user name and password that you created in step 4 of the Generate configuration drives manually procedure:
virsh console cfg01.[CLUSTER_DOMAIN]
If you use local repositories, verify that mk-pipelines is present in /home/repo/mk and pipeline-library is present in /home/repo/mcp-ci after cloud-init finishes. If not, fix the connection to the local repositories and run the /var/lib/cloud/instance/scripts/part-001 script.
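A minimal sketch of these checks and of re-running the script, assuming the paths above:
ls /home/repo/mk
ls /home/repo/mcp-ci
# if the repositories are missing, fix the mirror connectivity, then:
bash /var/lib/cloud/instance/scripts/part-001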
Verify that the following states are successfully applied during the execution of cloud-init:
salt-call state.sls linux.system,linux,openssh,salt
salt-call state.sls maas.cluster,maas.region,reclass
Otherwise, fix the pillar and re-apply the above states.
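To review what cloud-init executed and spot failed states before re-applying them, you can inspect the cloud-init log, assuming the default Ubuntu log location:
grep -i failed /var/log/cloud-init-output.log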
In case of using kvm01 as the Foundation node, perform the following steps on it:
Depending on the deployment type, proceed with one of the options below (an example of applying either option follows the list):
For an online deployment, add the following deb repository to /etc/apt/sources.list.d/mcp_saltstack.list:
deb [arch=amd64] https://mirror.mirantis.com/<MCP_VERSION>/saltstack-2017.7/xenial/ xenial main
For an offline deployment or the local mirrors case, add the following deb repository to /etc/apt/sources.list.d/mcp_saltstack.list:
deb [arch=amd64] http://<local_mirror_url>/<MCP_VERSION>/saltstack-2017.7/xenial/ xenial main
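In either case, a minimal sketch of writing the repository definition and refreshing the package index, assuming the online repository URL and MCP version 2019.2.0 (you may also need to import the repository GPG key):
echo "deb [arch=amd64] https://mirror.mirantis.com/2019.2.0/saltstack-2017.7/xenial/ xenial main" > /etc/apt/sources.list.d/mcp_saltstack.list
apt-get update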
Install the salt-minion package.
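For example:
apt-get install -y salt-minion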
Modify /etc/salt/minion.d/minion.conf:
id: <kvm01_FQDN>
master: <Salt_Master_IP_or_FQDN>
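For example, with hypothetical host names from the mycluster.local domain used above:
id: kvm01.mycluster.local
master: cfg01.mycluster.local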
Restart the salt-minion service:
service salt-minion restart
Check the output of the salt-key command on the Salt Master node to verify that the minion ID of kvm01 is present.
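For example, list the keys on the Salt Master node and, if the kvm01 key is listed but not yet accepted, accept it (the second command is only needed when auto-accept is not configured), assuming the hypothetical minion ID kvm01.mycluster.local:
salt-key -L
salt-key -a kvm01.mycluster.local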