This section describes how to configure your existing cluster model to enable management of the offline mirror VM through the Salt Master node.
Warning
Perform the following procedure only in the case of an offline deployment or when using a local mirror from the prebuilt image.
To configure the APT node management in the Reclass model:
Verify that you have completed Enable the management of the APT node through the Salt Master node.
Log in to the Salt Master node.
Open the cluster level of your Reclass model.
In infra/config/nodes.yml, add the following pillars:
parameters:
  reclass:
    storage:
      node:
        aptly_server_node01:
          name: ${_param:aptly_server_hostname}01
          domain: ${_param:cluster_domain}
          classes:
          - cluster.${_param:cluster_name}.infra
          - cluster.${_param:cluster_name}.infra.mirror
          - system.linux.system.repo.mcp.apt_mirantis.extra
          - system.linux.system.repo.mcp.apt_mirantis.ubuntu
          - system.linux.system.repo.mcp.apt_mirantis.docker
          params:
            salt_master_host: ${_param:reclass_config_master}
            linux_system_codename: xenial
            single_address: ${_param:aptly_server_control_address}
            deploy_address: ${_param:aptly_server_deploy_address}
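The ${_param:...} references above are resolved by Reclass when the model is rendered. As a rough illustration only (plain Python with hypothetical example values, not actual Reclass or MCP tooling), the interpolation behaves like a simple string substitution against the cluster-level _param definitions:

```python
import re

# Hypothetical example values; in a real model these come from the
# cluster-level _param definitions.
params = {
    "aptly_server_hostname": "apt",
    "cluster_domain": "local-deployment.local",
}

def interpolate(value, params):
    """Replace each ${_param:key} reference with its value."""
    return re.sub(r"\$\{_param:([^}]+)\}", lambda m: params[m.group(1)], value)

# The node name pillar above appends "01" to the resolved hostname.
name = interpolate("${_param:aptly_server_hostname}01", params)
fqdn = name + "." + interpolate("${_param:cluster_domain}", params)
print(name)   # apt01
print(fqdn)   # apt01.local-deployment.local
```

This is why the node later resolves as apt01.&lt;cluster_domain&gt; when you verify it with test.ping.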
If the offline mirror VM is in the full offline mode and does not have the infra/mirror path, create the infra/mirror/init.yml file with the following contents:
classes:
- service.docker.host
- system.git.server.single
- system.docker.client
parameters:
  linux:
    network:
      interface:
        ens3: ${_param:single_address}
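Before committing the new file, it can help to confirm that the fragment parses as valid YAML with the expected structure. A minimal sketch using PyYAML (which is already a Salt dependency); the check itself is not part of MCP tooling:

```python
import yaml

# The infra/mirror/init.yml fragment from the step above.
fragment = """
classes:
- service.docker.host
- system.git.server.single
- system.docker.client
parameters:
  linux:
    network:
      interface:
        ens3: ${_param:single_address}
"""

doc = yaml.safe_load(fragment)
# Confirm the classes list and the ens3 interface parameter are present.
assert "service.docker.host" in doc["classes"]
assert "ens3" in doc["parameters"]["linux"]["network"]["interface"]
print("fragment OK")
```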
For a complete example of the mirror content per MCP release, refer to init.yml located at https://github.com/Mirantis/mcp-local-repo-model/blob/<BUILD_ID>/, tagged with a corresponding Build ID.
Add the following pillars to infra/init.yml or verify that they are present in the model:
parameters:
  linux:
    network:
      host:
        apt:
          address: ${_param:aptly_server_deploy_address}
          names:
          - ${_param:aptly_server_hostname}
          - ${_param:aptly_server_hostname}.${_param:cluster_domain}
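These pillars make every node resolve the mirror hostname locally through an /etc/hosts entry. As a rough illustration with hypothetical example values (real values come from the _param definitions in your model), the rendered entry would look like:

```python
# Hypothetical example values for illustration only.
deploy_address = "10.0.0.14"
hostname = "apt"
domain = "local-deployment.local"

# The linux.network.host pillar renders, roughly, one /etc/hosts line
# per entry: the address followed by the listed names.
hosts_line = f"{deploy_address} {hostname} {hostname}.{domain}"
print(hosts_line)  # 10.0.0.14 apt apt.local-deployment.local
```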
Check your inventory to resolve any inconsistencies in your model:
reclass-salt --top
Use the output of the reclass-salt --top command to define any missing variables and to specify proper environment-specific values.
Generate the storage Reclass definitions for your offline image node:
salt-call state.sls reclass.storage -l debug
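The reclass.storage state turns each reclass:storage:node definition from the pillars above into a static node file on the Salt Master (under the Reclass nodes directory, typically in a _generated subdirectory). A rough Python sketch of the outcome, with hypothetical example values; this is not the actual salt-formula code:

```python
# Hypothetical resolved values for the aptly_server_node01 definition.
node = {
    "name": "apt01",
    "domain": "local-deployment.local",
    "classes": [
        "cluster.deployment_name.infra",
        "cluster.deployment_name.infra.mirror",
    ],
    "params": {"linux_system_codename": "xenial"},
}

# Assemble a node file: the classes list plus the params under _param.
lines = ["classes:"]
lines += [f"- {c}" for c in node["classes"]]
lines += ["parameters:", "  _param:"]
lines += [f"    {k}: {v}" for k, v in node["params"].items()]
generated = "\n".join(lines)

# The file is named after the node FQDN.
filename = f"{node['name']}.{node['domain']}.yml"
print(filename)   # apt01.local-deployment.local.yml
print(generated)
```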
Synchronize pillars and check the inventory again:
salt '*' saltutil.refresh_pillar
reclass-salt --top
Verify the availability of the offline mirror VM. For example:
salt 'apt01.local-deployment.local' test.ping
If the VM does not respond, verify that the Salt Master node has accepted the key for the VM using the salt-key command. For example, run salt-key -L to list all keys and their acceptance state, and salt-key -a <minion_id> to accept a pending key.