Prerequisites for MCP DriveTrain deployment

Before you proceed with the deployment, verify that you have completed the following steps:

  1. Deploy the Foundation physical node using one of the initial versions of Ubuntu Xenial, for example, 16.04.1.

    Use any standalone hardware node where you can run a KVM-based day01 virtual machine with access to the deploy/control network. The Foundation node will host the Salt Master node that also includes the MAAS provisioner by default. For an offline deployment, the Foundation node will also host the mirror VM.

  2. Depending on your case, proceed with one of the following options:

    • If you do not have a deployment metadata model:

      1. Create a model using the Model Designer UI as described in Create a deployment metadata model.


        For an offline deployment, select the Offline deployment and Local repositories options under the Repositories section on the Infrastructure parameters tab.

      2. Customize the obtained configuration drives as described in Generate configuration drives manually. For example, enable custom user access.

    • If you use an already existing model that does not have configuration drives, or you want to generate updated configuration drives, proceed with Generate configuration drives manually.

  3. Configure the following bridges on the Foundation node: br-mgm for the management network and br-ctl for the control network.

    1. Log in to the Foundation node through IPMI.


      If the IPMI network is not reachable from the management or control network, add the br-ipmi bridge for the IPMI network or any other network that is routed to the IPMI network.

    2. Install the bridge-utils package that provides the brctl utility:

      apt install bridge-utils
    3. Create the PXE bridges for the provisioning network on the Foundation node:

      brctl addbr br-mgm
      brctl addbr br-ctl
    4. Add the bridge definitions for br-mgm and br-ctl to /etc/network/interfaces. Use the definitions from your deployment metadata model.


      auto br-mgm
      iface br-mgm inet static
              bridge_ports bond0
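
      The stanza above is intentionally abridged. A complete definition typically also sets the address, netmask, and bridge options. The values below are placeholders only; substitute the addresses and bond interface from your deployment metadata model:

      ```
      # Placeholder addresses below; use the values from your deployment metadata model
      auto br-mgm
      iface br-mgm inet static
              address 10.0.0.2
              netmask 255.255.255.0
              bridge_ports bond0
              bridge_stp off
              bridge_fd 0

      auto br-ctl
      iface br-ctl inet static
              address 10.0.1.2
              netmask 255.255.255.0
              bridge_ports bond0
              bridge_stp off
              bridge_fd 0
      ```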
    5. Restart networking from the IPMI console to bring the bonds up.
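
      On Ubuntu Xenial with ifupdown, one way to do this is shown below; this is a sketch that assumes the default networking service. Run it from the IPMI console so that you do not lose your session if the network drops:

      ```
      # Restart networking so the new bridge definitions take effect
      systemctl restart networking
      # Alternatively, cycle only the affected interfaces:
      ifdown br-mgm br-ctl && ifup br-mgm br-ctl
      ```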

    6. Verify that the Foundation node bridges are up by checking the output of the ip a show command:

      ip a show br-ctl

      Example of system response:

      8: br-ctl: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
          link/ether 00:1b:21:93:c7:c8 brd ff:ff:ff:ff:ff:ff
          inet brd scope global br-ctl
             valid_lft forever preferred_lft forever
          inet6 fe80::21b:21ff:fe93:c7c8/64 scope link
             valid_lft forever preferred_lft forever
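      In addition to ip a show, you can confirm that the bridges exist and have their ports attached. This assumes bridge-utils is installed as in the earlier step:

      ```
      # List all bridges with their attached ports; br-mgm and br-ctl
      # should appear with bond0 (or your bond interface) as a port
      brctl show
      # Show the operational state of the control bridge in brief form
      ip -br link show br-ctl
      ```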
  4. Depending on your case, proceed with one of the following options: