
Reference Architectures

Ceph Monitors

Ceph monitors (MON) manage various maps such as the MON map, the CRUSH map, and others. The CRUSH map is used by clients to deterministically select the storage devices (OSDs) that receive copies of the data. Ceph monitor nodes manage where the data should be stored and maintain data consistency with the Ceph OSD nodes that store the actual data.

Ceph monitors implement HA using a master-master model:

  • One Ceph monitor node is designated the "leader." This is the node that first received the most recent cluster map replica.
  • Each other monitor node must sync its cluster map with the current leader.
  • Each monitor node that is already sync'ed with the leader becomes a provider; the leader knows which nodes are currently providers. The leader tells the other nodes which provider they should use to sync their data.

Ceph monitors use the Paxos algorithm to agree on all updates to the data they manage. Because of this, all monitors that are in the quorum have consistent, up-to-date data.

You can read more in the Ceph documentation.

Advanced Network Configuration using Open VSwitch

The Neutron networking model uses Open VSwitch (OVS) bridges and Linux namespaces to create a flexible network setup and to isolate tenants from each other at the L2 and L3 layers. Mirantis OpenStack also provides a flexible network setup model based on Open VSwitch primitives, which you can use to customize your nodes. Its most popular feature is link aggregation. While the FuelWeb UI uses a hardcoded per-node network model, the Fuel CLI tool allows you to modify it in your own way.

Note

When using encapsulation protocols for network segmentation, take header overhead into account to avoid guest network slowdowns from packet fragmentation or packet rejection. With a physical host MTU of 1500, the maximum instance (guest) MTU is 1430 for GRE and 1392 for VXLAN. When possible, increase the MTU on the network infrastructure using jumbo frames. The default Open VSwitch behavior in Mirantis OpenStack 6.0 and newer is to fragment packets larger than the MTU; in prior versions, Open VSwitch discards packets exceeding the MTU. See the official OpenStack documentation for more information.

Reference Network Model in Neutron

The FuelWeb UI uses the following per-node network model:

  • Create an OVS bridge for each NIC except for the NIC assigned to the Admin network (for example, a br-eth0 bridge for the eth0 NIC) and add each NIC to its bridge
  • Create a separate bridge for each OpenStack network:
    • br-ex for the Public network
    • br-prv for the Private network
    • br-mgmt for the Management network
    • br-storage for the Storage network
  • Connect each network's bridge with an appropriate NIC bridge using an OVS patch with an appropriate VLAN tag.
  • Assign network IP addresses to the corresponding bridges.

Note that the Admin network IP address is assigned to its NIC directly.

This network model allows the cluster administrator to manipulate cluster network entities and NICs separately, easily, and on the fly during the cluster life-cycle.

Adjust the Network Configuration via CLI

On a basic level, this network configuration is part of a data structure that provides instructions to the Puppet modules to set up a network on the current node. You can examine and modify this data using the Fuel CLI tool. Just download (then modify and upload if needed) the environment's 'deployment default' configuration:

[root@fuel ~]# fuel --env 1 deployment default
directory /root/deployment_1 was created
Created /root/deployment_1/compute_1.yaml
Created /root/deployment_1/controller_2.yaml
[root@fuel ~]# vi ./deployment_1/compute_1.yaml
[root@fuel ~]# fuel --env 1 deployment --upload

Note

Read Fuel CLI reference in Fuel User Guide.

The part of this data structure that describes how to apply the network configuration is the 'network_scheme' key in the top-level hash of the YAML file. Let's take a closer look at this substructure. The value of the 'network_scheme' key is a hash with the following keys:

  • interfaces - A hash of NICs and their low-level/physical parameters. You can set the MTU here, for example.
  • provider - Set to 'ovs' for Neutron.
  • endpoints - A hash of network ports (OVS ports or NICs) and their IP settings.
  • roles - A hash that specifies the mappings between the endpoints and internally-used roles in Puppet manifests ('management', 'storage', and so on).
  • transformations - An ordered list of OVS network primitives.

The example below shows how the MTU parameters can be changed in the network_scheme section of a node's configuration.
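
A minimal sketch of such a fragment, assuming the MTU is specified per NIC under the interfaces key (the interface names, the exact key placement, and the value 9000 are illustrative assumptions, not taken from a real node configuration):

interfaces:
  eth1:
    mtu: 9000    # enable jumbo frames on this NIC
  eth2:
    mtu: 9000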

The "Transformations" Section

You can use four OVS primitives:

  • add-br - To add an OVS bridge to the system
  • add-port - To add a port to an existing OVS bridge
  • add-bond - To create a port in an OVS bridge and add aggregated NICs to it
  • add-patch - To create an OVS patch between two existing OVS bridges

The primitives will be applied in the order they are listed.

Here are the available options:

{
  "action": "add-br",         # type of primitive
  "name": "xxx",              # unique name of the new bridge
  "provider": "ovs"           # type of provider `linux` or `ovs`
},
{
  "action": "add-port",       # type of primitive
  "name": "xxx-port",         # unique name of the new port
  "bridge": "xxx",            # name of the bridge where the port should be created
  "type": "internal",         # [optional; default: "internal"] a type of OVS
                              # interface for the port (see OVS documentation);
                              # possible values:
                              # "system", "internal", "tap", "gre", "null"
  "tag": 0,                   # [optional; default: 0] a 802.1q tag of traffic that
                              # should be captured from an OVS bridge;
                              # possible values: 0 (means port is a trunk),
                              # 1-4094 (means port is an access)
  "trunks": [],               # [optional; default: []] a set of 802.1q tags
                              # (integers from 0 to 4095) that are allowed to
                              # pass through if "tag" option equals 0;
                              # possible values: an empty list (all traffic passes),
                              # 0 (untagged traffic only), 1 (strange behavior;
                              # shouldn't be used), 2-4095 (traffic with this
                              # tag passes); e.g. [0,10,20]
  "port_properties": [],      # [optional; default: []] a list of additional
                              # OVS port properties to modify them in OVS DB
  "interface_properties": [], # [optional; default: []] a list of additional
                              # OVS interface properties to modify them in OVS DB
},
{
  "action": "add-bond",       # type of primitive
  "name": "xxx-port",         # unique name of the new bond
  "interfaces": [],           # a set of two or more bonded interfaces' names;
                              # e.g. ['eth1','eth2']
  "bridge": "xxx",            # name of the bridge where the bond should be created
  "tag": 0,                   # [optional; default: 0] a 802.1q tag of traffic which
                              # should be catched from an OVS bridge;
                              # possible values: 0 (means port is a trunk),
                              # 1-4094 (means port is an access)
  "trunks": [],               # [optional; default: []] a set of 802.1q tags
                              # (integers from 0 to 4095) which are allowed to
                              # pass through if "tag" option equals 0;
                              # possible values: an empty list (all traffic passes),
                              # 0 (untagged traffic only), 1 (strange behaviour;
                              # shouldn't be used), 2-4095 (traffic with this
                              # tag passes); e.g. [0,10,20]
  "properties": [],           # [optional; default: []] a list of additional
                              # OVS bonded port properties to modify them in OVS DB;
                              # you can use it to set the aggregation mode and
                              # balancing # strategy, to configure LACP, and so on
                              # (see the OVS documentation)
},
{
  "action": "add-patch",      # type of primitive
  "bridges": ["br0", "br1"],  # a pair of different bridges that will be connected
  "peers": ["p1", "p2"],      # [optional] abstract names for each end of the patch
  "tags": [0, 0] ,            # [optional; default: [0,0]] a pair of integers that
                              # represent an 802.1q tag of traffic that is
                              # captured from an appropriate OVS bridge; possible
                              # values: 0 (means port is a trunk), 1-4094 (means
                              # port is an access)
  "trunks": [],               # [optional; default: []] a set of 802.1q tags
                              # (integers from 0 to 4095) which are allowed to
                              # pass through each bridge if "tag" option equals 0;
                              # possible values: an empty list (all traffic passes),
                              # 0 (untagged traffic only), 1 (strange behavior;
                              # shouldn't be used), 2-4095 (traffic with this
                              # tag passes); e.g., [0,10,20]
}

A combination of these primitives allows you to make custom and complex network configurations.
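
For instance, a minimal transformations sequence for the reference model described above, creating a NIC bridge, attaching the NIC to it, creating the management bridge, and connecting the two with a tagged patch, could look like the following sketch in the YAML form used by the deployment files (the names and the VLAN tag 102 are illustrative assumptions):

- action: add-br
  name: br-eth1
- action: add-port
  name: eth1
  bridge: br-eth1
- action: add-br
  name: br-mgmt
- action: add-patch
  bridges: [br-eth1, br-mgmt]
  tags: [102, 0]    # capture VLAN 102 traffic on the br-eth1 side; untagged on br-mgmt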

NICs Aggregation

NIC bonding allows you to aggregate multiple physical links into one logical link to increase throughput and provide fault tolerance.

Documentation

  • The Linux kernel documentation about bonding can be found in the Linux Ethernet Bonding Driver HOWTO
  • You can find a shorter introduction to bonding and tips on link monitoring here
  • Cisco switches configuration guide
  • Switch configuration tips for Fuel can be found here

Types of Bonding

Open VSwitch supports the same bonding features as the Linux kernel. Fuel supports bonding either via Open VSwitch or by using Linux native bonding interfaces. By default, Fuel uses the Linux native bonding provider.

Linux supports two types of bonding:

  • IEEE 802.1AX (formerly known as 802.3ad) Link Aggregation Control Protocol (LACP). Devices on both sides of the links must communicate using LACP to set up an aggregated link, so both devices must support LACP and have it enabled and configured on these links.
  • One-side bonding does not require any special feature support from the switch side. Linux handles it using a set of traffic balancing algorithms.

One-Side Bonding Policies:

  • balance-rr - Round-robin policy. This mode provides load balancing and fault tolerance.
  • active-backup - Active-backup policy: only one slave in the bond is active. This mode provides fault tolerance.
  • balance-xor - XOR policy: transmit based on the selected transmit hash policy. This mode provides load balancing and fault tolerance.
  • broadcast - Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.
  • balance-tlb - Adaptive transmit load balancing based on the current link utilization. This mode provides load balancing and fault tolerance.
  • balance-alb - Adaptive transmit and receive load balancing based on the current link utilization. This mode provides load balancing and fault tolerance.
  • balance-slb - A modification of the balance-alb mode. SLB bonding allows a limited form of load balancing without the remote switch's knowledge or cooperation. SLB assigns each source MAC+VLAN pair to a link and transmits all packets from that MAC+VLAN through that link. Learning in the remote switch causes it to send packets to that MAC+VLAN through the same link.
  • balance-tcp - Adaptive transmit load balancing among interfaces.

LACP Policies:

  • Layer2 - Uses XOR of hardware MAC addresses to generate the hash.
  • Layer2+3 - Uses a combination of layer2 and layer3 protocol information to generate the hash.
  • Layer3+4 - Uses upper layer protocol information, when available, to generate the hash.
  • Encap2+3 - Uses the same formula as layer2+3 but relies on skb_flow_dissect to obtain the header fields, which might result in the use of inner headers if an encapsulation protocol is used. For example, this will improve the performance for tunnel users because the packets will be distributed according to the encapsulated flows.
  • Encap3+4 - Similar to Encap2+3 but uses layer3+4.

Policies Supported by Fuel

Fuel supports the following policies: 802.3ad (LACP), balance-rr, active-backup. In addition, balance-xor, broadcast, balance-tlb, and balance-alb are supported in experimental mode.

Note

LACP rate can be set in Fuel web UI for the 802.3ad (LACP) mode. The possible values of the LACP rate are: slow and fast.

These interfaces can be configured in the Fuel web UI when nodes are added to the environment, or by using the Fuel CLI and editing the YAML configuration manually.

You can specify the transmit hash policy using Fuel web UI. You can choose from layer2, layer2+3, layer3+4, encap2+3, encap3+4 values for 802.3ad, balance-xor, balance-tlb, and balance-alb modes.

Note

The balance-xor, balance-tlb, and balance-alb modes are supported in the experimental mode.

Network Verification in Fuel

Fuel has limited network verification capabilities when working with bonds. Network connectivity can be checked only for a new cluster (not for a deployed one), so the check runs while nodes are in bootstrap and no bonds are up yet. Connectivity between the slave interfaces can be checked, but not the bonds themselves.

An Example of NIC Aggregation using Fuel CLI tools

Suppose you have a node with 4 NICs and you want to bond two of them with LACP enabled ("eth2" and "eth3" here) and then assign Private and Storage networks to them. The Admin network uses a dedicated NIC ("eth0"). The Management and Public networks use the last NIC ("eth1").

To create a bonding interface using Open VSwitch, do the following:

  • Create a separate OVS bridge "br-bondnew" instead of "br-eth2" and "br-eth3".
  • Connect "eth2" and "eth3" to "br-bondnew" as a bonded port with property "lacp=active".
  • Connect "br-prv" and "br-storage" bridges to "br-bondnew" by OVS patches.
  • Leave everything else unchanged.

See the following sketch of the OVS network scheme transformations for these steps.
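
This is a minimal, illustrative transformations fragment for the steps above, assuming the bond is created with the add-bond primitive and LACP is enabled through the port properties (the exact property string and the Storage VLAN tag 103 are assumptions):

- action: add-br
  name: br-bondnew
- action: add-bond
  name: bondnew
  bridge: br-bondnew
  interfaces: [eth2, eth3]
  properties: [lacp=active]          # enable LACP on the bonded port
- action: add-patch
  bridges: [br-bondnew, br-prv]
- action: add-patch
  bridges: [br-bondnew, br-storage]
  tags: [103, 0]                     # Storage VLAN tag is illustrative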

If you are going to use Linux native bonding, follow these steps:

  • Create a new interface "bondnew" instead of "br-eth2" and "br-eth3".

  • Connect "eth2" and "eth3" to "bondnew" as a bonded port.

  • Add 'provider': 'lnx' to choose Linux native mode.

  • Add properties as a hash instead of the array used in OVS mode. The properties are the same as the options used when the bonding kernel module is loaded. You must specify which mode this bonding interface should use; all other options are optional. You can find these options in the Linux kernel documentation. For example:

    'properties':
      'mode': 1

  • Connect "br-prv" and "br-storage" bridges to "br-bondnew" by OVS patches.

  • Leave everything else unchanged.

See the following sketch of the Linux network scheme fragment.
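
This is a minimal, illustrative add-bond fragment for Linux native bonding; only the bond itself is shown, the surrounding bridge and patch entries are omitted, and the mode value of 1 (active-backup) is just an example:

- action: add-bond
  name: bondnew
  provider: lnx                # use the Linux native bonding driver
  interfaces: [eth2, eth3]
  properties:
    mode: 1                    # bonding driver option; other driver options
                               # may be added to this hash as needed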

How the Operating System Role is provisioned

Fuel provisions the Operating System role with either the CentOS or Ubuntu operating system that was selected for the environment, but Puppet does not deploy other packages on this node or configure it in any other way.

The Operating System role is defined in the openstack.yaml file; the internal name is base-os. Fuel installs a standard set of operating system packages similar to what it installs on other roles; use the dpkg -l command on Ubuntu or the rpm -qa command on CentOS to see the exact list of packages that are installed.

A few configurations are applied to an Operating System role. For environments provisioned with the traditional tools, these configurations are applied by Cobbler snippets that run during the provisioning phase. When using image-based provisioning, cloud-init applies these configurations. These include:

  • Disk partitioning. The default partitioning allocates a small partition (about 15GB) on the first disk for the root partition and leaves the rest of the space unallocated; users can manually allocate the remaining space.
  • The public key that is assigned to all target nodes in the environment
  • The kernel parameters that are applied to all target nodes
  • Network settings. The Admin logical network is configured with a static IP address; no other networking is configured.

The following configurations that are set in the Fuel Web UI have no effect on the Operating System role:

  • Mapping of logical networks to physical interfaces. All connections for the logical networks that connect this node to the rest of the environment need to be defined.

Image Based Provisioning

Operating systems are usually distributed with their own installers (e.g. Anaconda or Debian-installer). Fuel 7.0 does not use these installers; instead, it uses image-based provisioning, which is a faster and enterprise-ready method.

Whereas installers like Anaconda or Debian-installer were used in older Fuel versions to build the operating system from scratch on each node using online or local repositories, with image-based provisioning a base image is created and copied to each node, where it is used to deploy the operating system on the local disks.

Image-based provisioning significantly reduces provisioning time, and copying the same image to all nodes is more reliable than building an operating system from scratch on each node.

Image-based provisioning is implemented using the Fuel Agent. The image-based provisioning process consists of two independent steps:

  1. Operating system image building.

In this step, an operating system is installed from a set of repositories into a directory, which is then packed into the operating system image. The build script is run once, no matter how many nodes are going to be deployed.

Ubuntu images are built on the master node, one operating system image per environment. We need to build a different image for each environment because each environment has its own set of repositories, and the repository sets may differ in package contents and versions. When the user clicks the "Deploy changes" button, we check whether the operating system image is already available for that particular environment; if it is not, we build a new one just before starting the actual provisioning.

  2. Copying of the operating system image to nodes.

Operating system images that have been built can be downloaded via HTTP from the Fuel Master node. So, when a node is booted into the so-called bootstrap operating system, we can run an executable script to download the necessary operating system image and put it on a hard drive. We do not need to reboot the node into an installer OS as we would with Anaconda or Debian-installer; our executable script plays that role instead. It just needs to be installed into the bootstrap operating system.

For both of these steps we have a special program component called Fuel Agent. Fuel Agent is nothing more than a set of data-driven executable scripts. One of these scripts is used for building operating system images; we run it on the master node, passing a set of repository URIs and a set of package names to it. Another script is used for the actual provisioning; we run it on each node and pass provisioning data to it. These data contain information about disk partitions, the initial node configuration, the operating system image location, and so on. When run on a node, this script prepares disk partitions, downloads the operating system images, and puts these images on the partitions. Note that when we say operating system image we actually mean a set of images, one per file system. If, for example, we want / and /boot to be two separate file systems, we need two separate images, one for / and another for /boot. Images in this case are binary copies of the corresponding file systems.

Fuel Agent

Fuel Agent is a set of data-driven executable scripts written in Python. Its high-level architecture is depicted below:

_images/fuel-agent-architecture.png

When we run one of its executable entry points, we pass input data that describe what needs to be done and how, and we specify which data driver should be used to parse these data. For example:

/usr/bin/provision --input_data_file /tmp/provision.json --data_driver nailgun

The heart of Fuel Agent is the manager fuel_agent/manager.py. It does not understand the input data directly, but it does understand the sets of Python objects defined in fuel_agent/objects. The data driver is where raw input data are converted into a set of these objects. Using this set of objects, the manager then does something useful like creating partitions or building operating system images. The manager implements only the high-level logic for these cases and uses a low-level utility layer, defined in fuel_agent/utils, to perform the real actions such as launching the parted or mkfs commands.

The Fuel Agent config file is located at /etc/fuel-agent/fuel-agent.conf. There are plenty of configuration parameters that can be set, and all of them have default values defined in the source code. All configuration parameters are well commented.

The Fuel Agent leverages cloud-init for the image-based deployment process. It also creates a cloud-init config drive, which allows for post-provisioning configuration. The config drive uses jinja2 templates, which can be found in /usr/share/fuel-agent/cloud-init-templates. These templates are filled in with values taken from the input data.

Image building

When an Ubuntu-based environment is provisioned, a pre-provisioning task runs the /usr/bin/fa_build_image script. This script is one of the executable Fuel Agent entry points. As input data, we pass a list of Ubuntu repositories from which an operating system image is built and some other metadata. When launched, Fuel Agent checks whether an Ubuntu image is already available for this environment; if it is not, Fuel Agent builds an operating system image and puts it in a directory defined in the input data so as to make it available via HTTP.

Operating system provisioning

The Fuel Agent is installed into a bootstrap ramdisk. An operating system can easily be installed on a node if the node has been booted with this ramdisk. We can simply run the /usr/bin/provision executable with the required input data to start provisioning. This allows provisioning to occur without a reboot.

The input data need to contain at least the following information:

  • Partitioning scheme for the node. This scheme needs to describe the necessary partitions and the disks on which to create them, the necessary LVM groups and volumes, and any software RAID devices. It also specifies the disk on which the bootloader is to be installed and the necessary file systems with their mount points. Some block devices are meant to receive operating system images (one image per file system), while on other block devices file systems are created using the mkfs command.
  • Operating system images URIs. Fuel Agent needs to know where to download the images and which protocol to use for this (by default, HTTP is used).
  • Data for the initial node configuration. Currently, we use cloud-init for the initial configuration; Fuel Agent prepares the cloud-init config drive, which is put on a small partition at the end of the first hard drive. The config drive is created using jinja2 templates, which are filled in with values taken from the input data. After the first reboot, cloud-init is run by upstart or a similar init system. It finds this config drive and configures services like NTP and MCollective, and performs an initial network configuration so that Fuel can access this particular node via SSH or MCollective and run Puppet to perform the final deployment.
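
A purely conceptual outline of what such input data cover is sketched below in YAML. The key names are illustrative assumptions, not the actual schema understood by the nailgun data driver; the real data are passed to the node as /tmp/provision.json.

partitioning:                  # disks, partitions, LVM, RAID, file systems, mount points
  - device: /dev/sda
    bootloader: true
    partitions:
      - mount: /boot
        size: 200              # MB; file system created with mkfs
      - mount: /
        size: 15000            # receives an operating system image
images:                        # where to download the images and how
  - uri: http://<FUEL_MASTER_IP>:8080/targetimages/ubuntu_amd64.img.gz
    target: /
configdrive:                   # values used to render the cloud-init templates
  ssh_authorized_key: <public key>
  ntp_servers: [<FUEL_MASTER_IP>]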

The sequence diagram is below:

_images/fuel-agent-sequence.png

Viewing the control files on the Fuel Master node

To view the contents of the bootstrap ramdisk, run the following commands on the Fuel Master node:

cd /var/www/nailgun/bootstrap
mkdir initramfs
cd initramfs
gunzip -c ../initramfs.img | cpio -idv

You are now in the root file system of the ramdisk and can view the files that are included in the bootstrap node. For example:

cat /etc/fuel-agent/fuel-agent.conf

Troubleshooting image-based provisioning

The following files provide information for analyzing problems with the Fuel Agent provisioning.

  • Bootstrap
    • etc/fuel-agent/fuel-agent.conf -- main configuration file for the Fuel Agent, defines the location of the provision data file, data format and log output, whether debugging is on or off, and so forth.
    • tmp/provision.json -- Astute puts this file on a node (on the in-memory file system) just before running the provision script.
    • usr/bin/provision -- executable entry point for provisioning. Astute runs this; it can also be run manually.
  • Master
    • var/log/remote/node-N.domain.tld/bootstrap/fuel-agent.log -- this is where Fuel Agent log messages are recorded when the provision script is run; <N> is the node ID of the provisioned node.

Task-based deployment

Task schema

Tasks that are used to build a deployment graph are described with the following common schema:

- id: [graph_node_id, which is a name of the task]
  version: [a version of the tasks graph execution engine]
  type: [one of: stage, group, skipped, puppet, shell]
  role: [matches roles for which this task should be executed]
  groups: [multi-roles assigned to the task, mutually exclusive with the role]
  requires: [requirements for a specific task or stage]
  required_for: [specifies which tasks/stages depend on this task]
  reexecute_on: [makes the task rerun after a given task is done]
  cross-depended-by: [establishes synchronization points across concurrent
  or asynchronous tasks; may be used only with version 2.0.0]
  cross-depends: [reverse of cross-depended-by, both may be a regex]
  condition: [describes the task limitations like conflicting UI settings]
  parameters: [the task execution parameters like a script or a manifest]

Note that the role may be a wildcard '*' to match any of the multi-roles assigned to a node. When matched, the task is executed only once per deploy run.

The groups may be specified as regular expressions to match the assigned multi-roles. For example, /.*/ matches all multi-roles, including custom ones from installed plugins, if any. When matched, the task is executed for each matching group within the deploy run in progress.

The cross-depended-by and cross-depends values may not be lists; use separate name: entries to specify multiple dependencies. An illustrative task definition that combines these fields is shown below.
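
This sketch combines the schema fields described above into a single hypothetical task; the task id, manifest path, and regex are assumptions, not tasks that ship with Fuel:

- id: example_task
  version: 2.0.0
  type: puppet
  role: '*'                            # run once per node, whatever its roles
  requires: [post_deployment_start]
  required_for: [post_deployment_end]
  cross-depends:
    - name: /netconfig/                # wait for matching tasks on other nodes
  parameters:
    puppet_manifest: /etc/puppet/modules/example/example.pp
    puppet_modules: /etc/puppet/modules
    timeout: 300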

Stages

Stages are used to build a graph skeleton. The skeleton is then extended with additional functionality like provisioning, etc.

The deployment graph has the following stages:

- pre_deployment_start
- pre_deployment_end
- deploy_start
- deploy_end
- post_deployment_start
- post_deployment_end

Here is the stage example:

- id: deploy_end
  type: stage
  requires: [deploy_start]

Groups

Groups are a representation of roles in the main deployment graph:

- id: controller
  type: group
  role: [controller]
  requires: [primary-controller]
  required_for: [deploy_end]
  parameters:
    strategy:
      type: parallel
      amount: 6

Note

The primary-controller group must already be deployed when the controller group starts its own execution, and the execution of this group must finish for deploy_end to be considered done.

Here is an example of the full graph of groups:

_images/groups.png

Relation of roles to groups in the execution flow is depicted below:

_images/role_and_groups.png

Strategy

You can also specify a strategy for groups in the parameters section. Fuel supports the following strategies:

  • parallel - all nodes in this group will be executed in parallel. If there are other groups that do not depend on each other, they will be executed in parallel as well. For example, Cinder and Compute groups.
  • parallel by amount - run in parallel by a specified number. For example, amount: 6.
  • one_by_one - deploy all nodes in this group in a strict one-by-one succession.
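
For example, a group that must be deployed strictly one node at a time could declare the one_by_one strategy; this is an illustrative sketch modeled on the groups example above:

- id: primary-controller
  type: group
  role: [primary-controller]
  required_for: [deploy_end]
  parameters:
    strategy:
      type: one_by_one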

Skipped

Making a task skipped guarantees that this task will not be executed, while all of the task's dependencies will be preserved:

- id: netconfig
  type: skipped
  groups: [primary-controller, controller, cinder, compute, ceph-osd,
           zabbix-server, primary-mongo, mongo]
  required_for: [deploy_end]
  requires: [logging]
  parameters:
    puppet_manifest: /etc/puppet/modules/osnailyfacter/other_path/netconfig.pp
    puppet_modules: /etc/puppet/modules
    timeout: 3600

Puppet

A task of type puppet is the preferred way to execute deployment code on nodes; only the MCollective puppet agent is capable of executing code in the background.

In Fuel, this is the only task type that can be used in the main deployment stages, between deploy_start and deploy_end.

Example:

- id: netconfig
  type: puppet
  groups: [primary-controller, controller, cinder, compute, ceph-osd,
           zabbix-server, primary-mongo, mongo]
  required_for: [deploy_end]
  requires: [logging]
  parameters:
    puppet_manifest: /etc/puppet/modules/osnailyfacter/other_path/netconfig.pp
    puppet_modules: /etc/puppet/modules
    timeout: 3600

Shell

Shell tasks should be used outside of the main deployment procedure. Basically, a shell task just executes a blocking command on the specified roles.

Example:

- id: enable_quorum
  type: shell
  role: [primary-controller]
  requires: [post_deployment_start]
  required_for: [post_deployment_end]
  parameters:
    cmd: ruby /etc/puppet/modules/osnailyfacter/modular/astute/enable_quorum.rb
    timeout: 180

Upload file

This task uploads the data specified in the data parameter to the destination specified in path:

- id: upload_data_to_file
  type: upload_file
  role: '*'
  requires: [pre_deployment_start]
  parameters:
    path: /etc/file_name
    data: 'arbitrary info'

Sync

The sync task distributes files from the src directory on the Fuel Master node to the dst directory on the target hosts matched by role:

- id: rsync_core_puppet
  type: sync
  role: '*'
  required_for: [pre_deployment_end]
  requires: [upload_core_repos]
  parameters:
    src: rsync://<FUEL_MASTER_IP>:/puppet/
    dst: /etc/puppet
    timeout:

Copy files

A task of the copy_files type reads data from src and saves it in the file specified in the dst argument. Permissions can be specified for a group of files, as shown in the example:

- id: copy_keys
  type: copy_files
  role: '*'
  required_for: [pre_deployment_end]
  requires: [generate_keys]
  parameters:
    files:
      - src: /var/lib/fuel/keys/{CLUSTER_ID}/neutron/neutron.pub
        dst: /var/lib/astute/neutron/neutron.pub
    permissions: '0600'
    dir_permissions: '0700'

API

If you want to change or add some tasks right on the Fuel Master node, just add the tasks.yaml file and respective manifests in the folder for the release that you are interested in. Then run the following command:

fuel rel --sync-deployment-tasks --dir /etc/puppet

If you want to overwrite the deployment tasks for any specific release/environment, use the following commands:

fuel rel --rel <id> --deployment-tasks --download
fuel rel --rel <id> --deployment-tasks --upload

fuel env --env <id> --deployment-tasks --download
fuel env --env <id> --deployment-tasks --upload

After this is done, you will be able to run a customized graph of tasks. To do that, use a basic command:

fuel node --node <1>,<2>,<3> --tasks upload_repos netconfig

The developer needs to specify the nodes that should be used in the deployment and the task IDs. The order in which they are provided does not matter; it is computed from the dependencies specified in the database.

Note

A task will not be executed on a node if the task is mapped to the Controller role but the node does not have this role.

Skipping tasks

Use the skip parameter to skip tasks:

fuel node --node <1>,<2>,<3> --skip netconfig hiera

The list of tasks specified with the skip parameter will be skipped during graph traversal in Nailgun.

If there are task dependencies, you may want to make use of a "smarter" traversal -- you will need to specify the start and end nodes in the graph:

fuel node --node <1>,<2>,<3> --end netconfig

This will deploy everything up to and including the netconfig task. This means the command will deploy all tasks that are part of pre_deployment (key generation, rsync of manifests, time sync, repository upload), as well as such tasks as hiera setup, globals computation, and possibly some other basic preparatory tasks:

fuel node --node <1>,<2>,<3> --start netconfig

This starts from the netconfig task (including it) and deploys all the tasks that are part of post_deployment.

For example, if you want to execute only the netconfig successors, use:

fuel node --node <1>,<2>,<3> --start netconfig --skip netconfig

You will also be able to use start and end at the same time:

fuel node --node <1>,<2>,<3> --start netconfig --end upload_cirros

Nailgun will build a path that includes only necessary tasks to join these two points.

Graph representation

Beginning with Fuel 6.1, in addition to the commands above, there is also a helper that allows you to download the deployment graph in DOT format and render it later.

Commands for downloading graphs

Use the following commands to download graphs:

  • To download a full graph for the environment with ID 1 and print it on the screen, use the command below. Note that the output is printed to stdout.

    fuel graph --env <1> --download
    
  • To download the graph and save it to the graph.gv file:

    fuel graph --env <1> --download > graph.gv
    
  • It is also possible to specify the same options as for the deployment command, for example, the start or end node in the graph:

    fuel graph --env <1> --download --start netconfig > graph.gv
    
    fuel graph --env <1> --download --end netconfig > graph.gv
    
  • You can also specify both:

    fuel graph --env <1> --download --start netconfig --end upload_cirros > graph.gv
    
  • To skip the tasks (they will be grayed out in the graph visualization), use:

    fuel graph --env <1> --download --skip netconfig hiera  > graph.gv
    
  • To completely remove the skipped tasks from graph visualization, use the --remove parameter:

    fuel graph --env <1> --download --start netconfig --end upload_cirros --remove skipped > graph.gv
    
  • To see only the parents of a particular task:

    fuel graph --env 1 --download --parents-for hiera  > graph.gv
    

Commands for rendering graphs

  • A downloaded graph in DOT format can be rendered. This requires additional packages to be installed:

    • Graphviz using apt-get install graphviz or yum install graphviz commands.
    • pydot-ng using pip install pydot-ng command or pygraphviz using pip install pygraphviz command.
  • After installing the packages, you can render the graph using the command below. It takes the contents of the graph.gv file, renders it as a PNG image, and saves it as graph.gv.png.

    fuel graph --render graph.gv
    
  • To read the graph representation from stdin, use:

    fuel graph --render -
    
  • To avoid creating an intermediate file when downloading and rendering a graph, you can combine both commands:

    fuel graph --env <1> --download | fuel graph --render -
    

FAQ

What can I use for deployment with groups?

You can only use Puppet for the main deployment.

All agents except Puppet work in a blocking way, and the current deployment model cannot mix blocking and non-blocking tasks in the main deployment.

In the pre_deployment and post_deployment stages, any of the supported task drivers can be used.

Is it possible to specify cross-dependencies between groups?

In Fuel 6.0 and earlier, there is no model that allows running tasks on the primary Controller, then on a Controller, and then back on the primary Controller.

In Fuel 6.1 and newer, the cross-dependencies are resolved by the post_deployment stage.

How can I end at the provision state?

Provision is not a part of task-based deployment.

How to stop deployment at the network configuration state?

You can use the following Fuel CLI command:

fuel node --node <1>,<2>,<3> --end netconfig

The command executes the deployment up to the network configuration step.

Additional task for an existing role

If you would like to add an extra task for an existing role, follow these steps:

  1. Add the task description to the /etc/puppet/2014.2.2-6.1/modules/my_tasks.yaml file.

    - id: my_task
      type: puppet
      groups: [compute]
      required_for: [deploy_end]
      requires: [netconfig]
      parameters:
        puppet_manifest: /etc/puppet/modules/my_task.pp
        puppet_modules: /etc/puppet/modules
        timeout: 3600
    
  2. Run the following command:

    fuel rel --sync-deployment-tasks --dir /etc/puppet/2014.2.2-6.1
    

After syncing the tasks to the Nailgun database, you will be able to deploy the new task on the selected groups.

Skipping task by API or by configuration

There are several mechanisms to skip a certain task.

To skip a task, you can use one of the following:

  • Change the task's type to skipped:

    - id: horizon
      type: skipped
      role: [primary-controller]
      requires: [post_deployment_start]
      required_for: [post_deployment_end]
    
  • Add a condition that is always false:

    - id: horizon
      type: puppet
      role: [primary-controller]
      requires: [post_deployment_start]
      required_for: [post_deployment_end]
      condition: 'true != true'
    
  • Do an API request:

    fuel node --node <1>,<2>,<3> --skip horizon
    

Creating a separate role and attaching a task to it

To create a separate role and attach a task to it, follow these steps:

  1. Create a file named redis.yaml with the following content:

    meta:
      description: Simple redis server
      name: Controller
    name: redis
    volumes_roles_mapping:
      - allocate_size: min
        id: os
    
  2. Create a role:

    fuel role --rel 1 --create --file redis.yaml
    
  3. After this is done, you can go to the Fuel Web UI and check that the redis role has been created.

  4. You can now attach tasks to the role. First, install the redis Puppet module:

    puppet module install thomasvandoren-redis
    
  5. Write a simple manifest to /etc/puppet/modules/redis/example/simple_redis.pp and include redis.

  6. Create a configuration for Fuel in /etc/puppet/modules/redis/example/redis_tasks.yaml:

    # redis group
      - id: redis
        type: group
        role: [redis]
        required_for: [deploy_end]
        tasks: [globals, hiera, netconfig, install_redis]
        parameters:
          strategy:
            type: parallel
    
    # Install simple redis server
      - id: install_redis
        type: puppet
        requires: [netconfig]
        required_for: [deploy_end]
        parameters:
          puppet_manifest: /etc/puppet/modules/redis/example/simple_redis.pp
          puppet_modules: /etc/puppet/modules
          timeout: 180
    
  7. Run the following command:

    fuel rel --sync-deployment-tasks --dir /etc/puppet/2014.2.2-6.1/
    
  8. Create an OpenStack environment. Note the following:

    • Configure the Public network properly since the redis packages are fetched from upstream.
    • Enable the Assign public network to all nodes option on the Settings tab of the Fuel Web UI.
  9. Provision the redis node:

    fuel node --node <1> --env <1> --provision
    
  10. Finish the installation on install_redis (there is no need to execute all tasks from the post_deployment stage):

    fuel node --node <1> --end install_redis
    

Swapping a task with a custom task

To swap a task with a custom one, you should change the path to the executable file:

- id: netconfig
  type: puppet
  groups: [primary-controller, controller, cinder, compute, ceph-osd, zabbix-server, primary-mongo, mongo]
  required_for: [deploy_end]
  requires: [logging]
  parameters:
      # old puppet manifest
      # puppet_manifest: /etc/puppet/modules/osnailyfacter/netconfig.pp

      puppet_manifest: /etc/puppet/modules/osnailyfacter/custom_network_configuration.pp
      puppet_modules: /etc/puppet/modules
      timeout: 3600

Fuel Repository Mirroring

Starting with Mirantis OpenStack 6.1, repositories are no longer limited to being local to the Fuel Master: it is assumed that a given user has Internet access and can download content from Mirantis and upstream mirrors. This impacts users with limited Internet access or unreliable connections.

Internet-based mirrors can be broken down into three categories:

  • Ubuntu
  • MOS DEBs
  • MOS RPMs

There are two command-line utilities, fuel-createmirror and fuel-package-updates, which can replicate the mirrors.

Use fuel-createmirror for Ubuntu and MOS DEB packages.

Use fuel-package-updates for MOS RPM packages.

fuel-createmirror is a utility that can replicate part or all of an APT repository. It can replicate Ubuntu and MOS DEB repositories and uses rsync as its backend.

fuel-package-updates is a utility written in Python that can pull entire APT and YUM repositories via recursive wget or rsync. Additionally, it can update Fuel environment configurations to use a given set of repositories.

Issue the following command to check the fuel-package-updates options:

fuel-package-updates -h

Note

If you change the default password (admin) in Fuel web UI, you will need to run the utility with the --password switch, or it will fail.

See also

Documentation on MOS RPMs mirror structure.