This documentation provides information on how to deploy and operate a Mirantis OpenStack for Kubernetes (MOS) environment. It is intended to help operators understand the core concepts of the product and provides sufficient information to deploy and operate the solution.
The information provided in this documentation set is constantly improved and amended based on feedback and requests from MOS consumers.
The following table lists the guides included in the documentation set you are reading:
Guide |
Purpose |
---|---|
MOS Reference Architecture |
Learn the fundamentals of MOS reference architecture to appropriately plan your deployment |
MOS Deployment Guide |
Deploy a MOS environment of a preferred configuration using supported deployment profiles tailored to the demands of specific business cases |
MOS Operations Guide |
Operate your MOS environment |
MOS Release notes |
Learn about new features and bug fixes in the current MOS version |
The MOS documentation home page contains references to all guides included in this documentation set. For your convenience, we provide all guides in HTML (default), single-page HTML, PDF, and ePUB formats. To use the preferred format of a guide, select the required option from the Formats menu next to the guide title.
This documentation is intended for engineers who have the basic knowledge of Linux, virtualization and containerization technologies, Kubernetes API and CLI, Helm and Helm charts, Mirantis Kubernetes Engine (MKE), and OpenStack.
This documentation set includes descriptions of Technology Preview features. A Technology Preview feature provides early access to upcoming product innovations, allowing customers to experience the functionality and provide feedback during the development process. Technology Preview features may be privately or publicly available, but in neither case are they intended for production use. While Mirantis will provide support for such features through official channels, normal Service Level Agreements do not apply. Customers may be supported by Mirantis Customer Support or Mirantis Field Support.
As Mirantis considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features.
During the development of a Technology Preview feature, additional components may become available to the public for testing. Because Technology Preview features are still under development, Mirantis cannot guarantee the stability of such features. As a result, if you are using Technology Preview features, you may not be able to seamlessly upgrade to subsequent releases of that feature. Mirantis makes no guarantees that Technology Preview features will be graduated to a generally available product release.
The Mirantis Customer Success Organization may create bug reports on behalf of support cases filed by customers. These bug reports will then be forwarded to the Mirantis Product team for possible inclusion in a future release.
The following table contains the released revision of the documentation set you are reading:
Release date |
Description |
---|---|
November 05, 2020 |
MOS GA release |
December 23, 2020 |
MOS GA Update release |
This documentation set uses the following conventions in the HTML format:
Convention |
Description |
---|---|
boldface font |
Inline CLI tools and commands, titles of the procedures and system response examples, table titles |
|
File names and paths, Helm chart parameters and their values, package names, node names and labels, and so on |
italic font |
Information that distinguishes some concept or term |
External links and cross-references, footnotes |
|
Main menu > menu item |
GUI elements that include any part of interactive user interface and menu navigation |
Superscript |
Some extra, brief information |
Note The Note block |
Messages of a generic meaning that may be useful for the user |
Caution The Caution block |
Information that helps the user avoid mistakes and undesirable consequences when following the procedures |
Warning The Warning block |
Messages with details that can be easily missed but should not be ignored by the user and are valuable to review before proceeding |
See also The See also block |
List of references that may be helpful for understanding related tools, concepts, and so on |
Learn more The Learn more block |
Used in the Release Notes to wrap a list of internal references to the reference architecture, deployment and operation procedures specific to a newly implemented product feature |
Mirantis OpenStack for Kubernetes (MOS) is a virtualization platform that provides an infrastructure for cloud-ready applications, in combination with reliability and full control over the data.
MOS alloys OpenStack, an open-source cloud infrastructure software, with application management techniques used in the Kubernetes ecosystem, including container isolation, state enforcement, declarative definition of deployments, and others.
MOS integrates with Mirantis Container Cloud to rely on its capabilities for bare-metal infrastructure provisioning, Kubernetes cluster management, and continuous delivery of the stack components.
MOS simplifies the work of a cloud operator by automating all major cloud life cycle management routines including cluster updates and upgrades.
A Mirantis OpenStack for Kubernetes (MOS) deployment profile is a thoroughly tested and officially supported reference architecture that is guaranteed to work at a specific scale and is tailored to the demands of a specific business case, such as generic IaaS cloud, Network Function Virtualisation infrastructure, Edge Computing, and others.
A deployment profile is defined as a combination of:
Services and features the cloud offers to its users.
Non-functional characteristics that users and operators should expect when running the profile on top of a reference hardware configuration. Including, but not limited to:
Performance characteristics, such as an average network throughput between VMs in the same virtual network.
Reliability characteristics, such as the cloud API error response rate when recovering a failed controller node.
Scalability characteristics, such as the total number of virtual routers tenants can run simultaneously.
Hardware requirements - the specification of physical servers and networking equipment required to run the profile in production.
Deployment parameters that a cloud operator can tweak within a certain range without the risk of breaking the cloud or losing support.
In addition, the following items may be included in a definition:
Compliance-driven technical requirements, such as TLS encryption of all external API endpoints.
Foundation-level software components, such as Tungsten Fabric or Open vSwitch as a back end for the networking service.
Note
Mirantis reserves the right to revise the technical implementation of any profile at will while preserving its definition - the functional and non-functional characteristics that operators and users are known to rely on.
Profile |
OpenStackDeployment CR Preset |
Description |
---|---|---|
Cloud Provider Infrastructure |
|
Provides the core set of services an IaaS vendor would need, including some extra functionality. The profile is designed to support up to 30 compute nodes and a small number of storage nodes. The core set of services provided by the profile includes:
|
Cloud Provider Infrastructure with Tungsten Fabric |
|
A variation of the Cloud Provider Infrastructure profile with Tungsten Fabric as a back end for networking. |
See also
Mirantis OpenStack for Kubernetes (MOS) includes the following key design elements:
HelmBundle Operator
The HelmBundle Operator is the realization of the Kubernetes Operator
pattern that provides a Kubernetes custom resource of the HelmBundle
kind and code running inside a pod in Kubernetes. This code handles changes,
such as creation, update, and deletion, in the Kubernetes resources of this
kind by deploying, updating, and deleting groups of Helm releases from
specified Helm charts with specified values.
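As an illustration only, a minimal HelmBundle resource might look like the following sketch. The bundle name, chart URL, and values are hypothetical, and the chart and version field names are assumptions rather than definitions taken from this guide:
apiVersion: lcm.mirantis.com/v1alpha1
kind: HelmBundle
metadata:
  name: example-bundle          # hypothetical bundle name
  namespace: osh-system
spec:
  releases:
  - name: example-release       # Helm release managed by the operator
    chart: https://example.com/charts/example-0.1.0.tgz  # assumed field: location of the chart tarball
    version: 0.1.0                                        # assumed field: chart version
    values:
      replicas: 3               # arbitrary values passed to the Helm release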
OpenStack
The OpenStack platform manages virtual infrastructure resources, including virtual servers, storage devices, networks, and networking services, such as load balancers, as well as provides management functions to the tenant users.
Various OpenStack services are running as pods in Kubernetes and are
represented as appropriate native Kubernetes resources, such as
Deployments
, StatefulSets
, and DaemonSets
.
For a simple, resilient, and flexible deployment of OpenStack and related services on top of a Kubernetes cluster, MOS uses OpenStack-Helm that provides a required collection of the Helm charts.
Also, MOS uses OpenStack Operator as the realization of the
Kubernetes Operator pattern. The OpenStack Operator provides a custom
Kubernetes resource of the OpenStackDeployment
kind and code running
inside a pod in Kubernetes. This code handles changes such as creation,
update, and deletion in the Kubernetes resources of this kind by
deploying, updating, and deleting groups of the HelmBundle
resources
handled by the HelmBundle Operator to manage OpenStack in
Kubernetes through the OpenStack-Helm charts.
Ceph
Ceph is a distributed storage platform that provides storage resources, such as objects and virtual block devices, to virtual and physical infrastructure.
MOS uses Rook as the implementation of the Kubernetes Operator
pattern that manages resources of the CephCluster
kind to deploy and
manage Ceph services as pods on top of Kubernetes to provide Ceph-based
storage to the consumers, which include OpenStack services, such as Volume
and Image services, and underlying Kubernetes through Ceph CSI (Container
Storage Interface).
The Ceph controller is the implementation of the Kubernetes Operator
pattern, that manages resources of the MiraCeph
kind to simplify
management of the Rook-based Ceph clusters.
StackLight Logging, Monitoring, and Alerting
The StackLight component is responsible for collection, analysis, and visualization of critical monitoring data from physical and virtual infrastructure, as well as alerting and error notifications through a configured communication system, such as email. StackLight includes the following key sub-components:
Prometheus
Elasticsearch
Fluentd
Kibana
This section provides hardware requirements for the Mirantis OpenStack for Kubernetes (MOS) cluster.
Note
A MOS managed cluster is deployed by a Mirantis Container Cloud baremetal-based management cluster. For the hardware requirements for this kind of management clusters, see Mirantis Container Cloud Reference Architecture: Reference hardware configuration.
The MOS reference architecture includes the following node types:
Host OpenStack control plane services such as database, messaging, API, schedulers, conductors, and L3 and L2 agents, as well as the StackLight components.
Optional; these nodes host OpenStack gateway services, including L2, L3, and DHCP agents. The tenant gateway nodes are combined with the OpenStack control plane nodes. The strict requirement is a dedicated physical network (bond) for the tenant network traffic.
Required only if Tungsten Fabric (TF) is enabled as a back end for the OpenStack networking. These nodes host the TF control plane services such as Cassandra database, messaging, API, control, and configuration services.
Required only if TF is enabled as a back end for the OpenStack networking. These nodes host the TF analytics services such as Cassandra, ZooKeeper, and the collector.
Hosts OpenStack Compute services such as QEMU, L2 agents, and others.
Runs the underlying Kubernetes cluster management services. The MOS reference configuration requires a minimum of three infrastructure nodes.
The table below specifies the hardware resources the MOS reference architecture recommends for each node type.
Node type |
# of servers |
CPU cores # per server |
Memory (GB) per server |
Disk space per server |
NICs # per server |
---|---|---|---|---|---|
OpenStack control plane, gateway 0, and StackLight nodes |
3 |
32 |
128 |
2 TB SSD |
5 |
Tenant gateway (optional) |
0-3 |
32 |
128 |
2 TB SSD |
5 |
Tungsten Fabric control plane nodes 1 |
3 |
16 |
64 |
500 GB SSD |
1 |
Tungsten Fabric analytics nodes 1 |
3 |
32 |
64 |
1 TB SSD |
1 |
Compute node |
3 (varies) |
16 |
64 |
500 GB SSD |
5 |
Infrastructure node (Kubernetes cluster management) |
3 |
16 |
64 |
500 GB SSD |
5 |
Infrastructure node (Ceph) 2 |
3 |
16 |
64 |
1 SSD 500 GB and 2 HDDs 2 TB each |
5 |
Note
The exact hardware specifications and number of nodes depend on a cloud configuration and scaling needs.
OpenStack gateway services can optionally be moved to separate nodes.
TF control plane and analytics nodes can be combined with a respective addition of RAM, CPU, and disk space to the hardware hosts. However, Mirantis does not recommend such a configuration for production environments because it increases the risk of cluster downtime if one of the nodes unexpectedly fails.
A Ceph cluster with 3 Ceph nodes does not provide hardware fault tolerance and is not eligible for recovery operations, such as a disk or an entire node replacement.
A Ceph cluster uses a replication factor of 3. If the number of alive Ceph OSDs is less than 3, the Ceph cluster moves to the degraded state and restricts write operations until the number of alive Ceph OSDs equals the replication factor again.
Note
If you are looking to try MOS and do not have much hardware at your disposal, you can deploy it in a virtual environment, for example, on top of another OpenStack cloud using the sample Heat templates.
Note that the tooling is provided for reference only and is not a part of the product itself. Mirantis does not guarantee its interoperability with the latest MOS version.
This section lists the infrastructure requirements for the Mirantis OpenStack for Kubernetes (MOS) reference architecture.
Service |
Description |
---|---|
MetalLB |
MetalLB exposes external IP addresses to access applications in a Kubernetes cluster. |
DNS |
The Kubernetes Ingress NGINX controller is used to expose OpenStack services outside of a Kubernetes deployment. Access to the Ingress services is allowed only by its FQDN. Therefore, DNS is a mandatory infrastructure service for an OpenStack on Kubernetes deployment. |
The OpenStack Operator component is a combination of the following entities:
The OpenStack Controller runs in a set of containers in a pod in Kubernetes. The OpenStack Controller is deployed as a Deployment with 1 replica only. The failover is provided by Kubernetes that automatically restarts the failed containers in a pod.
However, given the recommendation to use a separate Kubernetes cluster for each OpenStack deployment, the controller, in its envisioned mode of operation, manages only a single OpenStackDeployment resource, which makes proper HA much less of an issue.
The OpenStack Controller is written in Python using Kopf, as a Python framework to build Kubernetes operators, and Pykube, as a Kubernetes API client.
Using Kubernetes API, the controller subscribes to changes to resources of
kind: OpenStackDeployment
, and then reacts to these changes by creating,
updating, or deleting appropriate resources in Kubernetes.
The basic child resources managed by the controller are of
kind: HelmBundle
. They are rendered from templates taking into account
an appropriate values set from the main
and features
fields in the
OpenStackDeployment resource.
Then, the common fields are merged to resulting data structures. Lastly, the services fields are merged providing the final and precise override for any value in any Helm release to be deployed or upgraded.
The constructed HelmBundle
resources are then supplied to the Kubernetes
API. The HelmBundle controller picks up the changes in these resources,
similarly to the OpenStack Operator, and translates to the Helm releases,
deploying, updating, or deleting native Kubernetes resources.
Container |
Description |
---|---|
|
The core container that handles changes in the |
|
The container that watches the |
|
The container that watches all Kubernetes native
resources, such as |
|
The container that provides data exchange between different components such as Ceph. |
|
The container that handles the node events. |
The CustomResourceDefinition
resource in Kubernetes uses the
OpenAPI Specification version 2 to specify the schema of the resource
defined. The Kubernetes API outright rejects the resources that do not
pass this schema validation.
The language of the schema, however, is not expressive enough to define a specific validation logic that may be needed for a given resource. For this purpose, Kubernetes enables the extension of its API with Dynamic Admission Control.
For the OpenStackDeployment (OsDpl) CR the ValidatingAdmissionWebhook
is a natural choice. It is deployed as part of OpenStack Controller
by default and performs specific extended validations when an OsDpl CR is
created or updated.
The inexhaustive list of additional validations includes:
Deny the OpenStack version downgrade
Deny the OpenStack version skip-level upgrade
Deny the OpenStack master version deployment
Deny upgrade to the OpenStack master version
Deny upgrade if any part of an OsDpl CR specification changes along with the OpenStack version
Under specific circumstances, it may be viable to disable the admission controller, for example, when you attempt to deploy or upgrade to the master version of OpenStack.
Warning
Mirantis does not support MOS deployments performed without the OpenStackDeployment admission controller enabled. Disabling of the OpenStackDeployment admission controller is only allowed in staging non-production environments.
To disable the admission controller, ensure that the following structures and
values are present in the openstack-controller
HelmBundle resource:
apiVersion: lcm.mirantis.com/v1alpha1
kind: HelmBundle
metadata:
  name: openstack-operator
  namespace: osh-system
spec:
  releases:
  - name: openstack-operator
    values:
      admission:
        enabled: false
At that point, all safeguards except for those expressed by the CR definition are disabled.
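As a quick verification sketch, you can list the validating webhook configurations registered in the cluster to confirm whether the admission controller is still active; the exact name of the OpenStackDeployment webhook object is deployment-specific:
kubectl get validatingwebhookconfigurations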
The resource of kind OpenStackDeployment
(OsDpl) is a custom resource
(CR) defined by a resource of kind CustomResourceDefinition
. This section
is intended to provide a detailed overview of the OsDpl configuration including
the definition of its main elements as well as the configuration of extra
OpenStack services that do not belong to standard deployment profiles.
The detailed information about schema of an OpenStackDeployment
(OsDpl)
custom resource can be obtained by running:
kubectl get crd openstackdeployments.lcm.mirantis.com -oyaml
The definition of a particular OpenStack deployment can be obtained by running:
kubectl -n openstack get osdpl -oyaml
apiVersion: lcm.mirantis.com/v1alpha1
kind: OpenStackDeployment
metadata:
  name: openstack-cluster
  namespace: openstack
spec:
  openstack_version: train
  preset: compute
  size: tiny
  internal_domain_name: cluster.local
  public_domain_name: it.just.works
  features:
    ssl:
      public_endpoints:
        api_cert: |-
          # Update server certificate content
        api_key: |-
          # Update server private key content
        ca_cert: |-
          # Update CA certificate content
    neutron:
      tunnel_interface: ens3
      external_networks:
      - physnet: physnet1
        interface: veth-phy
        bridge: br-ex
        network_types:
        - flat
        vlan_ranges: null
        mtu: null
      floating_network:
        enabled: False
    nova:
      live_migration_interface: ens3
      images:
        backend: local
For the detailed description of the OsDpl main elements, see the tables below:
Element |
Description |
---|---|
|
Specifies the version of the Kubernetes API that is used to create this object. |
|
Specifies the kind of the object. |
|
Specifies the name of metadata. Should be set in compliance with the Kubernetes resource naming limitations. |
|
Specifies the metadata namespace. While it is technically possible to deploy OpenStack on top of Kubernetes in a namespace other than openstack, Mirantis does not recommend such a configuration. Warning Both OpenStack and Kubernetes platforms provide resources to applications. When OpenStack is running on top of Kubernetes, Kubernetes is completely unaware of OpenStack-native workloads, such as virtual machines. For better results and stability, Mirantis recommends using a dedicated Kubernetes cluster for OpenStack, so that OpenStack and auxiliary services, Ceph, and StackLight are the only Kubernetes applications running in the cluster. |
|
Contains the data that defines the OpenStack deployment and configuration. It has both high-level and low-level sections. The very basic values that must be provided include:
spec:
  openstack_version:
  preset:
  size:
  internal_domain_name:
  public_domain_name:
For the detailed description of the |
Element |
Description |
---|---|
|
Specifies the OpenStack release to deploy. |
|
String that specifies the name of the
Every supported deployment profile incorporates an OpenStack preset. Refer to sup-profiles for the list of possible values. |
|
String that specifies the size category for the OpenStack cluster. The size category defines the internal configuration of the cluster, such as the number of replicas for service workers, timeouts, and so on. The list of supported sizes includes:
|
|
Specifies the internal DNS name used inside the Kubernetes cluster on top of which the OpenStack cloud is deployed. |
|
Specifies the public DNS name for OpenStack services. This is a base DNS name that must be accessible and resolvable by API clients of your OpenStack cloud. It will be present in the OpenStack endpoints as presented by the OpenStack Identity service catalog. The TLS certificates used by the OpenStack services (see below) must also be issued to this DNS name. |
|
Contains the top-level collections of settings for the OpenStack deployment that potentially target several OpenStack services. This is the section where customizations should take place. For an example of a minimal resource with the defined features, see the Example of an OsDpl CR of minimal configuration. |
|
Contains a list of extra OpenStack services to deploy. Extra OpenStack
services are services that are not included in |
|
Contains the content of SSL/TLS certificates (server, key, CA bundle) used to enable a secure communication to public OpenStack API services. These certificates must be issued to the DNS domain specified in the
|
|
Defines the name of the NIC device on the actual host that will be used for Neutron. We recommend setting up your Kubernetes hosts in such a way that networking is configured identically on all of them, and names of the interfaces serving the same purpose or plugged into the same network are consistent across all physical nodes. |
|
Defines the list of IPs of DNS servers that are accessible from virtual networks. Used as default DNS servers for VMs. |
|
Contains the data structure that defines external (provider) networks on top of which the Neutron networking will be created. |
|
If enabled, must contain the data structure defining the floating IP network that will be created for Neutron to provide external access to your Nova instances. |
|
Specifies the name of the NIC device on the actual host that will be used by Nova for the live migration of instances. We recommend setting up your Kubernetes hosts in such a way that networking is configured identically on all of them, and names of the interfaces serving the same purpose or plugged into the same network are consistent across all physical nodes. |
|
Specifies the object containing the Vault parameters for Barbican to connect to Vault. The list of supported options includes:
|
|
Defines the type of storage for Nova to use on the compute hosts for the images that back the instances. The list of supported options includes:
|
|
Defines parameters to connect to the Keycloak identity provider. |
|
Defines the domain-specific configuration and is useful for integration
with LDAP. An example of OsDpl with LDAP integration, which will create
a separate spec:
features:
keystone:
domain_specific_configuration:
enabled: true
domains:
- name: domain.with.ldap
config:
assignment:
driver: keystone.assignment.backends.sql.Assignment
identity:
driver: ldap
ldap:
chase_referrals: false
group_allow_create: false
group_allow_delete: false
group_allow_update: false
group_desc_attribute: description
group_id_attribute: cn
group_member_attribute: member
group_name_attribute: ou
group_objectclass: groupOfNames
page_size: 0
password: XXXXXXXXX
query_scope: sub
suffix: dc=mydomain,dc=com
url: ldap://ldap01.mydomain.com,ldap://ldap02.mydomain.com
user: uid=openstack,ou=people,o=mydomain,dc=com
user_allow_create: false
user_allow_delete: false
user_allow_update: false
user_enabled_attribute: enabled
user_enabled_default: false
user_enabled_invert: true
user_enabled_mask: 0
user_id_attribute: uid
user_mail_attribute: mail
user_name_attribute: uid
user_objectclass: inetOrgPerson
|
|
Specifies the Telemetry mode, which determines the permitted actions
for the Telemetry services. The only supported value is Caution To enable the Telemetry mode, the corresponding services
including the |
|
Specifies the standard logging levels for OpenStack services that
include the following, at increasing severity:
spec:
  features:
    logging:
      nova:
        level: DEBUG
|
|
Defines the list of custom OpenStack Dashboard themes. The content of the archive file with a theme depends on the level of customization and can include static files, Django templates, and other artifacts. For details, refer to the OpenStack official documentation: Customizing Horizon Themes.
spec:
  features:
    horizon:
      themes:
      - name: theme_name
        description: The brand new theme
        url: https://<path to .tgz file with the contents of custom theme>
        sha256summ: <SHA256 checksum of the archive above>
|
|
A low-level section that defines the base URI prefixes for images and binary artifacts. |
|
A low-level section that defines values that will be passed to all
OpenStack ( Structure example: spec:
artifacts:
common:
openstack:
values:
infra:
values:
|
|
A section of the lowest level, enables the definition of specific values to pass to specific Helm charts on a one-by-one basis: |
Warning
Mirantis does not recommend changing the default settings for
spec:artifacts
, spec:common
, and spec:services
elements.
Customizations can compromise the OpenStack deployment update and upgrade
processes.
Element |
Description |
---|---|
|
Contains information about the current status of an OpenStack deployment, which cannot be changed by the user. |
|
Specifies the current status of Helm releases that are managed by the OpenStack Operator. The possible values include:
An example of the children output:
children:
  openstack-block-storage: true
  openstack-compute: true
  openstack-coordination: true
  ...
|
|
Shows an overall status of all Helm releases. Shows |
|
Is the MD5 hash of the |
|
Contains the version of the OpenStack Operator that processes
the OsDpl resource. And, similarly to |
|
While
An example of the health output:
health:
  barbican:
    api:
      generation: 4
      status: Ready
    rabbitmq:
      generation: 1
      status: Ready
  cinder:
    api:
      generation: 4
      status: Ready
    backup:
      generation: 2
      status: Ready
    rabbitmq:
      generation: 1
      status: Ready
  ...
|
|
Contains the structure that is used by the Kopf library to store its internal data. |
Mirantis Container Cloud uses the Identity and access management (IAM) service for user and permission management. This section describes how you can integrate your OpenStack deployment with Keycloak through OpenID Connect.
To enable integration on the OpenStack side, define the following parameters
in your openstackdeployment
custom resource:
spec:
  features:
    keystone:
      keycloak:
        enabled: true
        url: <https://my-keycloak-instance>
        # optionally ssl cert validation might be disabled
        oidc:
          OIDCSSLValidateServer: false
          OIDCOAuthSSLValidateServer: false
The configuration above will trigger the creation of the os
client in
Keycloak. The role management and assignment should be configured separately
on a particular deployment.
The Bare metal (Ironic) service is an extra OpenStack service that can be deployed by the OpenStack Operator. This section provides the baremetal-specific configuration options of the OsDpl resource.
To install bare metal services, add the baremetal
keyword to the
spec:features:services
list:
spec:
  features:
    services:
    - baremetal
Note
All bare metal services are scheduled to the nodes with the
openstack-control-plane: enabled
label.
To provision a user image onto a bare metal server, Ironic boots a node with
a ramdisk image. Depending on the node’s deploy interface and hardware, the
ramdisk may require different drivers (agents). MOS provides tinyIPA-based
ramdisk images and uses the direct
deploy interface with the ipmitool
power interface.
Example of agent_images
configuration:
spec:
  features:
    ironic:
      agent_images:
        base_url: https://binary.mirantis.com/openstack/bin/ironic/tinyipa
        initramfs: tinyipa-stable-ussuri-20200617101427.gz
        kernel: tinyipa-stable-ussuri-20200617101427.vmlinuz
Since the bare metal node hardware may require additional drivers, you may need to build a deploy ramdisk for particular hardware. For more information, see Ironic Python Agent Builder. Be sure to create a ramdisk image with the version of Ironic Python Agent appropriate for your OpenStack release.
Ironic supports the flat
and multitenancy
networking modes.
The flat
networking mode assumes that all bare metal nodes are
pre-connected to a single network that cannot be changed during the
virtual machine provisioning.
Example of the OsDpl resource illustrating the configuration for the flat
network mode:
spec:
  features:
    services:
    - baremetal
    neutron:
      external_networks:
      - bridge: ironic-pxe
        interface: <baremetal-interface>
        network_types:
        - flat
        physnet: ironic
        vlan_ranges: null
    ironic:
      # The name of neutron network used for provisioning/cleaning.
      baremetal_network_name: ironic-provisioning
      networks:
        # Neutron baremetal network definition.
        baremetal:
          physnet: ironic
          name: ironic-provisioning
          network_type: flat
          external: true
          shared: true
          subnets:
          - name: baremetal-subnet
            range: 10.13.0.0/24
            pool_start: 10.13.0.100
            pool_end: 10.13.0.254
            gateway: 10.13.0.11
      # The name of interface where provision services like tftp and ironic-conductor
      # are bound.
      provisioning_interface: br-baremetal
The multitenancy
network mode uses the neutron
Ironic network
interface to share physical connection information with Neutron. This
information is handled by Neutron ML2 drivers when plugging a Neutron port
to a specific network. MOS supports the networking-generic-switch
Neutron
ML2 driver out of the box.
Example of the OsDpl resource illustrating the configuration for the
multitenancy
network mode:
spec:
  features:
    services:
    - baremetal
    neutron:
      tunnel_interface: ens3
      external_networks:
      - physnet: physnet1
        interface: <physnet1-interface>
        bridge: br-ex
        network_types:
        - flat
        vlan_ranges: null
        mtu: null
      - physnet: ironic
        interface: <physnet-ironic-interface>
        bridge: ironic-pxe
        network_types:
        - vlan
        vlan_ranges: 1000:1099
    ironic:
      # The name of interface where provision services like tftp and ironic-conductor
      # are bound.
      provisioning_interface: <baremetal-interface>
      baremetal_network_name: ironic-provisioning
      networks:
        baremetal:
          physnet: ironic
          name: ironic-provisioning
          network_type: vlan
          segmentation_id: 1000
          external: true
          shared: false
          subnets:
          - name: baremetal-subnet
            range: 10.13.0.0/24
            pool_start: 10.13.0.100
            pool_end: 10.13.0.254
            gateway: 10.13.0.11
Caution
This feature is available starting from MOS Ussuri Update.
Depending on the use case, you may need to configure the same application components differently on different hosts. MOS enables you to easily perform the required configuration through node-specific overrides at the OpenStack Controller side.
The limitation of using the node-specific overrides is that they override only the configuration settings, while other components, such as startup scripts, may need to be reconfigured as well.
Caution
The overrides have been implemented in a similar way to the OpenStack node and node label specific DaemonSet configurations. However, the OpenStack Controller node-specific settings conflict with the upstream OpenStack node and node label specific DaemonSet configurations. Therefore, we do not recommend configuring node and node label overrides.
The node-specific settings are activated through the spec:nodes
section of the OsDpl CR. The spec:nodes
section contains the following
subsections:
features
- implements overrides for a limited subset of fields and is
constructed similarly to spec::features
services
- similarly to spec::services
, enables you to override
settings in general for the components running as DaemonSets.
Example configuration:
spec:
  nodes:
    <NODE-LABEL>::<NODE-LABEL-VALUE>:
      features:
        # Detailed information about features might be found at
        # openstack_controller/admission/validators/nodes/schema.yaml
      services:
        <service>:
          <chart>:
            <chart_daemonset_name>:
              values:
                # Any value from specific helm chart
See also
OpenStack and auxiliary services are running as containers in the kind: Pod
Kubernetes resources. All long-running services are governed by one of
the ReplicationController-enabled
Kubernetes resources, which include
either kind: Deployment
, kind: StatefulSet
, or kind: DaemonSet
.
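For example, assuming the openstack namespace used throughout this guide, you can list these controller resources as follows:
kubectl -n openstack get deployments,statefulsets,daemonsets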
The placement of the services is mostly governed by the Kubernetes node labels. The labels affecting the OpenStack services include:
openstack-control-plane=enabled
- the node hosting most of the OpenStack
control plane services.
openstack-compute-node=enabled
- the node serving as a hypervisor for
Nova. The virtual machines with tenant workloads are created there.
openvswitch=enabled
- the node hosting Neutron L2 agents and Open vSwitch
pods that manage L2 connection of the OpenStack networks.
openstack-gateway=enabled
- the node hosting Neutron L3, Metadata and
DHCP agents, Octavia Health Manager, Worker and Housekeeping components.
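For example, you can check which nodes carry a particular label and assign a missing one using standard kubectl commands; the node name below is a placeholder:
kubectl get nodes -l openstack-compute-node=enabled
kubectl label node <NODE-NAME> openvswitch=enabled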
Note
OpenStack is an infrastructure management platform. Mirantis OpenStack for Kubernetes (MOS) uses Kubernetes mostly for orchestration and dependency isolation. As a result, multiple OpenStack services are running as privileged containers with host PIDs and Host Networking enabled. You must ensure that at least the user with the credentials used by Helm/Tiller (administrator) is capable of creating such Pods.
Service |
Description |
---|---|
Storage |
While the underlying Kubernetes cluster is configured to use Ceph CSI for providing persistent storage for container workloads, for some types of workloads such networked storage is suboptimal due to latency. This is why the separate |
Database |
A single WSREP (Galera) cluster of MariaDB is deployed as the SQL
database to be used by all OpenStack services. It uses the storage class
provided by Local Volume Provisioner to store the actual database files.
The service is deployed as |
Messaging |
RabbitMQ is used as a messaging bus between the components of the OpenStack services. A separate instance of RabbitMQ is deployed for each OpenStack service that needs a messaging bus for intercommunication between its components. An additional, separate RabbitMQ instance is deployed to serve as a notification message bus for OpenStack services to post their own notifications and listen to notifications from other services. StackLight also uses this message bus to collect notifications for monitoring purposes. Each RabbitMQ instance is a single node and is deployed as
|
Caching |
A single multi-instance of the Memcached service is deployed to be used by all OpenStack services that need caching, which are mostly HTTP API services. |
Coordination |
A separate instance of etcd is deployed to be used by Cinder, which requires Distributed Lock Management for coordination between its components. |
Ingress |
Is deployed as |
Image pre-caching |
A special This is especially useful for containers used in |
Service |
Description |
---|---|
Identity (Keystone) |
Uses MySQL back end by default.
|
Image (Glance) |
Supported back end is RBD (Ceph is required). |
Volume (Cinder) |
Supported back end is RBD (Ceph is required). |
Network (Neutron) |
Supported back end is Open vSwitch. Tungsten Fabric is available as technical preview. |
Placement |
|
Compute (Nova) |
The supported hypervisor is QEMU/KVM through the libvirt library. |
Dashboard (Horizon) |
|
DNS (Designate) |
Supported back end is PowerDNS. |
Load Balancer (Octavia) |
|
RADOS Gateway Object Storage (SWIFT) Available since MOS Ussuri Update |
Contains the object store and provides a RADOS Gateway Swift API that
is compatible with the OpenStack Swift API. To enable the service,
add the object-storage service to the spec:features:services list:
spec:
  features:
    services:
    - object-storage
To create the RADOS Gateway pool in Ceph, proceed with Container Cloud Operations Guide: Enable Ceph RGW Object Storage |
Orchestration (Heat) |
|
Key Manager (Barbican) |
The supported back ends include:
If the Vault back end is used, configure it properly using the following parameters:
spec:
  features:
    barbican:
      backends:
        vault:
          enabled: true
          approle_role_id: <APPROLE_ROLE_ID>
          approle_secret_id: <APPROLE_SECRET_ID>
          vault_url: <VAULT_SERVER_URL>
          use_ssl: false
Note Since MOS does not currently support the Vault SSL
encryption, the |
Tempest |
Can be added to the list of services in spec:features:services:
spec:
  features:
    services:
    - tempest
|
Telemetry |
Telemetry services include alarming (aodh), event storage (Panko),
metering (Ceilometer), and metric (Gnocchi). All services should be
enabled together through the list of services to be deployed in the
spec:
  features:
    services:
    - alarming
    - event
    - metering
    - metric
|
A complete setup of a MariaDB Galera cluster for OpenStack is illustrated in the following image:
MariaDB server pods are running a Galera multi-master cluster. Client
requests are forwarded by the Kubernetes mariadb
service to the
mariadb-server
pod that has the primary
label. Other pods from
the mariadb-server
StatefulSet have the backup
label. Labels are
managed by the mariadb-controller
pod.
The MariaDB controller periodically checks the readiness of the
mariadb-server
pods and sets the primary
label on a pod if the following
requirements are met:
The primary
label has not already been set on the pod.
The pod is in the ready state.
The pod is not being terminated.
The pod name has the lowest integer suffix among other ready pods in
the StatefulSet. For example, between mariadb-server-1
and
mariadb-server-2
, the pod with the mariadb-server-1
name is
preferred.
Otherwise, the MariaDB controller sets the backup
label. This means that
all SQL requests are passed only to one node while the other two nodes are in
the backup state and replicate the state from the primary node.
The MariaDB clients connect to the mariadb
service.
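To see how the primary and backup labels are currently distributed, you can inspect the MariaDB server pods. The application=mariadb selector below is an assumption about how the pods are labeled, so adjust it to your deployment:
kubectl -n openstack get pods -l application=mariadb --show-labels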
The integration between Ceph and OpenStack controllers is implemented
through the shared Kubernetes openstack-ceph-shared
namespace.
Both controllers have access to this namespace to read and write
the Kubernetes kind: Secret
objects.
As Ceph is the required and only supported back end for several OpenStack
services, all necessary Ceph pools must be specified in the configuration
of the kind: MiraCeph
custom resource as part of the deployment.
Once the Ceph cluster is deployed, the Ceph controller posts the
information required by the OpenStack services to be properly configured
as a kind: Secret
object into the openstack-ceph-shared
namespace.
The OpenStack controller watches this namespace. Once the corresponding
secret is created, the OpenStack controller transforms this secret to the
data structures expected by the OpenStack-Helm charts. Even if an OpenStack
installation is triggered at the same time as a Ceph cluster deployment, the
OpenStack controller halts the deployment of the OpenStack services that
depend on Ceph availability until the secret in the shared namespace is
created by the Ceph controller.
For the configuration of Ceph RADOS Gateway as an OpenStack Object
Storage, the reverse process takes place. The OpenStack controller waits
for the OpenStack-Helm to create a secret with OpenStack Identity
(Keystone) credentials that RADOS Gateway must use to validate the
OpenStack Identity tokens, and posts it back to the same
openstack-ceph-shared
namespace in the format suitable for
consumption by the Ceph controller. The Ceph controller then reads this
secret and reconfigures RADOS Gateway accordingly.
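To observe this exchange, you can list the objects in the shared namespace once the controllers have posted them; the secret names themselves are deployment-specific:
kubectl -n openstack-ceph-shared get secrets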
StackLight integration with OpenStack includes automatic discovery of RabbitMQ
credentials for notifications and OpenStack credentials for OpenStack API
metrics. For details, see the
openstack.rabbitmq.credentialsConfig
and
openstack.telegraf.credentialsConfig
parameters description in
MOS Operations Guide: StackLight configuration parameters.
The levels of integration between OpenStack and Tungsten Fabric (TF) include:
The integration between the OpenStack and TF controllers is
implemented through the shared Kubernetes openstack-tf-shared
namespace.
Both controllers have access to this namespace to read and write the Kubernetes
kind: Secret
objects.
The OpenStack controller posts the data into the openstack-tf-shared
namespace required by the TF services. The TF controller watches this
namespace. Once an appropriate secret is created, the TF controller obtains it
into the internal data structures for further processing.
The OpenStack controller includes the following data for the TF controller:
tunnel_interface
Name of the network interface for the TF data plane. This interface is used by TF for the encapsulated traffic for overlay networks.
Keystone Administrator credentials and an up-and-running IAM service are required for the TF controller to initiate the deployment process.
Required for the TF vRouter agent service.
Also, the OpenStack Controller watches the openstack-tf-shared
namespace
for the vrouter_port
parameter that defines the vRouter port number and
passes it to the nova-compute
pod.
The list of the OpenStack services that are integrated with TF through their API includes:
neutron-server
- integration is provided by the
contrail-neutron-plugin
component that is used by the neutron-server
service for transformation of the API calls to the TF API compatible
requests.
nova-compute
- integration is provided by the
contrail-nova-vif-driver
and contrail-vrouter-api
packages used
by the nova-compute
service for interaction with the TF vRouter when plugging instances into
the network ports.
octavia-api
- integration is provided by the Octavia TF Driver that
enables you to use OpenStack CLI and Horizon for operations with load
balancers. See Tungsten Fabric integration with OpenStack Octavia for details.
Warning
TF is not integrated with the following OpenStack services:
DNS service (Designate)
Key management (Barbican)
Depending on the size of an OpenStack environment and the components that you use, you may want to have a single or multiple network interfaces, as well as run different types of traffic on a single or multiple VLANs.
This section provides the recommendations for planning the network configuration and optimizing the cloud performance.
The image below illustrates the recommended physical networks layout for a Mirantis OpenStack for Kubernetes (MOS) deployment with Ceph.
The image below illustrates the Ceph storage physical network layout.
When planning your OpenStack environment, consider what types of traffic your workloads generate and design your network accordingly. If you anticipate that certain types of traffic, such as storage replication, will likely consume a significant amount of network bandwidth, you may want to move that traffic to a dedicated network interface to avoid performance degradation.
A Mirantis OpenStack for Kubernetes (MOS) deployment typically requires the following networks:
Network |
Description |
---|---|
Common/PXE network |
The network used for the provisioning of bare metal servers. |
Management network |
The network used for managing of bare metal servers. |
Kubernetes workloads network |
The routable network for communication between containers in Kubernetes. |
Storage access network (Ceph) |
The network used for accessing the Ceph storage. We recommend placing it on a dedicated hardware interface. |
Storage replication network (Ceph) |
The network used for the storage replication (Ceph). We recommend placing it on a dedicated hardware interface to ensure low latency and fast access. |
External networks (MetalLB) |
The routable network used for external IP addresses of the Kubernetes LoadBalancer services managed by MetalLB. |
The MOS deployment additionally requires the following networks:
Service name |
Network |
Description |
---|---|---|
Networking |
Provider networks |
Typically, a routable network used to provide the external access to OpenStack instances (a floating network). Can be used by the OpenStack services such as Ironic, Manila, and others, to connect their management resources. |
Networking |
Overlay networks (virtual networks) |
The network used to provide isolated, secure tenant networks with the help of a tunneling mechanism (VLAN/GRE/VXLAN). If VXLAN or GRE encapsulation is used, IP address assignment is required on interfaces at the node level. |
Compute |
Live migration network |
The network used by the OpenStack compute service (Nova) to transfer data during live migration. Depending on the cloud needs, it can be placed on a dedicated physical network so that it does not affect other networks during live migration. The IP address assignment is required on interfaces at the node level. |
The way the logical networks described above map to physical networks and interfaces on nodes depends on the cloud size and configuration. We recommend placing the OpenStack networks on a dedicated physical interface (bond) that is not shared with the storage and Kubernetes management networks to minimize their influence on each other.
To improve the goodput, we recommend that you enable jumbo frames where possible. Jumbo frames have to be enabled along the whole path that the packets traverse. If one of the network components cannot handle jumbo frames, the network path uses the smallest MTU.
To provide fault tolerance against a single NIC failure, we recommend using link aggregation, such as bonding. Link aggregation is useful for linear scaling of bandwidth, load balancing, and fault protection. Depending on the hardware equipment, different types of bonds might be supported. Use multi-chassis link aggregation as it provides fault tolerance at the device level. For example, MLAG on Arista equipment or vPC on Cisco equipment.
The Linux kernel supports the following bonding modes:
active-backup
balance-xor
802.3ad
(LACP)
balance-tlb
balance-alb
Since LACP is the IEEE standard 802.3ad
supported by the majority of
network platforms, we recommend using this bonding mode.
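As an illustration only, a host network configuration with an LACP (802.3ad) bond and jumbo frames might look like the following netplan sketch. The interface names, VLAN ID, addresses, and MTU value are placeholders and are not part of the MOS product configuration:
network:
  version: 2
  ethernets:
    eth1: {}
    eth2: {}
  bonds:
    bond0:
      interfaces: [eth1, eth2]
      mtu: 9000                      # jumbo frames, effective only if supported end to end
      parameters:
        mode: 802.3ad                # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4
  vlans:
    bond0.100:                       # example VLAN on top of the bond
      id: 100
      link: bond0
      mtu: 9000
      addresses: [192.168.100.10/24]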
Mirantis OpenStack for Kubernetes (MOS) uses Ceph as a distributed storage system for block and object storage. For more information, refer to Mirantis Container Cloud Reference Architecture: Storage.
StackLight is the logging, monitoring, and alerting solution that provides a single pane of glass for cloud maintenance and day-to-day operations as well as offers critical insights into cloud health including operational information about the components deployed with Mirantis OpenStack for Kubernetes (MOS). StackLight is based on Prometheus, an open-source monitoring solution and a time series database, and Elasticsearch, the logs and notifications storage.
Mirantis OpenStack for Kubernetes (MOS) deploys the StackLight stack as a release of a Helm chart that contains the helm-controller and HelmBundle custom resources. The StackLight HelmBundle consists of a set of Helm charts describing the StackLight components. Apart from the OpenStack-specific components below, StackLight also includes the components described in Mirantis Container Cloud Reference Architecture: Deployment architecture.
During the StackLight deployment, you can define the HA or non-HA StackLight architecture type. For details, see Mirantis Container Cloud Reference Architecture: StackLight database modes.
StackLight component |
Description |
---|---|
Prometheus native exporters and endpoints |
Export the existing metrics as Prometheus metrics and include
|
Telegraf OpenStack plugin |
Collects and processes the OpenStack metrics. |
StackLight measures, analyzes, and reports in a timely manner about failures that may occur in the following Mirantis OpenStack for Kubernetes (MOS) components and their sub-components. Apart from the components below, StackLight also monitors the components listed in Mirantis Container Cloud Reference Architecture: Monitored components.
libvirt
Memcached
MariaDB
NTP
OpenStack (Barbican, Cinder, Designate, Glance, Heat, Horizon, Ironic, Keystone, Neutron, Nova, Octavia)
Open vSwitch
RabbitMQ
OpenStack SSL certificates
See also
Tungsten Fabric provides basic L2/L3 networking to an OpenStack environment running on the MKE cluster and includes the IP address management, security groups, floating IP addresses, and routing policies functionality. Tungsten Fabric is based on overlay networking, where all virtual machines are connected to a virtual network with encapsulation (MPLSoGRE, MPLSoUDP, VXLAN). This enables you to separate the underlay Kubernetes management network. A workload requires an external gateway, such as a hardware EdgeRouter or a simple gateway to route the outgoing traffic.
The Tungsten Fabric vRouter uses different gateways for the control and data planes.
This section contains a summary of the Tungsten Fabric upstream features and use cases not supported in MOS, features and use cases offered as Technology Preview in the current product release if any, and known limitations of Tungsten Fabric in integration with other product components.
Feature or use case |
Status |
Description |
---|---|---|
Tungsten Fabric monitoring |
Not supported |
The integration between Tungsten Fabric and StackLight has not been implemented yet |
Automatic generation of network port records in DNSaaS (Designate) |
Not supported |
As a workaround, you can use the Tungsten Fabric built-in DNS service that enables virtual machines to resolve each other's names |
Secret management (Barbican) |
Not supported |
It is not possible to use the certificates stored in Barbican to terminate HTTPS on a load balancer in a Tungsten Fabric deployment |
Role Based Access Control (RBAC) for Neutron objects |
Not supported |
|
Advanced Tungsten Fabric features |
Not supported |
Tungsten Fabric does not support the following upstream advanced features:
|
Technical Preview |
DPDK Available since MOS Ussuri Update |
See also
All services of Tungsten Fabric are delivered as separate containers, which are deployed by the Tungsten Fabric Operator (TFO). Each container has an INI-based configuration file that is available on the host system. The configuration file is generated automatically upon the container start and is based on environment variables provided by the TFO through Kubernetes ConfigMaps.
The main Tungsten Fabric containers run with the host
network as
DaemonSets
, without using the Kubernetes networking layer. The services
listen directly on the host
network interface.
The following diagram describes the minimum production installation of Tungsten Fabric with a Mirantis OpenStack for Kubernetes (MOS) deployment.
This section describes the Tungsten Fabric services and their distribution across the Mirantis OpenStack for Kubernetes (MOS) deployment.
The Tungsten Fabric services run mostly as DaemonSets
in a separate
container for each service. The deployment and update processes are managed by
the Tungsten Fabric operator. However, Kubernetes manages the probe checks and
restart of broken containers.
The following tables describe the Tungsten Fabric services:
Configuration and control services in Tungsten Fabric controller containers
Tungsten Fabric plugin services on the OpenStack controller nodes
Service name |
Service description |
---|---|
|
Exposes a REST-based interface for the Tungsten Fabric API. |
|
Collects data of the Tungsten Fabric configuration processes and sends
it to the Tungsten Fabric |
|
Communicates with the cluster gateways using BGP and with the vRouter agents using XMPP, as well as redistributes appropriate networking information. |
|
Collects the Tungsten Fabric controller process data and sends
this information to the Tungsten Fabric |
|
Manages physical networking devices using |
|
Using the |
|
The customized Berkeley Internet Name Domain (BIND) daemon of
Tungsten Fabric that manages DNS zones for the |
|
Listens to configuration changes performed by a user and generates corresponding system configuration objects. In multi-node deployments, it works in the active-backup mode. |
|
Listens to configuration changes of |
|
Consists of the |
Service name |
Service description |
---|---|
|
Evaluates and manages the alarms rules. |
|
Provides a REST API to interact with the Cassandra analytics database. |
|
Collects all Tungsten Fabric analytics process data and sends
this information to the Tungsten Fabric |
|
Provisions the init model if needed. Collects data of the |
|
Collects and analyzes data from all Tungsten Fabric services. |
|
Handles the queries to access data from the Cassandra database. |
|
Receives the authorization and configuration of the physical routers
from the |
|
Reads the SNMP information from the physical router user-visible entities (UVEs), creates a neighbor list, and writes the neighbor information to the physical router UVEs. The Tungsten Fabric web UI uses the neighbor list to display the physical topology. |
Service name |
Service description |
---|---|
|
Connects to the Tungsten Fabric controller container and the Tungsten Fabric DNS system using the Extensible Messaging and Presence Protocol (XMPP). |
|
Collects the supervisor |
Service name |
Service description |
---|---|
|
|
|
The Kubernetes operator that enables the Cassandra clusters creation and management. |
|
Handles the messaging bus and generates alarms across the Tungsten Fabric analytics containers. |
|
The Kubernetes operator that enables Kafka clusters creation and management. |
|
Stores the physical router UVE storage and serves as a messaging bus for event notifications. |
|
The Kubernetes operator that enables Redis clusters creation and management. |
|
Holds the active-backup status for the |
|
The Kubernetes operator that enables ZooKeeper clusters creation and management. |
|
Exchanges messages between API servers and original request senders. |
|
The Kubernetes operator that enables RabbitMQ clusters creation and management. |
Service name |
Service description |
---|---|
|
The Neutron server that includes the Tungsten Fabric plugin. |
|
The Octavia API that includes the Tungsten Fabric Octavia driver. |
The Tungsten Fabric operator (TFO) is based on the operator SDK project. The operator SDK is a framework that uses the controller-runtime library to make writing operators easier by providing:
High-level APIs and abstractions to write the operational logic more intuitively.
Tools for scaffolding and code generation to bootstrap a new project fast.
Extensions to cover common operator use cases.
The TFO deploys the following sub-operators. Each sub-operator handles a separate part of a TF deployment:
Sub-operator |
Description |
---|---|
TFControl |
Deploys the Tungsten Fabric control services, such as:
|
TFConfig |
Deploys the Tungsten Fabric configuration services, such as:
|
TFAnalytics |
Deploys the Tungsten Fabric analytics services, such as:
|
TFVrouter |
Deploys a vRouter on each compute node with the following services:
|
TFWebUI |
Deploys the following web UI services:
|
TFTool |
Deploys the following tools to verify the TF deployment status:
|
TFTest |
An operator to run Tempest tests. |
Besides the sub-operators that deploy TF services, TFO uses operators to deploy and maintain third-party services, such as different types of storage, cache, message system, and so on. The following table describes all third-party operators:
Operator |
Description |
---|---|
cassandra-operator |
An upstream operator that automates the Cassandra HA storage operations for the configuration and analytics data. |
zookeeper-operator |
An upstream operator for deployment and automation of a ZooKeeper cluster. |
kafka-operator |
An operator for the Kafka cluster used by analytics services. |
redis-operator |
An upstream operator that automates the Redis cluster deployment and keeps it healthy. |
rabbitmq-operator |
An operator for the messaging system based on RabbitMQ. |
The following diagram illustrates a simplified TFO workflow:
This section describes the types of traffic and traffic flow directions in a Mirantis OpenStack for Kubernetes (MOS) cluster.
The following diagram illustrates all types of UI and API traffic in a MOS cluster, including the monitoring and OpenStack API traffic. The OpenStack Dashboard pod hosts Horizon and acts as a proxy for all other types of traffic. TLS termination is also performed for this type of traffic.
SDN or Tungsten Fabric traffic goes through the overlay Data network and processes east-west and north-south traffic for applications that run in a MOS cluster. This network segment typically contains tenant networks as separate MPLS-over-GRE and MPLS-over-UDP tunnels. The traffic load depends on the workload.
The control traffic between the Tungsten Fabric controllers, edge routers, and vRouters uses the XMPP with TLS and iBGP protocols. Both protocols produce low traffic that does not affect MPLS over GRE and MPLS over UDP traffic. However, this traffic is critical and must be reliably delivered. Mirantis recommends configuring higher QoS for this type of traffic.
The following diagram displays both MPLS over GRE/MPLS over UDP and iBGP and XMPP traffic examples in a MOS cluster:
The Tungsten Fabric vRouter provides data forwarding to an OpenStack tenant instance and reports statistics to the Tungsten Fabric analytics service. The Tungsten Fabric vRouter is installed on all OpenStack compute nodes. Mirantis OpenStack for Kubernetes (MOS) supports the kernel-based deployment of the Tungsten Fabric vRouter.
The vRouter agent acts as a local control plane. Each Tungsten Fabric vRouter agent is connected to at least two Tungsten Fabric controllers in an active-active redundancy mode. The Tungsten Fabric vRouter agent is responsible for all networking-related functions including routing instances, routes, and so on.
The Tungsten Fabric vRouter uses different gateways for the control and data planes. For example, the Linux system gateway is located on the management network, and the Tungsten Fabric gateway is located on the data plane network.
The following diagram illustrates the Tungsten Fabric kernel vRouter setup by the TF operator:
On the diagram above, the following types of network interfaces are used:
eth0
- for the management (PXE) network (eth1
and eth2
are the
slave interfaces of Bond0
)
Bond0.x
- for the MKE control plane network
Bond0.y
- for the MKE data plane network
MOS integrates Octavia with Tungsten Fabric through the OpenStack Octavia Driver, which uses the Tungsten Fabric HAProxy as a back end.
Octavia Tungsten Fabric Driver supports creation, update, and deletion operations with the following entities:
Load balancers
Note
For a load balancer creation operation, the driver supports only the vip-subnet-id argument; the vip-network-id argument is not supported. See the example after this list.
Listeners
Pools
Health monitors
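For example, creating a load balancer through the OpenStack CLI with an explicitly specified VIP subnet might look as follows; the load balancer name and subnet ID are placeholders:
openstack loadbalancer create --name lb1 --vip-subnet-id <SUBNET_ID>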
Octavia Tungsten Fabric Driver does not support the following functionality:
L7 load balancing capabilities, such as L7 policies, L7 rules, and others
Setting specific availability zones for load balancers and their resources
Use of the UDP protocol
Operations with Octavia quotas
Operations with Octavia flavors
Warning
Octavia Tungsten Fabric Driver enables you to manage the load balancer resources through the OpenStack CLI or OpenStack Horizon. Do not perform any operations on the load balancer resources through the Tungsten Fabric web UI because in this case the changes will not be reflected on the OpenStack side.