OsDpl standard configuration¶
Detailed information about the schema of the OpenStackDeployment (OsDpl) custom resource can be obtained by running:

```bash
kubectl get crd openstackdeployments.lcm.mirantis.com -oyaml
```

The definition of a particular OpenStack deployment can be obtained by running:

```bash
kubectl -n openstack get osdpl -oyaml
```
Example of an OsDpl CR of minimum configuration:
```yaml
apiVersion: lcm.mirantis.com/v1alpha1
kind: OpenStackDeployment
metadata:
  name: openstack-cluster
  namespace: openstack
spec:
  openstack_version: victoria
  preset: compute
  size: tiny
  internal_domain_name: cluster.local
  public_domain_name: it.just.works
  features:
    ssl:
      public_endpoints:
        api_cert: |-
          The public key certificate of the OpenStack public endpoints
          followed by the certificates of any intermediate certificate
          authorities, which establish a chain of trust up to the root
          CA certificate.
        api_key: |-
          The private key of the certificate for the OpenStack public
          endpoints. This key must match the public key used in api_cert.
        ca_cert: |-
          The public key certificate of the root certificate authority.
          If you do not have one, use the top-most intermediate
          certificate instead.
    neutron:
      tunnel_interface: ens3
      external_networks:
        - physnet: physnet1
          interface: veth-phy
          bridge: br-ex
          network_types:
            - flat
          vlan_ranges: null
          mtu: null
      floating_network:
        enabled: False
    nova:
      live_migration_interface: ens3
      images:
        backend: local
```
For the detailed description of the main OsDpl elements, see the sections below:
Main OsDpl elements¶
apiVersion¶
Specifies the version of the Kubernetes API that is used to create this object.
kind¶
Specifies the kind of the object.
metadata:name¶
Specifies the name of the object. The name must comply with the Kubernetes resource naming limitations.
metadata:namespace¶
Specifies the metadata namespace. While technically it is possible to deploy OpenStack on top of Kubernetes in a namespace other than openstack, such a configuration is not included in the MOSK system integration test plans. Therefore, Mirantis does not recommend such a scenario.
Warning
Both OpenStack and Kubernetes platforms provide resources to applications. When OpenStack is running on top of Kubernetes, Kubernetes is completely unaware of OpenStack-native workloads, such as virtual machines, for example.
For better results and stability, Mirantis recommends using a dedicated Kubernetes cluster for OpenStack, so that OpenStack and auxiliary services, Ceph, and StackLight are the only Kubernetes applications running in the cluster.
spec¶
Contains the data that defines the OpenStack deployment and configuration. It has both high-level and low-level sections.
The very basic values that must be provided include:

```yaml
spec:
  openstack_version:
  preset:
  size:
  public_domain_name:
```

For the detailed description of the spec subelements, see Spec OsDpl elements.
Spec OsDpl elements¶
openstack_version¶
Specifies the OpenStack release to deploy.
preset¶
String that specifies the name of the preset, a predefined configuration for the OpenStack cluster. A preset includes:
- A set of enabled services that includes virtualization, bare metal management, secret management, and others
- Major features provided by the services, such as VXLAN encapsulation of the tenant traffic
- Integration of services
Every supported deployment profile incorporates an OpenStack preset. Refer to Deployment profiles for the list of possible values.
size¶
String that specifies the size category for the OpenStack cluster. The size category defines the internal configuration of the cluster, such as the number of replicas for service workers, timeouts, and so on.
The list of supported sizes includes:
- tiny - for approximately 10 OpenStack compute nodes
- small - for approximately 50 OpenStack compute nodes
- medium - for approximately 100 OpenStack compute nodes
public_domain_name¶
Specifies the public DNS name for OpenStack services. This is a base DNS name that must be accessible and resolvable by API clients of your OpenStack cloud. It will be present in the OpenStack endpoints as presented by the OpenStack Identity service catalog.
The TLS certificates used by the OpenStack services (see below) must also be issued to this DNS name.
persistent_volume_storage_class¶
Specifies the Kubernetes storage class name used by services to create persistent volumes, for example, backups of MariaDB. If not specified, the storage class marked as default will be used.
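For example, to make services create their persistent volumes from a storage class named mirantis-ceph (a hypothetical class name used here for illustration):

```yaml
spec:
  # Assumes a storage class named "mirantis-ceph" exists in the cluster
  persistent_volume_storage_class: mirantis-ceph
```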
features¶
Contains the top-level collections of settings for the OpenStack deployment that potentially target several OpenStack services. This is the section where customizations should take place.
features:services¶
Contains a list of extra OpenStack services to deploy. Extra OpenStack services are services that are not included in the preset.
features:services:object-storage¶
Available since MOSK Ussuri Update
Enables the object storage and provides a RADOS Gateway Swift API that is compatible with the OpenStack Swift API. To enable the service, add object-storage to the service list:

```yaml
spec:
  features:
    services:
      - object-storage
```

To create the RADOS Gateway pool in Ceph, see Container Cloud Operations Guide: Enable Ceph RGW Object Storage.
features:services:instance-ha¶
Available since MOSK 21.2 TechPreview
Enables Masakari, the OpenStack service that ensures high availability of instances running on a host. To enable the service, add instance-ha to the service list:

```yaml
spec:
  features:
    services:
      - instance-ha
```
features:services:tempest¶
Enables tests against a deployed OpenStack cloud:

```yaml
spec:
  features:
    services:
      - tempest
```
features:ssl¶
Contains the content of the SSL/TLS certificates (server certificate, private key, CA bundle) used to enable secure communication with public OpenStack API services. These certificates must be issued for the DNS domain specified in the public_domain_name field.
features:neutron:tunnel_interface¶
Defines the name of the NIC device on the actual host that will be used for Neutron.
We recommend setting up your Kubernetes hosts in such a way that networking is configured identically on all of them, and names of the interfaces serving the same purpose or plugged into the same network are consistent across all physical nodes.
features:neutron:dns_servers¶
Defines the list of IP addresses of DNS servers that are accessible from virtual networks. Used as default DNS servers for VMs.
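For example, assuming two upstream resolvers (the addresses below are illustrative):

```yaml
spec:
  features:
    neutron:
      dns_servers:
        - 8.8.8.8
        - 1.1.1.1
```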
features:neutron:external_networks¶
Contains the data structure that defines external (provider) networks on top of which the Neutron networking will be created.
features:neutron:floating_network¶
If enabled, must contain the data structure that defines the floating IP network to be created for Neutron to provide external access to your Nova instances.
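A sketch of what such a definition might look like. The minimal configuration above shows only the enabled flag; the nested keys below (physnet and the subnet block) are assumptions based on a typical provider-network definition, not confirmed by this document:

```yaml
spec:
  features:
    neutron:
      floating_network:
        enabled: true
        # The keys below are hypothetical; verify them against the CRD schema
        physnet: physnet1
        subnet:
          range: 10.11.12.0/24
          pool_start: 10.11.12.100
          pool_end: 10.11.12.200
          gateway: 10.11.12.1
```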
features:nova:live_migration_interface¶
Specifies the name of the NIC device on the actual host that will be used by Nova for the live migration of instances.
We recommend setting up your Kubernetes hosts in such a way that networking is configured identically on all of them, and names of the interfaces serving the same purpose or plugged into the same network are consistent across all physical nodes.
Also, set the option to vhost0 in the following cases:
- The Neutron service uses Tungsten Fabric.
- Nova migrates instances through the interface specified by the Neutron tunnel_interface parameter.
features:nova:images:backend¶
Defines the type of storage for Nova to use on the compute hosts for the images that back the instances.
The list of supported options includes:
- local - the local storage is used. The pros include faster operation and independence of the failure domain from external storage. The cons include consumption of local disk space and less performant and robust live migration with block migration.
- ceph - instance images are stored in a Ceph pool shared across all Nova hypervisors. The pros include faster instance start and faster and more robust live migration. The cons include considerably slower I/O performance and direct dependency of workload operations on Ceph cluster availability and performance.
- lvm Available since MOS 21.2, TechPreview - instance images and ephemeral images are stored on a local logical volume. If specified, features:nova:images:lvm:volume_group must be set to an available LVM volume group, nova-vol by default. For details, see Enable LVM ephemeral storage.
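For example, to enable the LVM back end with the default volume group named in the description above:

```yaml
spec:
  features:
    nova:
      images:
        backend: lvm
        lvm:
          # Must be an LVM volume group available on the compute hosts
          volume_group: nova-vol
```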
features:barbican:backends:vault¶
Specifies the object containing the parameters used to connect Barbican to Vault. The list of supported options includes:
- enabled - boolean parameter indicating that the Vault back end is enabled.
- approle_role_id - Vault AppRole ID.
- approle_secret_id - secret ID created for the AppRole.
- vault_url - URL of the Vault server.
- use_ssl - enables the SSL encryption. Since MOSK does not currently support the Vault SSL encryption, the use_ssl parameter should be set to false.
- kv_mountpoint TechPreview - optional, specifies the mountpoint of a Key-Value store in Vault to use.
- namespace TechPreview - optional, specifies the Vault namespace to use with all requests to Vault.
Note
The Vault namespaces feature is available only in Vault Enterprise.
Note
Vault namespaces are supported only starting from the OpenStack Victoria release.
If the Vault back end is used, configure it properly using the following parameters:
```yaml
spec:
  features:
    barbican:
      backends:
        vault:
          enabled: true
          approle_role_id: <APPROLE_ROLE_ID>
          approle_secret_id: <APPROLE_SECRET_ID>
          vault_url: <VAULT_SERVER_URL>
          use_ssl: false
```

Note
Since MOSK does not currently support the Vault SSL encryption, set the use_ssl parameter to false.
features:keystone:keycloak¶
Defines parameters to connect to the Keycloak identity provider. For details, see Integration with Identity Access Management (IAM).
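A minimal, hypothetical sketch of what this subsection might look like. The enabled and url parameter names are assumptions not confirmed by this document; refer to Integration with Identity Access Management (IAM) for the authoritative schema:

```yaml
spec:
  features:
    keystone:
      keycloak:
        # Hypothetical parameters; verify against the IAM integration guide
        enabled: true
        url: https://keycloak.it.just.works
```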
features:keystone:domain_specific_configuration¶
Defines the domain-specific configuration and is useful for integration with LDAP. An example of an OsDpl CR with LDAP integration, which will create a separate domain.with.ldap domain and configure it to use LDAP as an identity driver:

```yaml
spec:
  features:
    keystone:
      domain_specific_configuration:
        enabled: true
        domains:
          - name: domain.with.ldap
            enabled: true
            config:
              assignment:
                driver: keystone.assignment.backends.sql.Assignment
              identity:
                driver: ldap
              ldap:
                chase_referrals: false
                group_desc_attribute: description
                group_id_attribute: cn
                group_member_attribute: member
                group_name_attribute: ou
                group_objectclass: groupOfNames
                page_size: 0
                password: XXXXXXXXX
                query_scope: sub
                suffix: dc=mydomain,dc=com
                url: ldap://ldap01.mydomain.com,ldap://ldap02.mydomain.com
                user: uid=openstack,ou=people,o=mydomain,dc=com
                user_enabled_attribute: enabled
                user_enabled_default: false
                user_enabled_invert: true
                user_enabled_mask: 0
                user_id_attribute: uid
                user_mail_attribute: mail
                user_name_attribute: uid
                user_objectclass: inetOrgPerson
```
features:telemetry:mode¶
The information about Telemetry has been updated and is now published in the Telemetry services section. The feature is set to autoscaling by default.
features:logging¶
Specifies the standard logging levels for OpenStack services, which include the following, in order of increasing severity: TRACE, DEBUG, INFO, AUDIT, WARNING, ERROR, and CRITICAL.
For example:

```yaml
spec:
  features:
    logging:
      nova:
        level: DEBUG
```
features:horizon:themes¶
Available since MOSK Ussuri Update
Defines the list of custom OpenStack Dashboard themes. The content of the archive file with a theme depends on the level of customization and can include static files, Django templates, and other artifacts. For details, refer to the official OpenStack documentation: Customizing Horizon Themes.

```yaml
spec:
  features:
    horizon:
      themes:
        - name: theme_name
          description: The brand new theme
          url: https://<path to .tgz file with the contents of custom theme>
          sha256summ: <SHA256 checksum of the archive above>
```
features:policies¶
Available since MOSK 21.4
Defines the list of custom policies for OpenStack services.
Structure example:

```yaml
spec:
  features:
    policies:
      nova:
        custom_policy: custom_value
```

The list of services available for configuration includes: Cinder, Nova, Designate, Keystone, Glance, Neutron, Heat, Octavia, Barbican, Placement, Ironic, Aodh, Panko, Gnocchi, and Masakari.
Caution
Mirantis is not responsible for cloud operability in case of modifications of the default policies but provides an API to pass the required configuration to the core OpenStack services.
features:database:cleanup¶
Available since MOSK 21.6
Defines the cleanup of stale database entries that are marked by OpenStack services as deleted. The scripts run on a periodic basis as cron jobs. By default, the database entries older than 30 days are cleaned each Monday as per the following schedule:

| Service | Server time |
| --- | --- |
| Cinder | 12:01 a.m. |
| Nova | 01:01 a.m. |
| Glance | 02:01 a.m. |
| Masakari | 03:01 a.m. |
| Barbican | 04:01 a.m. |
| Heat | 05:01 a.m. |

The list of services available for configuration includes: Barbican, Cinder, Glance, Heat, Masakari, and Nova.
Structure example:

```yaml
spec:
  features:
    database:
      cleanup:
        <os-service>:
          enabled:
          schedule:
          age: 30
          batch: 1000
```
artifacts¶
A low-level section that defines the base URI prefixes for images and binary artifacts.
common¶
A low-level section that defines values that will be passed to all OpenStack (spec:common:openstack) or auxiliary (spec:common:infra) services Helm charts.
Structure example:

```yaml
spec:
  artifacts:
  common:
    openstack:
      values:
    infra:
      values:
```
services¶
A section of the lowest level that enables the definition of specific values to pass to specific Helm charts on a one-by-one basis.
Warning
Mirantis does not recommend changing the default settings for the spec:artifacts, spec:common, and spec:services elements. Customizations can compromise the OpenStack deployment update and upgrade processes.
However, you may need to edit the spec:services section to limit hardware resources in the case of a hyperconverged architecture as described in Limit HW resources for hyperconverged OpenStack compute nodes.
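A sketch of what a per-chart override might look like. The nesting below a service group (here compute and the nova chart) is an assumption based on the spec:common pattern above; the resource values are illustrative only:

```yaml
spec:
  services:
    # Hypothetical structure; verify the group/chart names against the CRD schema
    compute:
      nova:
        values:
          pod:
            resources:
              compute:
                requests:
                  cpu: "4"
                  memory: 8Gi
```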
Status OsDpl elements Removed¶
This feature has been removed in MOSK 22.1 in favor of the OpenStackDeploymentStatus (OsDplSt) custom resource.