This section describes how to deploy OpenStack on top of Kubernetes using the OpenStack Controller and the openstackdeployments.lcm.mirantis.com (OsDpl) CR.
To deploy an OpenStack cluster:
Verify that you have pre-configured the networking according to MOS Reference Architecture: Networking.
Verify that the TLS certificates that will be required for the OpenStack cluster deployment have been pre-generated.
Note
The Transport Layer Security (TLS) protocol is mandatory on public endpoints.
Caution
To avoid certificate renewal with subsequent OpenStack updates, during which additional services with new public endpoints may appear, we recommend using wildcard SSL certificates for public endpoints. For example, *.it.just.works, where it.just.works is the cluster public domain.
The sample code block below illustrates how to generate a self-signed certificate for the it.just.works domain. The procedure presumes that the cfssl and cfssljson tools are installed on the machine.
mkdir cert && cd cert

tee ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "8760h"
      }
    }
  }
}
EOF

tee ca-csr.json << EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "<country>",
    "ST": "<state>",
    "L": "<city>",
    "O": "<organization>",
    "OU": "<organization unit>"
  }]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

tee server-csr.json << EOF
{
  "CN": "*.it.just.works",
  "hosts": [
    "*.it.just.works"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [{
    "C": "US",
    "ST": "CA",
    "L": "San Francisco"
  }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
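Optionally, you can inspect the generated certificate before using it, for example, to confirm the wildcard common name, the validity period, and the CA signature. The commands below are a minimal sketch that assumes the files produced by the previous step (ca.pem, server.pem) are present in the current directory.
# Inspect the subject, issuer, and validity period of the server certificate
openssl x509 -in server.pem -noout -subject -issuer -dates

# Verify that the server certificate is signed by the generated CA
openssl verify -CAfile ca.pem server.pem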
Create the openstackdeployment.yaml file that will include the OpenStack cluster deployment configuration.
Note
The resource of kind OpenStackDeployment (OsDpl) is a custom resource defined by a resource of kind CustomResourceDefinition. The resource is validated with the help of the OpenAPI v3 schema.
Configure the OsDpl resource depending on the needs of your deployment. For the configuration details, refer to MOS Reference Architecture: OpenStackDeployment resource.
Example of an OsDpl CR with a minimal configuration:
apiVersion: lcm.mirantis.com/v1alpha1
kind: OpenStackDeployment
metadata:
  name: openstack-cluster
  namespace: openstack
spec:
  openstack_version: ussuri
  preset: compute
  size: tiny
  internal_domain_name: cluster.local
  public_domain_name: it.just.works
  features:
    ssl:
      public_endpoints:
        api_cert: |-
          # Update server certificate content
        api_key: |-
          # Update server private key content
        ca_cert: |-
          # Update CA certificate content
    neutron:
      tunnel_interface: ens3
      external_networks:
        - physnet: physnet1
          interface: veth-phy
          bridge: br-ex
          network_types:
            - flat
          vlan_ranges: null
          mtu: null
      floating_network:
        enabled: False
    nova:
      live_migration_interface: ens3
      images:
        backend: local
If required, enable DPDK, huge pages, and other supported Telco features as described in Advanced OpenStack configuration (optional).
To the openstackdeployment object, add information about the TLS certificates:
ssl:public_endpoints:ca_cert - CA certificate content (ca.pem)
ssl:public_endpoints:api_cert - server certificate content (server.pem)
ssl:public_endpoints:api_key - server private key (server-key.pem)
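Because the certificate and key content must be embedded into the OsDpl YAML as indented block scalars, it may help to print the PEM files with a leading indent and paste the output under the corresponding keys. The snippet below is only a helper sketch; adjust the number of spaces to match the nesting level in your openstackdeployment.yaml.
# Print each PEM file with a 10-space indent for pasting under
# spec:features:ssl:public_endpoints in openstackdeployment.yaml
for f in ca.pem server.pem server-key.pem; do
  echo "### ${f} ###"
  sed 's/^/          /' "${f}"
done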
Verify that the Load Balancer network does not overlap with your corporate or internal Kubernetes networks, for example, Calico IP pools. Also, verify that the pool of the Load Balancer network is big enough to provide IP addresses for all Amphora VMs (load balancers).
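To check for overlaps, you can list the IP pools used by your Kubernetes networking and compare the returned CIDRs with the Load Balancer network you plan to use. The command below is a sketch that assumes Calico is the CNI plugin and that it exposes IP pools through the ippools.crd.projectcalico.org CRD; the resource name may differ depending on how Calico was installed.
# List Calico IP pool CIDRs to compare with the planned Octavia management network
kubectl get ippools.crd.projectcalico.org -o custom-columns=NAME:.metadata.name,CIDR:.spec.cidr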
If required, reconfigure the Octavia network settings using the following sample structure:
spec:
  services:
    load-balancer:
      octavia:
        values:
          octavia:
            settings:
              lbmgmt_cidr: "10.255.0.0/16"
              lbmgmt_subnet_start: "10.255.1.0"
              lbmgmt_subnet_end: "10.255.255.254"
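Since the OsDpl resource is validated against an OpenAPI v3 schema, you can optionally run a server-side dry run to catch schema errors before the actual deployment. This is only a sketch; it assumes kubectl 1.18 or later and the file name used in this procedure, and it requires the openstack namespace to already exist.
# Validate the OsDpl object against the CRD schema without creating it
kubectl apply -f openstackdeployment.yaml --dry-run=server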
Trigger the OpenStack deployment:
kubectl apply -f openstackdeployment.yaml
Monitor the status of your OpenStack deployment:
kubectl -n openstack get pods
kubectl -n openstack describe osdpl osh-dev
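If you prefer a continuously updated view, or want to list only the pods that have not yet reached a running or completed state, the following examples may be helpful; pod names and readiness timing depend on your deployment size.
# Watch pod transitions in the openstack namespace
kubectl -n openstack get pods -w

# List only pods that are not yet Running or Succeeded
kubectl -n openstack get pods --field-selector=status.phase!=Running,status.phase!=Succeeded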
Assess the current status of the OpenStack deployment using the status section output in the OsDpl resource:
Get the OsDpl YAML file:
kubectl -n openstack get osdpl osh-dev -o yaml
Analyze the status output using the detailed description in MOS Reference Architecture: OpenStackDeployment resource: The Status elements.
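Instead of reading the full YAML output, you can extract only the status section with a JSONPath query. The field names under status are described in the reference above; the query below is only an illustrative sketch.
# Print only the status section of the OsDpl resource
kubectl -n openstack get osdpl osh-dev -o jsonpath='{.status}'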
Verify that the OpenStack cluster has been deployed:
client_pod_name=$(kubectl -n openstack get pods -l application=keystone,component=client | grep keystone-client | head -1 | awk '{print $1}')
kubectl -n openstack exec -it $client_pod_name -- openstack service list
Example of a positive system response:
+----------------------------------+---------------+----------------+
| ID | Name | Type |
+----------------------------------+---------------+----------------+
| 159f5c7e59784179b589f933bf9fc6b0 | cinderv3 | volumev3 |
| 6ad762f04eb64a31a9567c1c3e5a53b4 | keystone | identity |
| 7e265e0f37e34971959ce2dd9eafb5dc | heat | orchestration |
| 8bc263babe9944cdb51e3b5981a0096b | nova | compute |
| 9571a49d1fdd4a9f9e33972751125f3f | placement | placement |
| a3f9b25b7447436b85158946ca1c15e2 | neutron | network |
| af20129d67a14cadbe8d33ebe4b147a8 | heat-cfn | cloudformation |
| b00b5ad18c324ac9b1c83d7eb58c76f5 | radosgw-swift | object-store |
| b28217da1116498fa70e5b8d1b1457e5 | cinderv2 | volumev2 |
| e601c0749ce5425c8efb789278656dd4 | glance | image |
+----------------------------------+---------------+----------------+
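Optionally, you can run further checks from the same keystone-client pod to confirm that individual services are healthy, for example, that the compute services and network agents report an up state. These are standard OpenStack CLI commands and serve only as an example of additional verification.
# List compute services and network agents to confirm they are up
kubectl -n openstack exec -it $client_pod_name -- openstack compute service list
kubectl -n openstack exec -it $client_pod_name -- openstack network agent list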