This documentation provides information on how to deploy and operate Mirantis Container Cloud.
The documentation is intended to help operators understand the core concepts of the product.
The information provided in this documentation set is constantly improved and amended based on the feedback and requests from our software consumers. This documentation set describes the features that are supported within the two latest Container Cloud minor releases, marked with a corresponding Available since release note.
The following table lists the guides included in the documentation set you are reading:
Guide | Purpose
---|---
Reference Architecture | Learn the fundamentals of the Container Cloud reference architecture to plan your deployment.
Deployment Guide | Deploy Container Cloud of a preferred configuration using supported deployment profiles tailored to the demands of specific business cases.
Operations Guide | Deploy and operate the Container Cloud managed clusters.
Release Compatibility Matrix | Deployment compatibility of the Container Cloud component versions for each product release.
Release Notes | Learn about new features and bug fixes in the current Container Cloud version as well as in the Container Cloud minor releases.
For your convenience, we provide all guides from this documentation set in HTML (default), single-page HTML, PDF, and ePUB formats. To use the preferred format of a guide, select the required option from the Formats menu next to the guide title on the Container Cloud documentation home page.
This documentation assumes that the reader is familiar with network and cloud concepts and is intended for the following users:
Infrastructure Operator
Is a member of the IT operations team
Has working knowledge of Linux, virtualization, Kubernetes API and CLI, and OpenStack to support the application development team
Accesses Mirantis Container Cloud and Kubernetes through a local machine or web UI
Provides verified artifacts through a central repository to the Tenant DevOps engineers
Tenant DevOps engineer
Is a member of the application development team and reports to the line of business (LOB)
Has working knowledge of Linux, virtualization, Kubernetes API and CLI to support application owners
Accesses Container Cloud and Kubernetes through a local machine or web UI
Consumes artifacts from a central repository approved by the Infrastructure Operator
This documentation set uses the following conventions in the HTML format:
Convention | Description
---|---
boldface font | Inline CLI tools and commands, titles of the procedures and system response examples, table titles.
monospace font | File names and paths, Helm chart parameters and their values, names of packages, node names and labels, and so on.
italic font | Information that distinguishes some concept or term.
 | External links and cross-references, footnotes.
Main menu > menu item | GUI elements that include any part of the interactive user interface and menu navigation.
Superscript | Some extra, brief information. For example, if a feature is available starting from a specific release or if a feature is in the Technology Preview development stage.
Note (the Note block) | Messages of a generic meaning that may be useful to the user.
Caution (the Caution block) | Information that prevents a user from making mistakes and running into undesirable consequences when following the procedures.
Warning (the Warning block) | Messages that include details that can be easily missed but should not be ignored by the user and are valuable before proceeding.
See also (the See also block) | The list of references that may be helpful for understanding some related tools, concepts, and so on.
Learn more (the Learn more block) | Used in the Release Notes to wrap a list of internal references to the reference architecture, deployment, and operation procedures specific to a newly implemented product feature.
This documentation set includes descriptions of Technology Preview features. A Technology Preview feature provides early access to upcoming product innovations, allowing customers to experience the functionality and provide feedback during the development process. Technology Preview features may be privately or publicly available but are not intended for production use. While Mirantis will provide support for such features through official channels, normal Service Level Agreements do not apply. Customers may be supported by Mirantis Customer Support or Mirantis Field Support.
As Mirantis considers making future iterations of Technology Preview features generally available, we will attempt to resolve any issues that customers experience when using these features.
During the development of a Technology Preview feature, additional components may become available to the public for testing. Because Technology Preview features are still under development, Mirantis cannot guarantee their stability. As a result, if you are using Technology Preview features, you may not be able to seamlessly upgrade to subsequent releases of that feature. Mirantis makes no guarantees that Technology Preview features will be graduated to a generally available product release.
The Mirantis Customer Success Organization may create bug reports on behalf of support cases filed by customers. These bug reports will then be forwarded to the Mirantis Product team for possible inclusion in a future release.
The documentation set refers to Mirantis Container Cloud GA as the latest released GA version of the product. For details about the Container Cloud GA minor release dates, refer to Container Cloud releases.
Mirantis Container Cloud enables you to create, scale, and upgrade Kubernetes clusters on demand through a declarative API with a centralized identity and access management.
Container Cloud is installed once to deploy the management cluster. The management cluster is deployed through the bootstrap procedure on the OpenStack, AWS, or bare metal provider. StackLight is installed on both cluster types, management and managed, to provide metrics for each cluster separately. The baremetal-based deployment includes Ceph as a distributed storage system.
This section describes how to bootstrap a baremetal-based Mirantis Container Cloud management cluster.
The bare metal management system enables the Infrastructure Operator to deploy Mirantis Container Cloud on a set of bare metal servers. It also enables Container Cloud to deploy managed clusters on bare metal servers without a pre-provisioned operating system.
The Infrastructure Operator performs the following steps to install Container Cloud in a bare metal environment:
Install and connect hardware servers as described in Reference Architecture: Baremetal-based Container Cloud cluster.
Caution
The baremetal-based Container Cloud does not manage the underlay networking fabric but requires specific network configuration to operate.
Install Ubuntu 18.04 on one of the bare metal machines to create a seed node and copy the bootstrap tarball to this node.
Obtain the Mirantis license file that will be required during the bootstrap.
Create the deployment configuration files that include the bare metal hosts metadata.
Validate the deployment templates using fast preflight.
Run the bootstrap script for the fully automated installation of the management cluster onto the selected bare metal hosts.
Using the bootstrap script, the Container Cloud bare metal management system prepares the seed node for the management cluster and starts the deployment of Container Cloud itself. The bootstrap script performs all operations required for the automated management cluster setup. The deployment diagram below illustrates the bootstrap workflow of a baremetal-based management cluster.
This section describes how to prepare and bootstrap a baremetal-based management cluster. The procedure includes:
A runbook that describes how to create a seed node that is a temporary server used to run the management cluster bootstrap scripts.
Step-by-step instructions on how to prepare metadata for the bootstrap scripts and how to run them.
Before installing Mirantis Container Cloud on a bare metal environment, complete the following preparation steps:
Verify that the hardware allocated for the installation meets the minimal requirements described in Reference Architecture: Requirements for a baremetal-based Container Cloud.
Install a basic Ubuntu 18.04 server on the bare metal seed node using a standard installation image of the operating system.
Log in to the seed node that is running Ubuntu 18.04.
Create a virtual bridge to connect to your PXE network on the seed node. Use the following netplan-based configuration file as an example:
# cat /etc/netplan/config.yaml
network:
version: 2
renderer: networkd
ethernets:
ens3:
dhcp4: false
dhcp6: false
bridges:
br0:
addresses:
# Please, adjust for your environment
- 10.0.0.15/24
dhcp4: false
dhcp6: false
# Please, adjust for your environment
gateway4: 10.0.0.1
interfaces:
# Interface name may be different in your environment
- ens3
nameservers:
addresses:
# Please, adjust for your environment
- 8.8.8.8
parameters:
forward-delay: 4
stp: false
Apply the new network configuration using netplan:
sudo netplan apply
Verify the new network configuration:
sudo brctl show
Example of system response:
bridge name bridge id STP enabled interfaces
br0 8000.fa163e72f146 no ens3
Verify that the interface connected to the PXE network belongs to the previously configured bridge.
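For example, to list the interfaces attached to the bridge, assuming the bridge name br0 from the configuration above:
ip -br link show master br0
The output must include the interface connected to the PXE network, for example, ens3.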
Install the current Docker version available for Ubuntu 18.04:
sudo apt install docker.io
Grant your USER access to the Docker daemon:
sudo usermod -aG docker $USER
Log out and log in again to the seed node to apply the changes.
Verify that Docker is configured correctly and has access to Container Cloud CDN. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
The system output must contain a JSON response with no error messages.
In case of errors, follow the steps provided in Troubleshooting.
Proceed with Verify the seed node.
Before you proceed to bootstrapping the management cluster on bare metal, perform the following steps:
Verify that the seed node has direct access to the Baseboard Management
Controller (BMC) of each baremetal host. All target hardware nodes must
be in the power off
state.
For example, using the IPMI tool:
ipmitool -I lanplus -H 'IPMI IP' -U 'IPMI Login' -P 'IPMI password' \
chassis power status
Example of system response:
Chassis Power is off
Verify that you configured each bare metal host as follows:
Enable the boot NIC support for UEFI load. Usually, at least the built-in network interfaces support it.
Enable the UEFI-LAN-OPROM support in BIOS -> Advanced -> PCI/PCIe.
Enable the IPv4-PXE stack.
Set the following boot order:
UEFI-DISK
UEFI-PXE
If your PXE network is not configured to use the first network interface, fix the UEFI-PXE boot order to speed up node discovery by selecting only one required network interface.
Power off all bare metal hosts.
Warning
Only one Ethernet port on a host must be connected to the
Common/PXE network at any given time. The physical address
(MAC) of this interface must be noted and used to configure
the BareMetalHost
object describing the host.
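For illustration, the captured MAC address later goes into the bootMACAddress field of the BareMetalHost object. The fragment below is a sketch that assumes the upstream Metal3 BareMetalHost schema and uses hypothetical values; rely on templates/bm/baremetalhosts.yaml.template for the exact structure used by Container Cloud:
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: master-0
spec:
  # MAC address of the NIC connected to the Common/PXE network
  bootMACAddress: "0c:c4:7a:aa:bb:cc"
  bmc:
    # BMC endpoint reachable through the PXE network default gateway
    address: 192.168.100.11
    credentialsName: master-0-bmc-credentials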
Proceed with Prepare metadata and deploy the management cluster.
Using the example procedure below, replace the addresses and credentials in the configuration YAML files with the data from your environment. Keep everything else as is, including the file names and YAML structure.
The overall network mapping scheme with all L2 parameters, for example,
for a single 10.0.0.0/24
network, is described in the following table.
The configuration of each parameter indicated in this table is described
in the steps below.
Deployment file name |
Parameters and values |
---|---|
|
|
|
|
|
|
Log in to the seed node that you configured as described in Prepare the seed node.
Change to your preferred work directory, for example, your home directory:
cd $HOME
Download and run the Container Cloud bootstrap script to this directory:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Change the directory to the kaas-bootstrap
folder
created by the get_container_cloud.sh
script:
cd kaas-bootstrap
Obtain your license file that will be required during the bootstrap. See step 3 in Getting Started with Mirantis Container Cloud.
Save the license file as mirantis.lic
under the kaas-bootstrap
directory.
Create a copy of the current templates
directory for future reference.
mkdir templates.backup
cp -r templates/* templates.backup/
Update the cluster definition template in
templates/bm/cluster.yaml.template
according to the environment configuration. Use the table below.
Manually set all parameters that start with SET_
. For example,
SET_METALLB_ADDR_POOL
.
Parameter |
Description |
Example value |
---|---|---|
|
The IP address of the externally accessible API endpoint
of the management cluster. This address must NOT be
within the |
|
|
The IP range to be used as external load balancers for the Kubernetes
services with the |
|
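For example, you can substitute a SET_ placeholder in place using sed. The address pool below is an illustrative value only; use the values from your environment:
# Illustrative value only, adjust to your environment
sed -i 's/SET_METALLB_ADDR_POOL/10.0.0.61-10.0.0.80/g' templates/bm/cluster.yaml.template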
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/bm/cluster.yaml.template, add the ntp:servers section with the list of the required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: baremetal-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Inspect the default bare metal host profile definition in
templates/bm/baremetalhostprofiles.yaml.template
.
If your hardware configuration differs from the reference,
adjust the default profile to match. For details, see
Customize the default bare metal host profile.
Update the bare metal hosts definition template in
templates/bm/baremetalhosts.yaml.template
according to the environment configuration. Use the table below.
Manually set all parameters that start with SET_
.
Parameter |
Description |
Example value |
---|---|---|
|
The IPMI user name in base64 encoding to access the BMC. |
|
|
The IPMI password in base64 encoding to access the BMC. |
|
|
The MAC address of the first management master node in the PXE network. |
|
|
The IP address of the BMC endpoint for the first master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. |
|
|
The IPMI user name in base64 encoding to access the BMC. |
|
|
The IPMI password in base64 encoding to access the BMC. |
|
|
The MAC address of the second management master node in the PXE network. |
|
|
The IP address of the BMC endpoint for the second master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. |
|
|
The IPMI user name in base64 encoding to access the BMC. |
|
|
The IPMI password in base64 encoding to access the BMC. |
|
|
The MAC address of the third management master node in the PXE network. |
|
|
The IP address of the BMC endpoint for the third master node in the management cluster. Must be an address from the OOB network that is accessible through the PXE network default gateway. |
|
Update the IP address pools definition template in
templates/bm/ipam-objects.yaml.template
according to the environment configuration. Use the table below.
Manually set all parameters that start with SET_
.
For example, SET_IPAM_POOL_RANGE
.
Parameter |
Description |
Example value |
---|---|---|
|
The address of PXE network in CIDR notation.
Must be minimum in the |
|
|
The default gateway in the PXE network. Since this is the only network that Container Cloud will use, this gateway must provide access to:
|
|
|
An external (non-Kubernetes) DNS server accessible from the PXE network. This server will be used by the bare metal hosts in all Container Cloud clusters. |
|
|
This pool range includes addresses that will be allocated to bare metal hosts in all Container Cloud clusters. The size of this range limits the number of hosts that can be deployed by the instance of Container Cloud. |
|
|
The IP address of the externally accessible API endpoint
of the management cluster. This address must NOT be
within the |
|
|
The IP range to be used as external load balancers for the Kubernetes
services with the |
|
Optional. Skip this step to use the default password password
in the Container Cloud web UI.
Configure the IAM parameters:
Create hashed passwords for every IAM role: reader, writer, and operator (the operator role applies to bare metal deployments):
./bin/hash-generate -i 27500
The hash-generate utility requests you to enter a password and outputs the parameters required for the next step. Save the password that you enter in a secure location. This password will be used to access the Container Cloud web UI with a specific IAM role.
Example of system response:
passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
passwordHashAlgorithm: pbkdf2-sha256
passwordHashIterations: 27500
Run the tool several times to generate hashed passwords for every IAM role.
Open templates/cluster.yaml.template
for editing.
In the initUsers
section, add the following parameters for each
IAM role that you generated in the previous step:
passwordSalt
- base64-encoded randomly generated sequence of bytes.
passwordHash
- base64-encoded password hash generated using
passwordHashAlgorithm
with passwordHashIterations
.
Supported algorithms include pbkdf2-sha256 and pbkdf2-sha512.
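The fragment below is an illustrative sketch only: the exact nesting of the initUsers section in the template may differ, and the name key and hash values are hypothetical. Keep the structure that already exists in templates/cluster.yaml.template and only fill in the generated values for each role:
initUsers:
  # Hypothetical entry; repeat for the reader, writer, and operator roles
  - name: operator
    passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
    passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
    passwordHashAlgorithm: pbkdf2-sha256
    passwordHashIterations: 27500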
Optional. Configure external identity provider for IAM.
Configure the Ceph cluster:
In templates/bm/kaascephcluster.yaml.template
:
Configure dedicated networks clusterNet
and publicNet
for Ceph components.
Note
Before Container Cloud 2.4.0, the network:hostNetwork: true parameter is obligatory on production environments.
Starting from Container Cloud 2.4.0, this parameter is removed since Ceph uses the host network only.
Set up the disk configuration according to your hardware node
specification. Verify that the storageDevices
section
has a valid list of HDD device names and each device is empty,
that is, no file system is present on it. To enable all LCM features
of Ceph controller, set manageOsds
to true
.
If required, configure other parameters as described in Operations Guide: Ceph advanced configuration.
Configuration example:
manageOsds: true
...
# This part of KaaSCephCluster should contain valid networks definition
network:
clusterNet: 10.10.10.0/24
publicNet: 10.10.11.0/24
...
nodes:
master-0:
...
<node_name>:
...
# This part of KaaSCephCluster should contain valid device names
storageDevices:
- name: sdb
config:
deviceClass: hdd
# The storageDevices list can contain several devices
- name: sdc
config:
deviceClass: hdd
# All devices used by Ceph must also be marked for ``wipe`` in
# ``baremetalhosts.yaml.template``
- name: sdd
config:
deviceClass: hdd
# Do not include the first device here (such as vda or sda)
# because it is allocated for the operating system
In machines.yaml.template
, verify that the metadata:name
structure matches the machine names in the spec:nodes
structure of kaascephcluster.yaml.template
.
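For example, assuming the machine name master-0 used earlier, both templates must reference the same name. The fragments below are illustrative only:
# machines.yaml.template (fragment)
metadata:
  name: master-0
---
# kaascephcluster.yaml.template (fragment)
spec:
  nodes:
    master-0:
      ...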
Verify that the kaas-bootstrap
directory contains the following files:
# tree ~/kaas-bootstrap
~/kaas-bootstrap/
....
├── bootstrap.sh
├── kaas
├── mirantis.lic
├── releases
...
├── templates
....
│ ├── bm
│ │ ├── baremetalhostprofiles.yaml.template
│ │ ├── baremetalhosts.yaml.template
│ │ ├── cluster.yaml.template
│ │ ├── ipam-objects.yaml.template
│ │ ├── kaascephcluster.yaml.template
│ │ └── machines.yaml.template
....
├── templates.backup
....
Export all required parameters using the table below.
export KAAS_BM_ENABLED="true"
#
export KAAS_BM_PXE_IP="10.0.0.20"
export KAAS_BM_PXE_MASK="24"
export KAAS_BM_PXE_BRIDGE="br0"
#
export KAAS_BM_BM_DHCP_RANGE="10.0.0.30,10.0.0.49"
#
export KEYCLOAK_FLOATING_IP="10.0.0.70"
export IAM_FLOATING_IP="10.0.0.71"
export PROXY_FLOATING_IP="10.0.0.72"
unset KAAS_BM_FULL_PREFLIGHT
Parameter |
Description |
Example value |
---|---|---|
|
The provisioning IP address. This address will be assigned to the
interface of the seed node defined by the |
|
|
The CIDR prefix for the PXE network. It will be used with all of the addresses below when assigning them to interfaces. |
|
|
The PXE network bridge name. The name must match the name of the bridge created on the seed node during the Prepare the seed node stage. |
|
|
The |
|
|
The |
|
|
The |
|
|
The |
|
|
The full verification preflight check that validates the deployment templates before bootstrap. Unset this variable, as in the export example above, to run the fast preflight check instead. |
|
Run the verification preflight
script to validate the deployment
templates configuration:
./bootstrap.sh preflight
The command outputs a human-readable report with the verification details.
The report includes the list of verified bare metal nodes and their
Chassis Power
status.
This status is based on the deployment templates configuration used
during the verification.
Caution
If the report contains information about missing dependencies or incorrect configuration, fix the issues before proceeding to the next step.
Run the bootstrap script:
./bootstrap.sh all
Warning
During the bootstrap process, do not manually restart or power off any of the bare metal hosts.
When the bootstrap is complete, collect and save the following management cluster details in a secure location:
The kubeconfig
file located in the same directory as the bootstrap
script. This file contains the admin credentials
for the management cluster.
The private SSH key openstack_tmp
located in ~/.ssh/
for access to the management cluster nodes.
Note
The SSH key name openstack_tmp
is the same for all cloud
providers. This name will be changed in one of the following
Container Cloud releases to avoid confusion
with a cloud provider name and its related SSH key name.
The URL and credentials for the Container Cloud web UI. The system outputs these details when the bootstrap completes.
The StackLight endpoints. For details, see Operations Guide: Access StackLight web UIs.
The Keycloak URL that the system outputs when the bootstrap completes.
The admin password for Keycloak is located in
kaas-bootstrap/passwords.yml
along with other IAM passwords.
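For example, you can verify access to the new management cluster using the collected artifacts. The kubeconfig path assumes the default kaas-bootstrap location, and the node user name and IP address are placeholders:
export KUBECONFIG=~/kaas-bootstrap/kubeconfig
kubectl get nodes
# SSH to a management cluster node using the generated private key
ssh -i ~/.ssh/openstack_tmp <node-user>@<management-node-IP>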
Note
When the bootstrap is complete, the bootstrap cluster resources are freed up.
This section describes the bare metal host profile settings and instructs how to configure this profile before deploying Mirantis Container Cloud on physical servers.
The bare metal host profile is a Kubernetes custom resource. It allows the Infrastructure Operator to define how the storage devices and the operating system are provisioned and configured.
The bootstrap templates for a bare metal deployment include the following file that defines the default BareMetalHostProfile object, that is, the default bare metal host profile:
templates/bm/baremetalhostprofiles.yaml.template
The customization procedure of BareMetalHostProfile
is almost the same for
the management and managed clusters, with the following differences:
For a management cluster, the customization automatically applies
to machines during bootstrap. And for a managed cluster, you apply
the changes using kubectl
before creating a managed cluster.
For a management cluster, you edit the default
baremetalhostprofiles.yaml.template
. And for a managed cluster, you
create a new BareMetalHostProfile
with the necessary configuration.
For the procedure details, see Operations Guide: Create a custom bare metal host profile. Use this procedure for both types of clusters considering the differences described above.
This section describes how to bootstrap an OpenStack-based Mirantis Container Cloud management cluster.
The Infrastructure Operator performs the following steps to install Mirantis Container Cloud on an OpenStack-based environment:
Prepare an OpenStack environment with the requirements described in Reference Architecture: OpenStack-based cluster requirements.
Prepare the bootstrap node using Prerequisites.
Obtain the Mirantis license file that will be required during the bootstrap.
Prepare the OpenStack clouds.yaml
file.
Create and configure the deployment configuration files that include the cluster and machines metadata.
Run the bootstrap script for the fully automated installation of the management cluster.
For more details, see Bootstrap a management cluster.
Before you start with bootstrapping the OpenStack-based management cluster, complete the following prerequisite steps:
Verify that your planned cloud meets the reference hardware bill of material and software requirements as described in Reference Architecture: Requirements for an OpenStack-based Mirantis Container Cloud.
Configure the bootstrap node:
Log in to any personal computer or VM running Ubuntu 18.04 that you will be using as the bootstrap node.
If you use a newly created VM, run:
sudo apt-get update
Install the current Docker version available for Ubuntu 18.04:
sudo apt install docker.io
Grant your USER
access to the Docker daemon:
sudo usermod -aG docker $USER
Log off and log in again to the bootstrap node to apply the changes.
Verify that Docker is configured correctly and has access to Container Cloud CDN. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Proceed to Bootstrap a management cluster.
After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your OpenStack-based Mirantis Container Cloud management cluster.
To bootstrap an OpenStack-based management cluster:
Log in to the bootstrap node running Ubuntu 18.04 that is configured as described in Prerequisites.
Download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Change the directory to the kaas-bootstrap
folder
created by the get_container_cloud.sh
script.
Obtain your license file that will be required during the bootstrap. See step 3 in Getting Started with Mirantis Container Cloud.
Save the license file as mirantis.lic
under the kaas-bootstrap
directory.
Prepare the OpenStack configuration for a new cluster:
Log in to the OpenStack Horizon.
In the Project section, select API Access.
In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.
Add the downloaded clouds.yaml
file to the directory with the
bootstrap.sh
script.
In clouds.yaml
, add the password
field with your OpenStack
password under the clouds/openstack/auth
section.
Example:
clouds:
openstack:
auth:
auth_url: https://auth.openstack.example.com:5000/v3
username: your_username
password: your_secret_password
project_id: your_project_id
user_domain_name: your_user_domain_name
region_name: RegionOne
interface: public
identity_api_version: 3
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://auth.openstack.example.com:5000/v3"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Configure the cluster and machines metadata:
Change the directory to the kaas-bootstrap
folder.
In templates/machines.yaml.template
,
modify the spec:providerSpec:value
section for 3 control plane nodes
marked with the cluster.sigs.k8s.io/control-plane
label
by substituting the flavor
and image
parameters
with the corresponding values of the control plane nodes in the related
OpenStack cluster. For example:
spec: &cp_spec
providerSpec:
value:
apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
kind: "OpenstackMachineProviderSpec"
flavor: kaas.minimal
image: bionic-server-cloudimg-amd64-20190612
Also, modify other parameters as required.
Modify the templates/cluster.yaml.template
parameters to fit your
deployment. For example, add the corresponding values for cidrBlocks
in the spec::clusterNetwork::services
section.
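A minimal sketch of this section, assuming the standard Cluster API clusterNetwork layout; the CIDR value is illustrative only:
spec:
  clusterNetwork:
    services:
      cidrBlocks:
        # Illustrative value, adjust to your environment
        - 10.96.0.0/16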
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/cluster.yaml.template, add the ntp:servers section with the list of the required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: openstack-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Note
The passwordSalt
and passwordHash
values for the IAM
roles are automatically re-generated during the IAM
configuration described below in this procedure.
Available since 2.5.0 Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables
to bootstrap the management and regional cluster using proxy:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable |
Format |
---|---|
|
|
|
Comma-separated list of IP addresses or domain names |
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Optional. Skip this step to use the default password password
in the Container Cloud web UI.
Configure the IAM parameters:
Create hashed passwords for every IAM role: reader, writer, and operator (the operator role applies to bare metal deployments):
./bin/hash-generate -i 27500
The hash-generate utility requests you to enter a password and outputs the parameters required for the next step. Save the password that you enter in a secure location. This password will be used to access the Container Cloud web UI with a specific IAM role.
Example of system response:
passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
passwordHashAlgorithm: pbkdf2-sha256
passwordHashIterations: 27500
Run the tool several times to generate hashed passwords for every IAM role.
Open templates/cluster.yaml.template
for editing.
In the initUsers
section, add the following parameters for each
IAM role that you generated in the previous step:
passwordSalt
- base64-encoded randomly generated sequence of bytes.
passwordHash
- base64-encoded password hash generated using
passwordHashAlgorithm
with passwordHashIterations
.
Supported algorithms include pbkdf2-sha256 and pbkdf2-sha512.
Optional. Configure external identity provider for IAM.
Run the bootstrap script:
./bootstrap.sh all
When the bootstrap is complete, collect and save the following management cluster details in a secure location:
The kubeconfig
file located in the same directory as the bootstrap
script. This file contains the admin credentials
for the management cluster.
The private SSH key openstack_tmp
located in ~/.ssh/
for access to the management cluster nodes.
Note
The SSH key name openstack_tmp
is the same for all cloud
providers. This name will be changed in one of the following
Container Cloud releases to avoid confusion
with a cloud provider name and its related SSH key name.
The URL and credentials for the Container Cloud web UI. The system outputs these details when the bootstrap completes.
The StackLight endpoints. For details, see Operations Guide: Access StackLight web UIs.
The Keycloak URL that the system outputs when the bootstrap completes.
The admin password for Keycloak is located in
kaas-bootstrap/passwords.yml
along with other IAM passwords.
Note
When the bootstrap is complete, the bootstrap cluster resources are freed up.
In case of deployment issues, collect and inspect the bootstrap and management cluster logs as described in Troubleshooting.
Optional. Deploy an additional regional cluster as described in Deploy an additional regional cluster (optional).
Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create an OpenStack-based managed cluster.
This section describes how to bootstrap a Mirantis Container Cloud management cluster that is based on the Amazon Web Services (AWS) cloud provider.
The Infrastructure Operator performs the following steps to install Mirantis Container Cloud on an AWS-based environment:
Prepare an AWS environment with the requirements described in Reference Architecture: AWS-based Container Cloud cluster requirements.
Prepare the bootstrap node as per Prerequisites.
Obtain the Mirantis license file that will be required during the bootstrap.
Prepare the AWS environment credentials.
Create and configure the deployment configuration files that include the cluster and machines metadata.
Run the bootstrap script for the fully automated installation of the management cluster.
For more details, see Bootstrap a management cluster.
Before you start with bootstrapping the AWS-based management cluster, complete the following prerequisite steps:
Inspect the Requirements for an AWS-based Container Cloud cluster to understand the potential impact of the Container Cloud deployment on your AWS cloud usage.
Configure the bootstrap node:
Log in to any personal computer or VM running Ubuntu 18.04 that you will be using as the bootstrap node.
If you use a newly created VM, run:
sudo apt-get update
Install the current Docker version available for Ubuntu 18.04:
sudo apt install docker.io
Grant your USER
access to the Docker daemon:
sudo usermod -aG docker $USER
Log off and log in again to the bootstrap node to apply the changes.
Verify that Docker is configured correctly and has access to Container Cloud CDN. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Proceed to Bootstrap a management cluster.
After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your AWS-based Mirantis Container Cloud management cluster.
To bootstrap an AWS-based management cluster:
Log in to the bootstrap node running Ubuntu 18.04 that is configured as described in Prerequisites.
Download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Change the directory to the kaas-bootstrap
folder
created by the get_container_cloud.sh
script.
Obtain your license file that will be required during the bootstrap. See step 3 in Getting Started with Mirantis Container Cloud.
Save the license file as mirantis.lic
under the kaas-bootstrap
directory.
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://ec2.amazonaws.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
In templates/aws/machines.yaml.template
,
modify the spec:providerSpec:value
section
by substituting the ami:id
parameter with the corresponding value
for Ubuntu 18.04 from the required AWS region. For example:
spec:
providerSpec:
value:
apiVersion: aws.kaas.mirantis.com/v1alpha1
kind: AWSMachineProviderSpec
instanceType: c5d.2xlarge
ami:
id: ami-033a0960d9d83ead0
Also, modify other parameters as required.
Optional. In templates/aws/cluster.yaml.template
, modify the default
AWS instance types and AMIs configuration for further creation
of managed clusters:
providerSpec:
value:
...
kaas:
...
regional:
- provider: aws
helmReleases:
- name: aws-credentials-controller
values:
config:
allowedInstanceTypes:
minVCPUs: 8
# in MiB
minMemory: 16384
# in GB
minStorage: 120
supportedArchitectures:
- "x86_64"
filters:
- name: instance-storage-info.disk.type
values:
- "ssd"
allowedAMIs:
-
- name: name
values:
- "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20200729"
- name: owner-id
values:
- "099720109477"
Also, modify other parameters as required.
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/aws/cluster.yaml.template, add the ntp:servers section with the list of the required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: aws-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Generate the AWS Access Key ID with Secret Access Key for the admin
user and select the AWS default region name.
For details, see AWS General Reference: Programmatic access.
Export the following parameters by adding the corresponding values
for the AWS admin
credentials created in the previous step:
export KAAS_AWS_ENABLED=true
export AWS_SECRET_ACCESS_KEY=XXXXXXX
export AWS_ACCESS_KEY_ID=XXXXXXX
export AWS_DEFAULT_REGION=us-east-2
For Container Cloud to communicate with the AWS APIs, create the AWS CloudFormation stack that contains properly configured IAM users and policies:
./kaas bootstrap aws policy
If you do not have access to create the CloudFormation stack, users, or policies:
Log in to your AWS Management Console.
On the home page, expand the upper right menu with your user name and capture your Account ID.
Create the CloudFormation template:
./kaas bootstrap aws policy --account-id <accountId> --dump > cf.yaml
Substitute the parameter enclosed in angle brackets with the corresponding value.
Send the cf.yaml
template to your AWS account admin to create
the CloudFormation stack from this template.
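For reference, if the account admin uses the AWS CLI, the stack can be created from this template as follows. The stack name is a hypothetical example:
aws cloudformation create-stack \
  --stack-name container-cloud-iam \
  --template-body file://cf.yaml \
  --capabilities CAPABILITY_NAMED_IAM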
Generate the AWS Access Key ID with Secret Access Key for the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com user that was created in the previous step, and select the AWS default region name.
Export the AWS bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
user credentials that were created in the previous step:
export KAAS_AWS_ENABLED=true
export AWS_SECRET_ACCESS_KEY=XXXXXXX
export AWS_ACCESS_KEY_ID=XXXXXXX
export AWS_DEFAULT_REGION=us-east-2
Optional. Skip this step to use the default password password
in the Container Cloud web UI.
Configure the IAM parameters:
Create hashed passwords for every IAM role: reader, writer, and operator (the operator role applies to bare metal deployments):
./bin/hash-generate -i 27500
The hash-generate utility requests you to enter a password and outputs the parameters required for the next step. Save the password that you enter in a secure location. This password will be used to access the Container Cloud web UI with a specific IAM role.
Example of system response:
passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
passwordHashAlgorithm: pbkdf2-sha256
passwordHashIterations: 27500
Run the tool several times to generate hashed passwords for every IAM role.
Open templates/cluster.yaml.template
for editing.
In the initUsers
section, add the following parameters for each
IAM role that you generated in the previous step:
passwordSalt
- base64-encoded randomly generated sequence of bytes.
passwordHash
- base64-encoded password hash generated using
passwordHashAlgorithm
with passwordHashIterations
.
Supported algorithms include pbkdf2-sha256 and pbkdf2-sha512.
Optional. Configure external identity provider for IAM.
Run the bootstrap script:
./bootstrap.sh all
When the bootstrap is complete, collect and save the following management cluster details in a secure location:
The kubeconfig
file located in the same directory as the bootstrap
script. This file contains the admin credentials
for the management cluster.
The private SSH key openstack_tmp
located in ~/.ssh/
for access to the management cluster nodes.
Note
The SSH key name openstack_tmp
is the same for all cloud
providers. This name will be changed in one of the following
Container Cloud releases to avoid confusion
with a cloud provider name and its related SSH key name.
The URL and credentials for the Container Cloud web UI. The system outputs these details when the bootstrap completes.
The StackLight endpoints. For details, see Operations Guide: Access StackLight web UIs.
The Keycloak URL that the system outputs when the bootstrap completes.
The admin password for Keycloak is located in
kaas-bootstrap/passwords.yml
along with other IAM passwords.
Note
When the bootstrap is complete, the bootstrap cluster resources are freed up.
In case of deployment issues, collect and inspect the bootstrap and management cluster logs as described in Troubleshooting.
Optional. Deploy an additional regional cluster of a different provider type as described in Deploy an additional regional cluster (optional).
Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create an AWS-based managed cluster.
Caution
This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For details about the Mirantis Technology Preview support scope, see the Preface section of this guide.
Note
In scope of Technology Preview support for the VMWare vSphere cloud provider, StackLight deployed on a management cluster has limitations related to alerts and Grafana dashboards. For details, see StackLight support for VMWare vSphere.
This section describes how to bootstrap a VMWare vSphere-based Mirantis Container Cloud management cluster.
Perform the following steps to install Mirantis Container Cloud on a VMWare vSphere-based environment:
Prepare a vSphere environment with the requirements described in Reference Architecture: VMWare vSphere-based cluster requirements.
Prepare the bootstrap node as described in Prerequisites.
Obtain the Mirantis license file to use during the bootstrap.
Set up the VMWare accounts for deployment as described in VMWare deployment users.
Create and configure the deployment configuration files that include the cluster and machines metadata as described in Bootstrap a management cluster.
Prepare the OVF template for the management cluster nodes using OVF template requirements.
Run the bootstrap script for the fully automated installation of the management cluster.
For more details, see Bootstrap a management cluster.
Before bootstrapping a VMWare vSphere-based management cluster, complete the following prerequisite steps:
Verify that your planned cloud meets the reference hardware bill of material and software requirements as described in Reference Architecture: Requirements for a VMWare vSphere-based Container Cloud cluster.
Configure the bootstrap node:
Log in to any personal computer or VM running Ubuntu 18.04 that you will be using as the bootstrap node.
If you use a newly created VM, run:
sudo apt-get update
Install the current Docker version available for Ubuntu 18.04:
sudo apt install docker.io
Grant your USER
access to the Docker daemon:
sudo usermod -aG docker $USER
Log off and log in again to the bootstrap node to apply the changes.
Verify that Docker is configured correctly and has access to Container Cloud CDN. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://binary.mirantis.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
To deploy Mirantis Container Cloud on a VMWare vSphere-based environment, prepare the following VMWare accounts:
Log in to the vCenter Server Web Console.
Create a read-only virt-who
user.
The virt-who
user requires at least read-only access
to all objects in the vCenter Data Center.
The virt-who
service on RHEL machines will be provided with the
virt-who
user credentials in order to properly manage RHEL
subscriptions.
For details on how to create the virt-who
user, refer to the
official RedHat Customer Portal
documentation.
Create the cluster-api
user with the following privileges:
Note
Container Cloud uses two separate vSphere accounts for:
Cluster API related operations, such as creating or deleting VMs, and preparation of the OVF template using Packer
Storage operations, such as dynamic PVC provisioning
You can also create one user that has all privilege sets mentioned above.
Privilege |
Permission |
---|---|
Content library |
|
Datastore |
|
Folder |
|
Global |
Cancel task |
Host local operations |
|
Network |
Assign network |
Resource |
Assign virtual machine to resource pool |
Scheduled task |
|
Sessions |
|
Storage views |
View |
Tasks |
|
Privilege |
Permission |
---|---|
Change configuration |
|
Interaction |
|
Inventory |
|
Provisioning |
|
Snapshot management |
|
vSphere replication |
Monitor replication |
Create the storage
user with the following privileges:
Note
For more details about all required privileges
for the storage
user, see vSphere Cloud Provider
documentation.
Privilege |
Permission |
---|---|
Cloud Native Storage |
Searchable |
Content library |
View configuration settings |
Datastore |
|
Folder |
|
Host configuration |
|
Host local operations |
|
Host profile |
View |
Profile-driven storage |
Profile-driven storage view |
Resource |
Assign virtual machine to resource pool |
Scheduled task |
|
Sessions |
|
Storage views |
View |
Privilege |
Permission |
---|---|
Change configuration |
|
Inventory |
|
Now, proceed to Bootstrap a management cluster.
Caution
This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For details about the Mirantis Technology Preview support scope, see the Preface section of this guide.
Note
In scope of Technology Preview support for the VMWare vSphere cloud provider, StackLight deployed on a management cluster has limitations related to alerts and Grafana dashboards. For details, see StackLight support for VMWare vSphere.
After you complete the prerequisite steps described in Prerequisites, proceed with bootstrapping your VMWare vSphere-based Mirantis Container Cloud management cluster.
To bootstrap a vSphere-based management cluster:
Log in to the bootstrap node running Ubuntu 18.04 that is configured as described in Prerequisites.
Download and run the Container Cloud bootstrap script:
wget https://binary.mirantis.com/releases/get_container_cloud.sh
chmod 0755 get_container_cloud.sh
./get_container_cloud.sh
Change the directory to the kaas-bootstrap
folder
created by the get_container_cloud.sh
script.
Obtain your license file that will be required during the bootstrap. See step 3 in Getting Started with Mirantis Container Cloud.
Save the license file as mirantis.lic
under the kaas-bootstrap
directory.
In templates/vsphere/rhellicenses.yaml.template
,
set the user name and password of your RedHat Customer Portal account
associated with your RHEL license for Virtual Datacenters.
Optionally, specify the subscription allocation pools to use for the RHEL
subscriptions activation. If you leave the pool field empty,
subscription-manager
will automatically select the licenses for
machines.
Modify templates/vsphere/vsphere-config.yaml.template
:
Parameter |
Description |
---|---|
|
IP address or FQDN of the vCenter Server. |
|
Port of the vCenter Server. For example, |
|
vSphere data center name. |
|
Flag that controls validation of the vSphere Server certificate.
Must be |
|
vSphere Cluster API provider user name. For details, see Prepare the VMWare deployment user setup and permissions. |
|
vSphere Cluster API provider user password. |
|
vSphere Cloud Provider deployment user name. For details, see Prepare the VMWare deployment user setup and permissions. |
|
vSphere Cloud Provider deployment user password. |
Modify the templates/vsphere/cluster.yaml.template
parameters
to fit your deployment. For example, add the corresponding values
for cidrBlocks
in the spec::clusterNetwork::services
section.
Required parameters:
Parameter |
Description |
---|---|
|
Name of the vSphere datastore. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider. |
|
Path to a folder where the cluster machines metadata will be stored. |
|
Path to a network for cluster machines. |
|
Path to a resource pool in which VMs will be created. |
Note
The passwordSalt
and passwordHash
values for the IAM
roles are automatically re-generated during the IAM
configuration described below in this procedure.
In bootstrap.env
, add the following environment variables:
Note
For the Keycloak and IAM services variables,
assign IP addresses from the end of the provided MetalLB range.
For example, if the MetalLB range is 10.20.0.30-10.20.0.50
,
select 10.20.0.48
and 10.20.0.49
as IPs for KeyCloak
and IAM.
Parameter |
Description |
---|---|
|
Set to |
|
IP address for Keycloak from the end of the MetalLB range. |
|
IP address for IAM from the end of MetalLB range. |
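For example, the resulting bootstrap.env entries may look as follows. The variable names are assumed to match those used elsewhere in this guide (KAAS_VSPHERE_ENABLED, KEYCLOAK_FLOATING_IP, IAM_FLOATING_IP), and the addresses follow the 10.20.0.30-10.20.0.50 MetalLB range from the note above. Verify both against your templates and environment before use:
export KAAS_VSPHERE_ENABLED=true
export KEYCLOAK_FLOATING_IP="10.20.0.49"
export IAM_FLOATING_IP="10.20.0.48"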
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/vsphere/cluster.yaml.template, add the ntp:servers section with the list of the required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: vsphere-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Prepare the OVF template as described in Prepare the OVF template.
In templates/vsphere/machines.yaml.template
:
Define SSH_USER_NAME
. The default SSH user name is cloud-user
.
Define SET_VSPHERE_TEMPLATE_PATH
prepared in the previous step.
Modify other parameters as required.
spec:
providerSpec:
value:
apiVersion: vsphere.cluster.k8s.io/v1alpha1
kind: VsphereMachineProviderSpec
sshUserName: SSH_USER_NAME
rhelLicense: kaas-mgmt-rhel-license
network:
devices:
- dhcp4: true
dhcp6: false
template: SET_VSPHERE_TEMPLATE_PATH
Available since 2.5.0, Technology Preview Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables
to bootstrap the management and regional cluster using proxy:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable |
Format |
---|---|
|
|
|
Comma-separated list of IP addresses or domain names |
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Optional. Skip this step to use the default password password
in the Container Cloud web UI.
Configure the IAM parameters:
Create hashed passwords for every IAM role: reader, writer, and operator (the operator role applies to bare metal deployments):
./bin/hash-generate -i 27500
The hash-generate utility requests you to enter a password and outputs the parameters required for the next step. Save the password that you enter in a secure location. This password will be used to access the Container Cloud web UI with a specific IAM role.
Example of system response:
passwordSalt: 6ibPZdUfQK8PsOpSmyVJnA==
passwordHash: 23W1l65FBdI3NL7LMiUQG9Cu62bWLTqIsOgdW8xNsqw=
passwordHashAlgorithm: pbkdf2-sha256
passwordHashIterations: 27500
Run the tool several times to generate hashed passwords for every IAM role.
Open templates/cluster.yaml.template
for editing.
In the initUsers
section, add the following parameters for each
IAM role that you generated in the previous step:
passwordSalt
- base64-encoded randomly generated sequence of bytes.
passwordHash
- base64-encoded password hash generated using
passwordHashAlgorithm
with passwordHashIterations
.
Supported algorithms include pbkdf2-sha256 and pbkdf2-sha512.
Optional. Configure external identity provider for IAM.
Run the bootstrap script:
./bootstrap.sh all
When the bootstrap is complete, collect and save the following management cluster details in a secure location:
The kubeconfig
file located in the same directory as the bootstrap
script. This file contains the admin credentials
for the management cluster.
The private SSH key openstack_tmp
located in ~/.ssh/
for access to the management cluster nodes.
Note
The SSH key name openstack_tmp
is the same for all cloud
providers. This name will be changed in one of the following
Container Cloud releases to avoid confusion
with a cloud provider name and its related SSH key name.
The URL and credentials for the Container Cloud web UI. The system outputs these details when the bootstrap completes.
The StackLight endpoints. For details, see Operations Guide: Access StackLight web UIs.
The Keycloak URL that the system outputs when the bootstrap completes.
The admin password for Keycloak is located in
kaas-bootstrap/passwords.yml
along with other IAM passwords.
Note
When the bootstrap is complete, the bootstrap cluster resources are freed up.
Now, you can proceed with operating your management cluster using the Container Cloud web UI and deploying managed clusters as described in Create a VMWare vSphere-based managed cluster.
To deploy Mirantis Container Cloud on a vSphere-based environment, the OVF template for cluster machines must be prepared according to the following requirements:
The VMWare Tools package is installed.
The cloud-init
utility is installed and configured with the
specific VMwareGuestInfo
data source.
The virt-who
service is enabled and configured
to connect to the VMWare vCenter Server to properly apply the
RHEL subscriptions on the nodes.
The following procedures describe how to meet the requirements above either using the Container Cloud script or manually.
To prepare the OVF template using the Container Cloud script:
Prepare the Container Cloud bootstrap and modify
vsphere-config.yaml.template
and
templates/vsphere/cluster.yaml.template
as described in Bootstrap a management cluster, steps 1-9.
Download the RHEL 7.8 DVD ISO from the RedHat Customer Portal.
Export the following variables:
The virt-who
user name and password.
The path to the RHEL 7.8 DVD ISO file.
The vSphere cluster name.
For example:
export KAAS_VSPHERE_ENABLED=true
export VSPHERE_RO_USER=virt-who-user
export VSPHERE_RO_PASSWORD=virt-who-user-password
export VSPHERE_PACKER_ISO_FILE=$(pwd)/rhel-7.8.dvd.iso
export VSPHERE_CLUSTER_NAME=vsphere-cluster-name
Available since 2.5.0, Technology Preview Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable |
Format |
---|---|
|
|
|
Comma-separated list of IP addresses or domain names |
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Prepare the OVF template:
./bootstrap.sh vsphere_template
After the template is prepared, set the SET_VSPHERE_TEMPLATE_PATH
parameter in templates/vsphere/machines.yaml.template
as described
in Bootstrap a management cluster.
To prepare the OVF template manually:
Run a virtual machine on the vSphere data center from the official RHEL 7.8 server image. Specify the amount of resources that will be used in the Container Cloud setup. The minimal resource configuration must match the requirements for a vSphere-based Container Cloud cluster.
Select the minimal setup in the VM installation configuration. Create a user with root or sudo permissions to access the machine.
Log in to the VM when it starts.
Available since 2.5.0, Technology Preview Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable |
Format |
---|---|
|
|
|
Comma-separated list of IP addresses or domain names |
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Attach your RHEL license for Virtual Datacenters to the VM:
subscription-manager register
# automatic subscription selection:
subscription-manager attach --auto
# or specify pool id:
subscription-manager attach --pool=<POOL_ID>
# verify subscription status
subscription-manager status
Select from the following options:
Prepare the operating system automatically:
Download the automation script:
curl https://gerrit.mcp.mirantis.com/plugins/gitiles/kubernetes/vmware-guestinfo/+/refs/tags/v1.1.1/install.sh?format=TEXT | \
base64 -d > install.sh
chmod +x install.sh
Export the vCenter Server credentials of the read-only user. For example:
export VC_SERVER='vcenter1.example.com'
export VC_USER='domain\vmware_read_only_username'
export VC_PASSWORD='password!23'
# optional parameters:
export VC_HYPERVISOR_ID=hostname
export VC_FILTER_HOSTS="esx1.example.com, esx2.example.com"
export VCENTER_CONFIG_PATH="/etc/virt-who.d/vcenter.conf"
Run the installation script:
./install.sh
Prepare the operating system manually:
Install open-vm-tools
:
yum install open-vm-tools -y
Install and configure cloud-init
:
Download the VMwareGuestInfo
data source files:
curl https://gerrit.mcp.mirantis.com/plugins/gitiles/kubernetes/vmware-guestinfo/+/refs/tags/v1.1.1/DataSourceVMwareGuestInfo.py?format=TEXT | \
base64 -d > DataSourceVMwareGuestInfo.py
curl https://gerrit.mcp.mirantis.com/plugins/gitiles/kubernetes/vmware-guestinfo/+/refs/tags/v1.1.1/99-DataSourceVMwareGuestInfo.cfg?format=TEXT | \
base64 -d > 99-DataSourceVMwareGuestInfo.cfg
Add 99-DataSourceVMwareGuestInfo.cfg
to /etc/cloud/cloud.cfg.d/
.
Depending on the Python version on the VM operating system,
add DataSourceVMwareGuestInfo.py
to the cloud-init
sources
folder.
To locate the cloud-init sources folder on RHEL, run:
yum install cloud-init -y
python -c 'import os; from cloudinit import sources; print(os.path.dirname(sources.__file__));'
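For example, assuming the data source files were downloaded to the current directory, you can copy them into place in one step:
# Resolve the cloud-init sources folder and copy the data source files
CLOUD_INIT_SOURCES=$(python -c 'import os; from cloudinit import sources; print(os.path.dirname(sources.__file__));')
cp DataSourceVMwareGuestInfo.py "${CLOUD_INIT_SOURCES}/"
cp 99-DataSourceVMwareGuestInfo.cfg /etc/cloud/cloud.cfg.d/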
Prepare the virt-who
user configuration:
Note
For details about the virt-who
user creation,
see Prepare the VMWare deployment user setup and permissions.
Install virt-who
:
yum install virt-who -y
cp /etc/virt-who.d/template.conf /etc/virt-who.d/vcenter.conf
Set up the file content using the following example:
[vcenter]
type=esx
server=vcenter1.example.com
username=domain\vmware_read_only_username
encrypted_password=bd257f93d@482B76e6390cc54aec1a4d
owner=1234567
hypervisor_id=hostname
filter_hosts=esx1.example.com, esx2.example.com
Parameter | Description
---|---
[vcenter] | Name of the vCenter data center.
type | Specifies the connection type. The esx type defines a connection to the VMware vCenter Server.
server | The FQDN of the vCenter Server.
username | The virt-who user name with read-only access to the vCenter Server.
encrypted_password | The virt-who user password in the encrypted format.
owner | The organization that the hypervisors belong to.
hypervisor_id | Specifies how to identify the hypervisors. Use a host name to provide meaningful host names to the Subscription Management. Alternatively, use uuid or hwuuid.
filter_hosts | List of hypervisors that never run RHEL VMs. Such hypervisors do not have to be reported by virt-who.
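The encrypted_password value is typically generated with the virt-who-password utility that ships with the virt-who package, for example:
# Prompts for the plain-text password and prints the encrypted string
# to use as encrypted_password in /etc/virt-who.d/vcenter.conf
virt-who-password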
Remove the RHEL subscription from the node:
subscription-manager remove --all
subscription-manager unregister
subscription-manager clean
Shut down the VM.
Create an OVF template from the VM.
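How you create the OVF template depends on your tooling. As an illustration only, the following sketch uses the open source govc CLI (not part of Container Cloud) to power off the VM and export it as an OVF package; the VM name and output directory are placeholders:
# Assumes govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are exported
govc vm.power -off rhel-7.8-template
govc export.ovf -vm rhel-7.8-template ./rhel-7.8-ovf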
Now, proceed to Bootstrap a management cluster.
After you bootstrap a management cluster of the required cloud provider type, you can optionally deploy an additional regional cluster of the same or different provider type. Perform this procedure if you wish to operate managed clusters across clouds from a single Mirantis Container Cloud management plane.
Regional cluster provider | Bare metal | AWS | OpenStack | vSphere (Tech Preview)
---|---|---|---|---
Bare metal management cluster | ✗ | ✗ | ✓ | ✗
AWS management cluster | ✗ | ✓ | ✓ | ✓
OpenStack management cluster | ✗ | ✗ | ✓ | ✗
vSphere management cluster (Tech Preview) | ✗ | ✗ | ✗ | ✓
Multi-regional deployment enables you to create managed clusters of several provider types using one management cluster. For example, you can bootstrap an AWS-based management cluster and deploy an OpenStack-based regional cluster on this management cluster. Such a setup enables the creation of both OpenStack-based and AWS-based managed clusters with Kubernetes deployments.
Note
The integration of baremetal-based support for deploying additional regional clusters is in the final development stage and will be announced separately in one of the upcoming Mirantis Container Cloud releases.
This section describes how to deploy an additional OpenStack, AWS, or VMWare vSphere-based regional cluster on an existing management cluster.
If you want to deploy AWS-based managed clusters of different configurations, deploy an additional regional cluster with specific settings that differ from the AWS-based management cluster configuration.
To deploy an AWS-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Prepare the AWS configuration for the new regional cluster:
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://ec2.amazonaws.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Change the directory to the kaas-bootstrap
folder.
In templates/aws/machines.yaml.template
,
modify the spec:providerSpec:value
section
by substituting the ami:id
parameter with the corresponding value
for Ubuntu 18.04 from the required AWS region. For example:
spec:
providerSpec:
value:
apiVersion: aws.kaas.mirantis.com/v1alpha1
kind: AWSMachineProviderSpec
instanceType: c5d.2xlarge
ami:
id: ami-033a0960d9d83ead0
Also, modify other parameters as required.
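If you need to look up a current Ubuntu 18.04 AMI ID for your region, you can, for example, query it with the AWS CLI. The owner ID 099720109477 is Canonical, and the name filter matches the Ubuntu Bionic images referenced in this template:
# Print the most recent Ubuntu 18.04 AMI published by Canonical in the selected region
aws ec2 describe-images \
  --region us-east-2 \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-*" \
  --query 'reverse(sort_by(Images, &CreationDate))[0].{Name: Name, ImageId: ImageId}' \
  --output table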
Optional. In templates/aws/cluster.yaml.template
, modify the default
configuration of the AWS instance types and AMI IDs for further creation
of managed clusters:
providerSpec:
value:
...
kaas:
...
regional:
- provider: aws
helmReleases:
- name: aws-credentials-controller
values:
config:
allowedInstanceTypes:
minVCPUs: 8
# in MiB
minMemory: 16384
# in GB
minStorage: 120
supportedArchitectures:
- "x86_64"
filters:
- name: instance-storage-info.disk.type
values:
- "ssd"
allowedAMIs:
-
- name: name
values:
- "ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20200729"
- name: owner-id
values:
- "099720109477"
Also, modify other parameters as required.
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/aws/cluster.yaml.template, add the ntp:servers section with the list of required server names:
spec:
  ...
  providerSpec:
    value:
      kaas:
        ...
        regional:
        - helmReleases:
          - name: aws-provider
            values:
              config:
                lcm:
                  ...
                  ntp:
                    servers:
                    - 0.pool.ntp.org
  ...
Generate the AWS Access Key ID with Secret Access Key
for the bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
user and select the AWS default region name.
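For example, if the user already exists in your AWS account, a sketch of generating the key pair with the AWS CLI:
# Create an access key pair for the bootstrapper user
aws iam create-access-key \
  --user-name bootstrapper.cluster-api-provider-aws.kaas.mirantis.com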
Export the AWS bootstrapper.cluster-api-provider-aws.kaas.mirantis.com
user credentials that were created in the previous step:
export KAAS_AWS_ENABLED=true
export AWS_SECRET_ACCESS_KEY=XXXXXXX
export AWS_ACCESS_KEY_ID=XXXXXXX
export AWS_DEFAULT_REGION=us-east-2
Export the following parameters:
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName>
file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
# | Description
---|---
1 | Prepare the bootstrap cluster for the new regional cluster.
2 | Load the updated Container Cloud CRDs to the management cluster.
3 | Connect to each machine of the management cluster through SSH.
4 | Wait for the LCM Agent on the management cluster machines to become ready.
5 | Load the required objects to the new regional cluster.
6 | Forward the bootstrap cluster endpoint to helm-controller.
7 | Wait for all CRDs to be available and verify the objects created using these CRDs.
8 | Pivot the cluster API stack to the regional cluster.
9 | Switch the LCM agent from the bootstrap cluster to the regional one.
10 | Wait for the Container Cloud components to start on the regional cluster.
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate a managed cluster.
You can deploy an additional regional OpenStack-based cluster on top of the AWS, bare metal, or OpenStack management cluster to create managed clusters of several provider types if required.
To deploy an OpenStack-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Prepare the OpenStack configuration for a new regional cluster:
Log in to the OpenStack Horizon.
In the Project section, select API Access.
In the right-side drop-down menu Download OpenStack RC File, select OpenStack clouds.yaml File.
Add the downloaded clouds.yaml
file to the directory with the
bootstrap.sh
script.
In clouds.yaml
, add the password
field with your OpenStack
password under the clouds/openstack/auth
section.
Example:
clouds:
openstack:
auth:
auth_url: https://auth.openstack.example.com:5000/v3
username: your_username
password: your_secret_password
project_id: your_project_id
user_domain_name: your_user_domain_name
region_name: RegionOne
interface: public
identity_api_version: 3
Verify access to the target cloud endpoint from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://auth.openstack.example.com:5000/v3"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
Configure the cluster and machines metadata:
Change the directory to the kaas-bootstrap
folder.
In templates/machines.yaml.template
,
modify the spec:providerSpec:value
section for 3 control plane nodes
marked with the cluster.sigs.k8s.io/control-plane
label
by substituting the flavor
and image
parameters
with the corresponding values of the control plane nodes in the related
OpenStack cluster. For example:
spec: &cp_spec
providerSpec:
value:
apiVersion: "openstackproviderconfig.k8s.io/v1alpha1"
kind: "OpenstackMachineProviderSpec"
flavor: kaas.minimal
image: bionic-server-cloudimg-amd64-20190612
Also, modify other parameters as required.
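To look up the flavor and image names available in your OpenStack project, you can, for example, use the OpenStack CLI:
# List the available flavors and images to pick the values for the template
openstack flavor list
openstack image list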
Modify the templates/cluster.yaml.template
parameters to fit your
deployment. For example, add the corresponding values for cidrBlocks
in the spec:clusterNetwork:services
section.
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/cluster.yaml.template
, add the ntp:servers
section
with the list of required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: openstack-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Available since 2.5.0 Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables
to bootstrap the regional cluster using proxy:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable | Format
---|---
HTTP_PROXY, HTTPS_PROXY | http://proxy.example.com:port or http://user:password@proxy.example.com:port
NO_PROXY | Comma-separated list of IP addresses or domain names
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Clean up the environment configuration:
If you are deploying the regional cluster on top of a baremetal-based management cluster, unset the following parameters:
unset KAAS_BM_ENABLED KAAS_BM_FULL_PREFLIGHT KAAS_BM_PXE_IP \
KAAS_BM_PXE_MASK KAAS_BM_PXE_BRIDGE KAAS_BM_BM_DHCP_RANGE \
TEMPLATES_DIR
If you are deploying the regional cluster on top of an AWS-based
management cluster, unset the KAAS_AWS_ENABLED
parameter:
unset KAAS_AWS_ENABLED
Export the following parameters:
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName>
file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
# | Description
---|---
1 | Prepare the bootstrap cluster for the new regional cluster.
2 | Load the updated Container Cloud CRDs to the management cluster.
3 | Connect to each machine of the management cluster through SSH.
4 | Wait for the LCM Agent on the management cluster machines to become ready.
5 | Load the required objects to the new regional cluster.
6 | Forward the bootstrap cluster endpoint to helm-controller.
7 | Wait for all CRDs to be available and verify the objects created using these CRDs.
8 | Pivot the cluster API stack to the regional cluster.
9 | Switch the LCM agent from the bootstrap cluster to the regional one.
10 | Wait for the Container Cloud components to start on the regional cluster.
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate a managed cluster.
Caution
This feature is available as Technology Preview. Use such configuration for testing and evaluation purposes only. For details about the Mirantis Technology Preview support scope, see the Preface section of this guide.
You can deploy an additional regional VMWare vSphere-based cluster on top of the AWS or vSphere management cluster to create managed clusters with different configurations if required.
To deploy a vSphere-based regional cluster:
Log in to the node where you bootstrapped a management cluster.
Prepare the vSphere configuration for the new regional cluster:
Verify access to the target vSphere cluster from Docker. For example:
docker run --rm alpine sh -c "apk add --no-cache curl; \
curl https://vsphere.server.com"
The system output must contain no error records. In case of issues, follow the steps provided in Troubleshooting.
In templates/vsphere/rhellicenses.yaml.template
,
set the user name and password of your Red Hat Customer Portal account
associated with your RHEL license for Virtual Datacenters.
Optionally, specify the subscription allocation pools to use for the RHEL
subscriptions activation. If you leave the pool field empty,
subscription-manager
will automatically select the licenses for
machines.
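For illustration only, a filled-in template may look similar to the following sketch. The exact field names and apiVersion are defined by the template shipped with your Container Cloud release, so treat this as an assumption and keep the structure of the provided file:
# Hypothetical example; field names may differ from your template
apiVersion: kaas.mirantis.com/v1alpha1
kind: RHELLicense
metadata:
  name: kaas-mgmt-rhel-license
spec:
  username: <RedHatPortalUserName>
  password:
    value: <RedHatPortalPassword>
  poolIDs:
  - <optionalSubscriptionPoolID>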
Modify the templates/vsphere/cluster.yaml.template
parameters
to fit your deployment.
Required parameters:
Parameter | Description
---|---
SET_VSPHERE_DATASTORE | vSphere datastore name. You can use different datastores for vSphere Cluster API and vSphere Cloud Provider.
SET_VSPHERE_MACHINES_FOLDER | Path to a folder where the cluster machines metadata will be stored.
SET_VSPHERE_NETWORK_PATH | Path to a network for cluster machines.
SET_VSPHERE_RESOURCE_POOL_PATH | Path to a resource pool in which VMs will be created.
Note
The passwordSalt
and passwordHash
values for the IAM
roles are automatically re-generated during the IAM
configuration described below in this procedure.
Available since 2.5.0 Optional. Configure the regional NTP server parameters to be applied to all machines of regional and managed clusters in the specified region.
In templates/vsphere/cluster.yaml.template
, add the ntp:servers
section with the list of required server names:
spec:
...
providerSpec:
value:
kaas:
...
regional:
- helmReleases:
- name: vsphere-provider
values:
config:
lcm:
...
ntp:
servers:
- 0.pool.ntp.org
...
Modify templates/vsphere/vsphere-config.yaml.template
:
Parameter | Description
---|---
SET_VSPHERE_SERVER | IP address or FQDN of the vCenter Server.
SET_VSPHERE_SERVER_PORT | Port of the vCenter Server. Leave empty to use the default port.
SET_VSPHERE_DATACENTER | vSphere data center name.
SET_VSPHERE_SERVER_INSECURE | Flag that controls validation of the vSphere Server certificate.
SET_VSPHERE_CAPI_PROVIDER_USERNAME | vSphere Cluster API provider user name. For details, see Prepare the VMWare deployment user setup and permissions.
SET_VSPHERE_CAPI_PROVIDER_PASSWORD | vSphere Cluster API provider user password.
SET_VSPHERE_CLOUD_PROVIDER_USERNAME | vSphere Cloud Provider deployment user name. For details, see Prepare the VMWare deployment user setup and permissions.
SET_VSPHERE_CLOUD_PROVIDER_PASSWORD | vSphere Cloud Provider deployment user password.
Prepare the OVF template as described in Prepare the OVF template.
In templates/vsphere/machines.yaml.template
:
Define SSH_USER_NAME
. The default SSH user name is cloud-user
.
Define SET_VSPHERE_TEMPLATE_PATH
prepared in the previous step.
Modify other parameters as required.
spec:
providerSpec:
value:
apiVersion: vsphere.cluster.k8s.io/v1alpha1
kind: VsphereMachineProviderSpec
sshUserName: SSH_USER_NAME
rhelLicense: kaas-mgmt-rhel-license
network:
devices:
- dhcp4: true
dhcp6: false
template: SET_VSPHERE_TEMPLATE_PATH
Available since 2.5.0, Technology Preview Optional.
If you require all Internet access to go through a proxy server,
in bootstrap.env
, add the following environment variables
to bootstrap the regional cluster using proxy:
HTTP_PROXY
HTTPS_PROXY
NO_PROXY
Example snippet:
export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://user:pass@proxy.example.com:3128
export NO_PROXY=172.18.10.0,registry.internal.lan
The following variable formats are accepted:
Variable | Format
---|---
HTTP_PROXY, HTTPS_PROXY | http://proxy.example.com:port or http://user:password@proxy.example.com:port
NO_PROXY | Comma-separated list of IP addresses or domain names
For the list of Mirantis resources and IP addresses to be accessible from the Container Cloud clusters, see Reference Architecture: Hardware and system requirements.
Export the following parameters:
export KAAS_VSPHERE_ENABLED=true
export KUBECONFIG=<pathToMgmtClusterKubeconfig>
export REGIONAL_CLUSTER_NAME=<newRegionalClusterName>
export REGION=<NewRegionName>
Substitute the parameters enclosed in angle brackets with the corresponding values of your cluster.
Run the regional cluster bootstrap script:
./bootstrap.sh deploy_regional
Note
When the bootstrap is complete, obtain and save in a secure location
the kubeconfig-<regionalClusterName>
file
located in the same directory as the bootstrap script.
This file contains the admin credentials for the regional cluster.
# | Description
---|---
1 | Prepare the bootstrap cluster for the new regional cluster.
2 | Load the updated Container Cloud CRDs to the management cluster.
3 | Connect to each machine of the management cluster through SSH.
4 | Wait for the LCM Agent on the management cluster machines to become ready.
5 | Load the required objects to the new regional cluster.
6 | Forward the bootstrap cluster endpoint to helm-controller.
7 | Wait for all CRDs to be available and verify the objects created using these CRDs.
8 | Pivot the cluster API stack to the regional cluster.
9 | Switch the LCM agent from the bootstrap cluster to the regional one.
10 | Wait for the Container Cloud components to start on the regional cluster.
Now, you can proceed with deploying the managed clusters of supported provider types as described in Create and operate a managed cluster.
This section provides solutions to the issues that may occur while deploying a management cluster.
If the bootstrap script fails during the deployment process, collect and inspect the bootstrap and management cluster logs.
To collect the bootstrap logs:
Log in to your local machine where the bootstrap script was executed.
Run the following command:
./bootstrap.sh collect_logs
The logs are collected in the directory where the bootstrap script is located.
The Container Cloud logs structure in <output_dir>/<cluster_name>/
is as follows:
/events.log
- human-readable table that contains information
about the cluster events
/system
- system logs
/system/<machine_name>/ucp
- Mirantis Kubernetes Engine (MKE) logs
/objects/cluster
- logs of the non-namespaced Kubernetes objects
/objects/namespaced
- logs of the namespaced Kubernetes objects
/objects/namespaced/<namespaceName>/core/pods
- pods logs from a specified Kubernetes namespace
/objects/namespaced/<namespaceName>/core/pods/<containerName>.prev.log
- logs of the pods from a specified Kubernetes namespace
that were previously removed or failed
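To quickly locate failures in the collected logs, you can, for example, search the output directory for error records:
# Search the collected logs for error records
grep -rin "error" <output_dir>/<cluster_name>/ | less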
Depending on the type of issue found in logs, apply the corresponding fixes.
For example, if you detect errors indicating that the LoadBalancer is in the ERROR state during the bootstrap of an OpenStack-based management cluster, contact your system administrator to fix the issue.
To troubleshoot other issues, refer to the corresponding section
in Troubleshooting.
If you have issues related to the DNS settings, the following error message may occur:
curl: (6) Could not resolve host
The issue may occur if a VPN is used to connect to the cloud or a local DNS forwarder is set up.
The workaround is to change the default DNS settings for Docker:
Log in to your local machine.
Identify your internal or corporate DNS server address:
systemd-resolve --status
Create or edit /etc/docker/daemon.json
by specifying your DNS address:
{
"dns": ["<YOUR_DNS_ADDRESS>"]
}
Restart the Docker daemon:
sudo systemctl restart docker
If you have issues related to the default network address configuration, cURL either hangs or the following error occurs:
curl: (7) Failed to connect to xxx.xxx.xxx.xxx port xxxx: Host is unreachable
The issue may occur because the default Docker network address
172.17.0.0/16
overlaps with your cloud address or other addresses
of the network configuration.
Workaround:
Log in to your local machine.
Verify routing to the IP addresses of the target cloud endpoints:
Obtain the IP address of your target cloud. For example:
nslookup auth.openstack.example.com
Example of system response:
Name: auth.openstack.example.com
Address: 172.17.246.119
Verify that this IP address is not routed through docker0
but
through any other interface, for example, ens3
:
ip r get 172.17.246.119
Example of the system response if the routing is configured correctly:
172.17.246.119 via 172.18.194.1 dev ens3 src 172.18.1.1 uid 1000
cache
Example of the system response if the routing is configured incorrectly:
172.17.246.119 via 172.18.194.1 dev docker0 src 172.18.1.1 uid 1000
cache
If the routing is incorrect, change the IP address of the default Docker bridge:
Create or edit /etc/docker/daemon.json
by adding the "bip"
option:
{
"bip": "192.168.91.1/24"
}
Restart the Docker daemon:
sudo systemctl restart docker
If you execute the bootstrap.sh
script from an OpenStack VM
that is running on the OpenStack environment used for bootstrapping
the management cluster, the following error messages, which may be related to an MTU settings discrepancy, can occur:
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to server:port
Failed to check if machine "<machine_name>" exists:
failed to create provider client ... TLS handshake timeout
To identify whether the issue is MTU-related:
Log in to the OpenStack VM in question.
Compare the MTU outputs for the docker0
and ens3
interfaces:
ip addr
Example of system response:
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500...
...
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450...
If the MTU output values differ for docker0
and ens3
, proceed
with the workaround below. Otherwise, inspect the logs further
to identify the root cause of the error messages.
Workaround:
In your OpenStack environment used for Mirantis Container
Cloud, log in to any machine with CLI access to OpenStack.
For example, you can create a new Ubuntu VM (separate from the bootstrap VM)
and install the python-openstackclient
package on it.
Change the VXLAN MTU size for the VM to the required value depending on your network infrastructure and considering your physical network configuration, such as Jumbo frames.
openstack network set --mtu <YOUR_MTU_SIZE> <network-name>
Stop and start the VM in Nova.
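For example, using the OpenStack CLI, where the network and VM names are placeholders:
# Verify the new MTU value of the network
openstack network show -c mtu <network-name>
# Restart the VM so that it picks up the new MTU
openstack server stop <vm-name>
openstack server start <vm-name>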
Log in to the bootstrap VM dedicated for the management cluster.
Re-execute the bootstrap.sh
script.
This section describes how to configure authentication for Mirantis Container Cloud depending on the external identity provider type integrated into your deployment.
If you integrate LDAP for IAM into Mirantis Container Cloud,
add the required LDAP configuration to cluster.yaml.template
during the bootstrap of the management cluster.
Note
The example below defines the recommended non-anonymous
authentication type. If you require anonymous authentication,
replace the following parameters with authType: "none"
:
authType: "simple"
bindCredential: ""
bindDn: ""
To configure LDAP for IAM:
Select from the following options:
For a baremetal-based management cluster, open the
templates/bm/cluster.yaml.template
file for editing.
For an OpenStack management cluster, open the
templates/cluster.yaml.template
file for editing.
For an AWS-based management cluster, open the
templates/aws/cluster.yaml.template
file for editing.
Configure the keycloak:userFederation:providers:
and keycloak:userFederation:mappers:
sections as required:
Note
Verify that the userFederation
section is located
on the same level as the initUsers
section.
Verify that all attributes set in the mappers
section
are defined for users in the specified LDAP system.
Missing attributes may cause authorization issues.
spec:
providerSpec:
value:
kaas:
management:
helmReleases:
- name: iam
values:
keycloak:
userFederation:
providers:
- displayName: "<LDAP_NAME>"
providerName: "ldap"
priority: 1
fullSyncPeriod: -1
changedSyncPeriod: -1
config:
pagination: "true"
debug: "false"
searchScope: "1"
connectionPooling: "true"
usersDn: "<DN>" # "ou=People, o=<ORGANIZATION>, dc=<DOMAIN_COMPONENT>"
userObjectClasses: "inetOrgPerson,organizationalPerson"
usernameLDAPAttribute: "uid"
rdnLDAPAttribute: "uid"
vendor: "ad"
editMode: "READ_ONLY"
uuidLDAPAttribute: "uid"
connectionUrl: "ldap://<LDAP_DNS>"
syncRegistrations: "false"
authType: "simple"
bindCredential: ""
bindDn: ""
mappers:
- name: "username"
federationMapperType: "user-attribute-ldap-mapper"
federationProviderDisplayName: "<LDAP_NAME>"
config:
ldap.attribute: "uid"
user.model.attribute: "username"
is.mandatory.in.ldap: "true"
read.only: "true"
always.read.value.from.ldap: "false"
- name: "full name"
federationMapperType: "full-name-ldap-mapper"
federationProviderDisplayName: "<LDAP_NAME>"
config:
ldap.full.name.attribute: "cn"
read.only: "true"
write.only: "false"
- name: "last name"
federationMapperType: "user-attribute-ldap-mapper"
federationProviderDisplayName: "<LDAP_NAME>"
config:
ldap.attribute: "sn"
user.model.attribute: "lastName"
is.mandatory.in.ldap: "true"
read.only: "true"
always.read.value.from.ldap: "true"
- name: "email"
federationMapperType: "user-attribute-ldap-mapper"
federationProviderDisplayName: "<LDAP_NAME>"
config:
ldap.attribute: "mail"
user.model.attribute: "email"
is.mandatory.in.ldap: "false"
read.only: "true"
always.read.value.from.ldap: "true"
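To verify that the attributes referenced by the mappers (uid, cn, sn, and mail) are populated for your users, you can, for example, query the LDAP server with the OpenLDAP client tools before applying the configuration:
# Query one user entry and print only the attributes used by the mappers
ldapsearch -x -H ldap://<LDAP_DNS> \
  -D "<bindDn>" -w "<bindCredential>" \
  -b "<DN>" "(uid=<username>)" uid cn sn mail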
Now, return to the bootstrap instruction depending on the provider type of your management cluster.
Caution
The instruction below applies to the DNS-based management clusters. If you bootstrap a non-DNS-based management cluster, configure Google OAuth IdP for Keycloak after bootstrap using the official Keycloak documentation.
If you integrate the Google OAuth external identity provider for IAM into
Mirantis Container Cloud, create the authorization credentials for IAM
in your Google OAuth account and configure cluster.yaml.template
during the bootstrap of the management cluster.
To configure Google OAuth IdP for IAM:
Create Google OAuth credentials for IAM:
Log in to https://console.developers.google.com.
Navigate to Credentials.
In the APIs Credentials menu, select OAuth client ID.
In the window that opens:
In the Application type menu, select Web application.
In the Authorized redirect URIs field, type in
<keycloak-url>/auth/realms/iam/broker/google/endpoint
,
where <keycloak-url>
is the corresponding DNS address.
Press Enter to add the URI.
Click Create.
A page with your client ID and client secret opens. Save these credentials for further usage.
Log in to the bootstrap node.
Select from the following options:
For a baremetal-based management cluster, open the
templates/bm/cluster.yaml.template
file for editing.
For an OpenStack management cluster, open the
templates/cluster.yaml.template
file for editing.
For an AWS-based management cluster, open the
templates/aws/cluster.yaml.template
file for editing.
In the keycloak:externalIdP:
section, add the following snippet
with your credentials created in previous steps:
keycloak:
externalIdP:
google:
enabled: true
config:
clientId: <Google_OAuth_client_ID>
clientSecret: <Google_OAuth_client_secret>
Now, return to the bootstrap instruction depending on the provider type of your management cluster.