If you need to expand the size of VCP to handle a bigger data plane, you can add more controller nodes to your cloud environment. This section describes how to add a KVM node and an OpenStack controller VM to an existing environment.
The same procedure can be used to scale out the messaging, database, and other services. The corresponding additional parameters must be added before the deployment.
To add a controller node:
Add a physical node using MAAS as described in the MCP Deployment Guide: Provision physical nodes using MAAS.
Log in to the Salt Master node.
In the /classes/cluster/<cluster_name>/infra/init.yml file, define the basic parameters for the new KVM node:
parameters:
  _param:
    infra_kvm_node04_address: <IP ADDRESS ON CONTROL NETWORK>
    infra_kvm_node04_deploy_address: <IP ADDRESS ON DEPLOY NETWORK>
    infra_kvm_node04_storage_address: ${_param:infra_kvm_node04_address}
    infra_kvm_node04_public_address: ${_param:infra_kvm_node04_address}
    infra_kvm_node04_hostname: kvm<NUM>
    glusterfs_node04_address: ${_param:infra_kvm_node04_address}
  linux:
    network:
      host:
        kvm04:
          address: ${_param:infra_kvm_node04_address}
          names:
          - ${_param:infra_kvm_node04_hostname}
          - ${_param:infra_kvm_node04_hostname}.${_param:cluster_domain}
In the /classes/cluster/<cluster_name>/openstack/init.yml file, define the basic parameters for the new OpenStack controller node:
openstack_control_node<NUM>_address: <IP_ADDRESS_ON_CONTROL_NETWORK>
openstack_control_node<NUM>_hostname: <HOSTNAME>
openstack_database_node<NUM>_address: <DB_IP_ADDRESS>
openstack_database_node<NUM>_hostname: <DB_HOSTNAME>
openstack_message_queue_node<NUM>_address: <IP_ADDRESS_OF_MESSAGE_QUEUE>
openstack_message_queue_node<NUM>_hostname: <HOSTNAME_OF_MESSAGE_QUEUE>
Configuration example:
kvm04_control_ip: 10.167.4.244
kvm04_deploy_ip: 10.167.5.244
kvm04_name: kvm04
openstack_control_node04_address: 10.167.4.14
openstack_control_node04_hostname: ctl04
In the /classes/cluster/<cluster_name>/infra/config.yml file, define the configuration parameters for the KVM and OpenStack controller nodes. For example:
reclass:
  storage:
    node:
      infra_kvm_node04:
        name: ${_param:infra_kvm_node04_hostname}
        domain: ${_param:cluster_domain}
        classes:
        - cluster.${_param:cluster_name}.infra.kvm
        params:
          keepalived_vip_priority: 103
          salt_master_host: ${_param:reclass_config_master}
          linux_system_codename: xenial
          single_address: ${_param:infra_kvm_node04_address}
          deploy_address: ${_param:infra_kvm_node04_deploy_address}
          public_address: ${_param:infra_kvm_node04_public_address}
          storage_address: ${_param:infra_kvm_node04_storage_address}
      openstack_control_node04:
        name: ${_param:openstack_control_node04_hostname}
        domain: ${_param:cluster_domain}
        classes:
        - cluster.${_param:cluster_name}.openstack.control
        params:
          salt_master_host: ${_param:reclass_config_master}
          linux_system_codename: xenial
          single_address: ${_param:openstack_control_node04_address}
          keepalived_vip_priority: 104
          opencontrail_database_id: 4
          rabbitmq_cluster_role: slave
In the /classes/cluster/<cluster_name>/infra/kvm.yml file, define a new brick for GlusterFS on all KVM nodes and the salt:control section that later spawns the OpenStack controller VM. For example:
_param:
  cluster_node04_address: ${_param:infra_kvm_node04_address}
glusterfs:
  server:
    volumes:
      glance:
        replica: 4
        bricks:
        - ${_param:cluster_node04_address}:/srv/glusterfs/glance
      keystone-keys:
        replica: 4
        bricks:
        - ${_param:cluster_node04_address}:/srv/glusterfs/keystone-keys
      keystone-credential-keys:
        replica: 4
        bricks:
        - ${_param:cluster_node04_address}:/srv/glusterfs/keystone-credential-keys
salt:
  control:
    cluster:
      internal:
        domain: ${_param:cluster_domain}
        engine: virt
        node:
          ctl04:
            name: ${_param:openstack_control_node04_hostname}
            provider: ${_param:infra_kvm_node04_hostname}.${_param:cluster_domain}
            image: ${_param:salt_control_xenial_image}
            size: openstack.control
In the /classes/cluster/<cluster_name>/openstack/control.yml file, add the OpenStack controller node to the existing services, such as HAProxy, depending on your environment configuration.
Example of adding an HAProxy host for Glance:
_param:
  cluster_node04_hostname: ${_param:openstack_control_node04_hostname}
  cluster_node04_address: ${_param:openstack_control_node04_address}
haproxy:
  proxy:
    listen:
      glance_api:
        servers:
        - name: ${_param:cluster_node04_hostname}
          host: ${_param:cluster_node04_address}
          port: 9292
          params: check inter 10s fastinter 2s downinter 3s rise 3 fall 3
      glance_registry_api:
        servers:
        - name: ${_param:cluster_node04_hostname}
          host: ${_param:cluster_node04_address}
          port: 9191
          params: check
Refresh the deployed pillar data by applying the reclass.storage state:
salt '*cfg*' state.sls reclass.storage
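Optionally, confirm that the node definitions for the new nodes were generated. The check below is a sketch that assumes the default Reclass location used by MCP and the example node names kvm04 and ctl04:
ls /srv/salt/reclass/nodes/_generated/ | grep -E 'kvm04|ctl04'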
Verify that the target node has connectivity with the Salt Master node:
salt '*kvm<NUM>*' test.ping
Verify that the Salt Minion nodes are synchronized:
salt '*' saltutil.sync_all
On the Salt Master node, apply the Salt linux state for the added node:
salt -C 'I@salt:control' state.sls linux
On the added node, verify that salt-common and salt-minion have the 2017.7 version:
apt-cache policy salt-common
apt-cache policy salt-minion
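The Installed and Candidate fields in the output should report a 2017.7.x version. For example (the exact package build shown here is illustrative):
salt-common:
  Installed: 2017.7.8+ds-1
  Candidate: 2017.7.8+ds-1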
Note
If the commands above show a different version, follow the MCP Deployment guide: Install the correct versions of salt-common and salt-minion.
Perform the initial Salt configuration:
salt -C 'I@salt:control' state.sls salt.minion
Set up the network interfaces and the SSH access:
salt -C 'I@salt:control' state.sls linux.system.user,openssh,linux.network,ntp
Reboot the KVM node:
salt '*kvm<NUM>*' cmd.run 'reboot'
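Optionally, after the node boots, confirm that its Salt Minion has reconnected before proceeding, reusing the connectivity check from the earlier step:
salt '*kvm<NUM>*' test.ping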
On the Salt Master node, apply the libvirt state:
salt -C 'I@salt:control' state.sls libvirt
On the Salt Master node, create a controller VM for the added physical node:
salt -C 'I@salt:control' state.sls salt.control
Note
Salt virt takes the name of a virtual machine and registers the virtual machine on the Salt Master node. Once created, the instance picks up an IP address from the MAAS DHCP service, and its key appears as accepted on the Salt Master node.
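As an additional, optional check, you can verify on the KVM node that the controller VM has been defined and started by querying libvirt directly. The command below is a sketch using standard virsh tooling; replace <NUM> with the node number:
salt '*kvm<NUM>*' cmd.run 'virsh list --all'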
Verify that the controller VM has connectivity with the Salt Master node:
salt 'ctl<NUM>*' test.ping
Verify that the Salt Minion nodes are synchronized:
salt '*' saltutil.sync_all
Apply the Salt highstate for the controller VM:
salt -C 'I@salt:control' state.highstate
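If the added node also joins the message queue or database clusters, you may want to verify cluster membership after the highstate completes. The checks below are a sketch; which of them applies depends on the services that run on the node:
salt -C 'I@rabbitmq:server' cmd.run 'rabbitmqctl cluster_status'
salt -C 'I@galera:master' mysql.status | grep -A1 wsrep_cluster_size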
Verify that the added controller node is registered on the Salt Master node:
salt-key
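The command lists the minion keys known to the Salt Master node; the new KVM node and controller VM should appear under Accepted Keys. An illustrative fragment of the output (hostnames and domain are examples only):
Accepted Keys:
cfg01.example.local
ctl04.example.local
kvm04.example.local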
To reconfigure VCP VMs, run the openstack-deploy Jenkins pipeline with all the necessary installation parameters as described in MCP Deployment guide: Deploy an OpenStack environment.