Warning
The DevOps Portal has been deprecated in the Q4'18 MCP release tagged with the 2019.2.0 Build ID.
If the Reclass model of your deployment does not include metadata for OSS services, you must define it in the Reclass model before proceeding with the deployment of the DevOps Portal.
To configure OSS services in the Reclass model:
In the init.yml file in the /srv/salt/reclass/classes/cluster/${_param:cluster_name}/cicd/control/ directory, define the required classes.
The following code snippet contains all services currently available. To configure your deployment for a specific use case, comment out the services that are not required:
classes:
# GlusterFS
- system.glusterfs.server.volume.devops_portal
- system.glusterfs.server.volume.elasticsearch
- system.glusterfs.server.volume.mongodb
- system.glusterfs.server.volume.postgresql
- system.glusterfs.server.volume.pushkin
- system.glusterfs.server.volume.rundeck
- system.glusterfs.server.volume.security_monkey
- system.glusterfs.client.volume.devops_portal
- system.glusterfs.client.volume.elasticsearch
- system.glusterfs.client.volume.mongodb
- system.glusterfs.client.volume.postgresql
- system.glusterfs.client.volume.pushkin
- system.glusterfs.client.volume.rundeck
- system.glusterfs.client.volume.security_monkey
# Docker services
- system.docker.swarm.stack.devops_portal
- system.docker.swarm.stack.elasticsearch
- system.docker.swarm.stack.janitor_monkey
- system.docker.swarm.stack.postgresql
- system.docker.swarm.stack.pushkin
- system.docker.swarm.stack.rundeck
- system.docker.swarm.stack.security_monkey
# Docker networks
- system.docker.swarm.network.runbook
# HAProxy
- system.haproxy.proxy.listen.oss.devops_portal
- system.haproxy.proxy.listen.oss.elasticsearch
- system.haproxy.proxy.listen.oss.janitor_monkey
- system.haproxy.proxy.listen.oss.mongodb
- system.haproxy.proxy.listen.oss.postgresql
- system.haproxy.proxy.listen.oss.pushkin
- system.haproxy.proxy.listen.oss.rundeck
- system.haproxy.proxy.listen.oss.security_monkey
# OSS tooling
- system.devops_portal.service.elasticsearch
- system.devops_portal.service.gerrit
- system.devops_portal.service.janitor_monkey
- system.devops_portal.service.jenkins
- system.devops_portal.service.pushkin
- system.devops_portal.service.rundeck
- system.devops_portal.service.security_monkey
# Rundeck
- system.rundeck.client.runbook
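For example, if your deployment does not require the Security Audit (Security Monkey) service, you could comment out its related classes across all sections. A partial, illustrative sketch:

```yaml
classes:
# GlusterFS
- system.glusterfs.server.volume.devops_portal
# - system.glusterfs.server.volume.security_monkey   # not required for this deployment
- system.glusterfs.client.volume.devops_portal
# - system.glusterfs.client.volume.security_monkey   # not required for this deployment
# Docker services
- system.docker.swarm.stack.devops_portal
# - system.docker.swarm.stack.security_monkey        # not required for this deployment
# HAProxy
- system.haproxy.proxy.listen.oss.devops_portal
# - system.haproxy.proxy.listen.oss.security_monkey  # not required for this deployment
# OSS tooling
# - system.devops_portal.service.security_monkey     # not required for this deployment
```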
In the init.yml file in the /srv/salt/reclass/classes/cluster/${_param:cluster_name}/cicd/control/ directory, define the required parameters:
For the Runbook Automation service, define:
parameters:
  _param:
    rundeck_runbook_public_key: <SSH_PUBLIC_KEY>
    rundeck_runbook_private_key: |
      <SSH_PRIVATE_KEY>
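The SSH key pair for the Runbook Automation service can be generated with standard OpenSSH tooling. A minimal sketch; the file name rundeck_runbook is an arbitrary choice:

```shell
# Generate a 4096-bit RSA key pair without a passphrase;
# the file name and location are arbitrary.
ssh-keygen -t rsa -b 4096 -N '' -C rundeck-runbook -f ./rundeck_runbook

# Paste the contents of rundeck_runbook.pub into
# rundeck_runbook_public_key, and the contents of
# rundeck_runbook into rundeck_runbook_private_key.
cat ./rundeck_runbook.pub
cat ./rundeck_runbook
```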
For the Security Audit service, define:
parameters:
  _param:
    security_monkey_openstack:
      username: <USERNAME>
      password: <PASSWORD>
      auth_url: <KEYSTONE_AUTH_ENDPOINT>
The configuration for the Security Audit service above will use the Administrator account to access OpenStack with the admin tenant. To configure the Security Audit service deployment for a specific tenant, define the security_monkey_openstack parameter as follows:
parameters:
  _param:
    security_monkey_openstack:
      os_account_id: <OS_ACCOUNT_ID>
      os_account_name: <OS_ACCOUNT_NAME>
      username: <USERNAME>
      password: <PASSWORD>
      auth_url: <KEYSTONE_AUTH_ENDPOINT>
      project_domain_name: <PROJ_DOMAIN_NAME>
      project_name: <PROJ_NAME>
      user_domain_name: <USER_DOMAIN_NAME>
Warning
The project_name: <PROJ_NAME> parameter specifies the project used for Keystone authentication in the Security Audit service. The service therefore does not limit monitoring to that project: it synchronizes issues from all projects in the current environment with the DevOps Portal, using the specified project only to authenticate.
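As an illustration only, a tenant-scoped configuration might look as follows; all values here are hypothetical placeholders, not defaults:

```yaml
parameters:
  _param:
    security_monkey_openstack:
      os_account_id: "0001"                      # hypothetical account ID
      os_account_name: oss-audit                 # hypothetical account name
      username: security-monkey-user             # hypothetical service user
      password: secret                           # replace with the real password
      auth_url: https://keystone.example.local:5000/v3  # hypothetical endpoint
      project_domain_name: Default
      project_name: oss-project                  # project used for authentication
      user_domain_name: Default
```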
For the Janitor service, define:
parameters:
  _param:
    janitor_monkey_openstack:
      username: <USERNAME>
      password: <PASSWORD>
      auth_url: <KEYSTONE_AUTH_ENDPOINT>
The configuration for the Janitor service above will use the Administrator account to access OpenStack with the admin tenant. To configure the Janitor service deployment for a specific tenant, define the janitor_monkey_openstack parameter as follows:
parameters:
  _param:
    janitor_monkey_openstack:
      username: <USERNAME>
      password: <PASSWORD>
      auth_url: <KEYSTONE_AUTH_ENDPOINT>
      project_domain_name: <PROJ_DOMAIN_NAME>
      project_name: <PROJ_NAME>
In the master.yml file in the /srv/salt/reclass/classes/cluster/${_param:cluster_name}/cicd/control/ directory, configure classes and parameters as required:
Define classes for the DevOps Portal and services as required:
classes:
# DevOps Portal
- service.devops_portal.config
# Elasticsearch
- system.elasticsearch.client
- system.elasticsearch.client.index.pushkin
- system.elasticsearch.client.index.janitor_monkey
# PostgreSQL
- system.postgresql.client.pushkin
- system.postgresql.client.rundeck
- system.postgresql.client.security_monkey
# Runbook Automation
- system.rundeck.server.docker
- system.rundeck.client
Define parameters for the Runbook Automation service, if required:
parameters:
  _param:
    rundeck_db_user: ${_param:rundeck_postgresql_username}
    rundeck_db_password: ${_param:rundeck_postgresql_password}
    rundeck_db_host: ${_param:cluster_vip_address}
    rundeck_postgresql_host: ${_param:cluster_vip_address}
    rundeck_postgresql_port: ${_param:haproxy_postgresql_bind_port}
Push all model changes to the dedicated project repository.
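Pushing the model changes follows the usual Git workflow. A sketch, assuming the cluster model lives in /srv/salt/reclass; the remote and branch names are placeholders that depend on your repository setup:

```shell
cd /srv/salt/reclass
git add classes/cluster
git commit -m "Add OSS services metadata for the DevOps Portal"
git push origin master   # remote and branch names depend on your setup
```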
Verify that the metadata of the Salt Master node contains all the required parameters:
reclass --nodeinfo=$SALT_MASTER_FQDN.$ENV_DOMAIN
salt '*' saltutil.refresh_pillar
salt '*' saltutil.sync_all
salt "$SALT_MASTER_FQDN.$ENV_DOMAIN" pillar.get devops_portal
For example, for the ci01 node in the cicd-lab-dev.local domain, run:
reclass --nodeinfo=ci01.cicd-lab-dev.local
salt '*' saltutil.refresh_pillar
salt '*' saltutil.sync_all
salt 'ci01.cicd-lab-dev.local' pillar.get devops_portal