Warning
The DevOps Portal has been deprecated in the Q4'18 MCP release tagged with the 2019.2.0 Build ID.
Before you proceed with the services installation, verify that you have updated the Reclass model accordingly as described in Configure services in the Reclass model.
To deploy the DevOps portal:
Log in to the Salt Master node.
Refresh Salt pillars and synchronize Salt modules on all Salt Minion nodes:
salt '*' saltutil.refresh_pillar
salt '*' saltutil.sync_all
Set up GlusterFS:
salt -b 1 -C 'I@glusterfs:server' state.sls glusterfs.server
Note
The -b option specifies the number of Salt Minion nodes to which the state is applied at a time. This results in a more stable configuration while peers are being established between the services.
Mount the GlusterFS volume on Docker Swarm nodes:
salt -C 'I@glusterfs:client' state.sls glusterfs.client
Verify that the volume is mounted on Docker Swarm nodes:
salt '*' cmd.run 'systemctl -a|grep "GlusterFS File System"|grep -v mounted'
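The grep chain above keeps GlusterFS mount units that systemd knows about and drops the healthy ones, so any output indicates a unit that is not mounted. A standalone sketch of the same filter, run here against a sample systemctl capture with hypothetical unit names instead of live salt output:

```shell
# Sample `systemctl -a` capture (hypothetical unit names); the live step
# runs the same filter on every minion via salt.
sample='srv-volumes.mount loaded active mounted GlusterFS File System
srv-other.mount loaded failed failed GlusterFS File System'

# Keep GlusterFS units, then drop the mounted ones; anything printed
# still needs attention.
echo "$sample" | grep "GlusterFS File System" | grep -v mounted
```

An empty result from this filter on all nodes means every GlusterFS volume is mounted.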
Configure HAProxy and Keepalived for the load balancing of incoming traffic:
salt -C "I@haproxy:proxy" state.sls haproxy,keepalived
Set up Docker Swarm:
salt -C 'I@docker:host' state.sls docker.host
salt -C 'I@docker:swarm:role:master' state.sls docker.swarm
salt -C 'I@docker:swarm:role:master' state.sls salt
salt -C 'I@docker:swarm:role:master' mine.flush
salt -C 'I@docker:swarm:role:master' mine.update
salt -C 'I@docker:swarm' state.sls docker.swarm
salt -C 'I@docker:swarm:role:master' cmd.run 'docker node ls'
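The final docker node ls call lets you inspect the Swarm membership by eye. If you prefer a non-interactive check, a small sketch like the following counts nodes whose STATUS column is not Ready; the output below is a sample capture with hypothetical host names, not live swarm data:

```shell
# Sample `docker node ls` output (hypothetical IDs and host names).
sample_nodes='ID       HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS
abc123   mon01      Ready    Active         Leader
def456   mon02      Ready    Active         Reachable
ghi789   mon03      Down     Active         Reachable'

# Skip the header row, then count rows whose third column is not Ready.
not_ready=$(echo "$sample_nodes" | awk 'NR > 1 && $3 != "Ready" { n++ } END { print n + 0 }')
echo "${not_ready} node(s) not Ready"
```

A zero count confirms that all Swarm nodes joined successfully.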
Configure the OSS services:
salt -C 'I@devops_portal:config' state.sls devops_portal.config
salt -C 'I@rundeck:server' state.sls rundeck.server
Note
In addition to setting up the server side for the Runbook Automation service, the rundeck.server state configures users and API tokens.
Prepare aptly before deployment:
salt -C 'I@aptly:publisher' saltutil.refresh_pillar
salt -C 'I@aptly:publisher' state.sls aptly.publisher
Apply the docker.client state:
salt -C 'I@docker:swarm:role:master' state.sls docker.client
Prepare Jenkins for the deployment:
salt -C 'I@docker:swarm' cmd.run 'mkdir -p /var/lib/jenkins'
Identify the IP address on which HAProxy listens for stats:
HAPROXY_STATS_IP=$(salt -C 'I@docker:swarm:role:master' \
--out=newline_values_only \
pillar.fetch haproxy:proxy:listen:stats:binds:address)
Caution
You will use the HAPROXY_STATS_IP variable to verify in the stats of the HAProxy service that each Docker-based service you are going to deploy is up.
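All of the UP checks below follow the same pattern: fetch the HAProxy stats CSV and grep for the service name. A reusable sketch of that check, which also inspects the status column rather than only matching the name, could look as follows. The CSV sample is inline so the snippet runs standalone; in a live environment you would feed it the output of `curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv"` instead:

```shell
# is_up: return 0 if the given proxy reports UP in a HAProxy stats CSV.
# Field 1 is the proxy name (pxname) and field 18 the status, per the
# standard HAProxy CSV export.
is_up() {
  local service="$1" csv="$2"
  echo "$csv" | awk -F',' -v svc="$service" \
    '$1 == svc && $18 == "UP" { found = 1 } END { exit !found }'
}

# Inline sample instead of a live stats endpoint.
sample_csv='# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredisp,status
aptly,BACKEND,0,0,0,0,,0,0,0,0,0,,0,0,0,0,UP'

if is_up aptly "$sample_csv"; then
  echo "aptly is UP"
fi
```

Such a helper can be reused for each of the services deployed in the steps below.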
Verify that aptly is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep aptly
Deploy aptly:
salt -C 'I@aptly:server' state.sls aptly
Verify that OpenLDAP is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep openldap
Deploy OpenLDAP:
salt -C 'I@openldap:client' state.sls openldap
Verify that Gerrit is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep gerrit
Deploy Gerrit:
salt -C 'I@gerrit:client' state.sls gerrit
Note
The execution of the command above may hang for some time. If it does, re-apply the state after the command terminates.
Verify that Jenkins is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep jenkins
Deploy Jenkins:
salt -C 'I@jenkins:client' state.sls jenkins
Note
The execution of the command above may hang for some time. If it does, re-apply the state after the command terminates.
Verify that the bootstrap of the PostgreSQL container has completed:
docker service logs postgresql_db | grep "ready to accept"
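If the bootstrap is still in progress, the grep above simply returns nothing. A polling wrapper can repeat the check until the message appears or a timeout is reached; in this sketch, fetch_logs is a stub standing in for `docker service logs postgresql_db` (and its message is a placeholder) so the snippet runs standalone:

```shell
# Stub for `docker service logs postgresql_db`; replace with the real
# command on a Swarm master node.
fetch_logs() {
  echo "PostgreSQL init process complete; ready to accept connections"
}

# Poll the logs until the bootstrap message appears or we run out of tries.
wait_for_bootstrap() {
  local tries="$1" i=0
  while [ "$i" -lt "$tries" ]; do
    if fetch_logs 2>/dev/null | grep -q "ready to accept"; then
      echo "PostgreSQL bootstrap finished"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for PostgreSQL bootstrap" >&2
  return 1
}

wait_for_bootstrap 30
```

The non-zero exit status on timeout makes the wrapper usable in scripts that should abort rather than proceed against an unready database.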
Verify that PostgreSQL is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep postgresql
Initialize OSS services databases by setting up the PostgreSQL client:
salt -C 'I@postgresql:client' state.sls postgresql.client
The postgresql.client state application returns errors on the first run due to cross-dependencies between the docker.stack and postgresql.client states. To configure the integration between the Push Notification and Security Audit services:
Verify that the Push Notification service is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep pushkin
Re-apply the postgresql.client state:
salt -C 'I@postgresql:client' state.sls postgresql.client
Verify that Runbook Automation is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep rundeck
Deploy Runbook Automation:
salt -C 'I@rundeck:client' state.sls rundeck.client
Verify that Elasticsearch is UP in the stats of the HAProxy service:
curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep elasticsearch
Deploy the Elasticsearch back end:
salt -C 'I@elasticsearch:client' state.sls elasticsearch.client
Due to index creation, you may need to re-apply the state above.
If required, generate the documentation and set up a proxy to access it. The generated content reflects the current configuration of the deployed environment:
salt -C 'I@sphinx:server' state.sls 'sphinx'
# Execute 'salt-run' on salt-master
salt-run state.orchestrate sphinx.orch.generate_doc || echo "Command execution failed"
salt -C 'I@nginx:server' state.sls 'nginx'