Deploy OSS services manually

Warning

The DevOps Portal has been deprecated in the Q4'18 MCP release tagged with the 2019.2.0 Build ID.

Before you proceed with the services installation, verify that you have updated the Reclass model accordingly as described in Configure services in the Reclass model.

To deploy the DevOps Portal:

  1. Log in to the Salt Master node.

  2. Refresh Salt pillars and synchronize Salt modules on all Salt Minion nodes:

    salt '*' saltutil.refresh_pillar
    salt '*' saltutil.sync_all
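
    Optionally, verify beforehand that all Salt Minion nodes respond. This is an illustrative sanity check, not part of the original procedure:

    salt '*' test.ping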
    
  3. Set up GlusterFS:

    salt -b 1 -C 'I@glusterfs:server' state.sls glusterfs.server
    

    Note

    The -b option specifies the number of Salt Minion nodes to which the state is applied at once. Applying the state to one node at a time results in a more stable configuration while the peers are being established between the services.
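
    To additionally confirm that the peers have been established between the GlusterFS servers, you can run the following illustrative check:

    salt -C 'I@glusterfs:server' cmd.run 'gluster peer status'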

  4. Mount the GlusterFS volume on Docker Swarm nodes:

    salt -C 'I@glusterfs:client' state.sls glusterfs.client
    
  5. Verify that the volume is mounted on Docker Swarm nodes:

    salt '*' cmd.run 'systemctl -a|grep "GlusterFS File System"|grep -v mounted'
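
    The command above lists GlusterFS mount units that are loaded but not mounted, so an empty output means that all volumes are mounted. As an additional illustrative check, you can inspect the mounts directly:

    salt -C 'I@glusterfs:client' cmd.run 'df -h -t fuse.glusterfs'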
    
  6. Configure HAProxy and Keepalived for the load balancing of incoming traffic:

    salt -C "I@haproxy:proxy" state.sls haproxy,keepalived
    
  7. Set up Docker Swarm:

    salt -C 'I@docker:host' state.sls docker.host
    salt -C 'I@docker:swarm:role:master' state.sls docker.swarm
    salt -C 'I@docker:swarm:role:master' state.sls salt
    salt -C 'I@docker:swarm:role:master' mine.flush
    salt -C 'I@docker:swarm:role:master' mine.update
    salt -C 'I@docker:swarm' state.sls docker.swarm
    salt -C 'I@docker:swarm:role:master' cmd.run 'docker node ls'
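
    All Swarm members should be reported as Ready and Active. The output may look similar to the following; IDs, host names, and the number of nodes are illustrative:

    ID                         HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
    1a2b3c4d5e6f7g8h9i0j1k2l * mon01     Ready   Active        Leader
    2b3c4d5e6f7g8h9i0j1k2l3m   mon02     Ready   Active        Reachable
    3c4d5e6f7g8h9i0j1k2l3m4n   mon03     Ready   Active        Reachable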
    
  8. Configure the OSS services:

    salt -C 'I@devops_portal:config' state.sls devops_portal.config
    salt -C 'I@rundeck:server' state.sls rundeck.server
    

    Note

    In addition to setting up the server side for the Runbook Automation service, the rundeck.server state configures users and API tokens.

  9. Prepare aptly before deployment:

    salt -C 'I@aptly:publisher' saltutil.refresh_pillar
    salt -C 'I@aptly:publisher' state.sls aptly.publisher
    
  10. Apply the docker.client state:

    salt -C 'I@docker:swarm:role:master' state.sls docker.client
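
    The docker.client state deploys the OSS containers as Docker Swarm services. To watch them start, you can list the services on the Swarm master, as an illustrative check:

    salt -C 'I@docker:swarm:role:master' cmd.run 'docker service ls'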
    
  11. Prepare Jenkins for the deployment:

    salt -C 'I@docker:swarm' cmd.run 'mkdir -p /var/lib/jenkins'
    
  12. Identify the IP address on which HAProxy listens for stats:

    HAPROXY_STATS_IP=$(salt -C 'I@docker:swarm:role:master' \
           --out=newline_values_only \
           pillar.fetch haproxy:proxy:listen:stats:binds:address)
    

    Caution

    You will use the HAPROXY_STATS_IP variable to verify in the HAProxy service statistics that each Docker-based service you are going to deploy is up.
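
    To confirm that the variable has been populated, print it; a non-empty IP address is expected:

    echo "${HAPROXY_STATS_IP}"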

  13. Verify that aptly is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep aptly
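
    In the CSV output, the first two fields are the proxy and server names, and the status field reports UP once the back end passes its health checks. As an illustrative refinement, assuming the status is the 18th CSV field as in HAProxy 1.x, you can extract just these fields:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | \
           grep aptly | cut -d, -f1,2,18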
    
  14. Deploy aptly:

    salt -C 'I@aptly:server' state.sls aptly
    
  15. Verify that OpenLDAP is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep openldap
    
  16. Deploy OpenLDAP:

    salt -C 'I@openldap:client' state.sls openldap
    
  17. Verify that Gerrit is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep gerrit
    
  18. Deploy Gerrit:

    salt -C 'I@gerrit:client' state.sls gerrit
    

    Note

    The execution of the command above may hang for some time. If it does, terminate the command and re-apply the state.

  19. Verify that Jenkins is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep jenkins
    
  20. Deploy Jenkins:

    salt -C 'I@jenkins:client' state.sls jenkins
    

    Note

    The execution of the command above may hang for some time. If it does, terminate the command and re-apply the state.

  21. Verify that the PostgreSQL container has finished bootstrapping:

    docker service logs postgresql_db | grep "ready to accept"
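
    The expected output contains the standard PostgreSQL startup message, prefixed by Docker with the task and node names, for example:

    postgresql_db.1.<task-id>@<node>    | LOG:  database system is ready to accept connections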
    
  22. Verify that PostgreSQL is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep postgresql
    
  23. Initialize OSS services databases by setting up the PostgreSQL client:

    salt -C 'I@postgresql:client' state.sls postgresql.client
    

    The postgresql.client state initially returns errors due to cross-dependencies between the docker.stack and postgresql.client states. To complete the integration between the Push Notification and Security Audit services:

    1. Verify that the Push Notification service is UP in the HAProxy service statistics:

      curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep pushkin
      
    2. Re-apply the postgresql.client state:

      salt -C 'I@postgresql:client' state.sls postgresql.client
      
  24. Verify that Runbook Automation is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep rundeck
    
  25. Deploy Runbook Automation:

    salt -C 'I@rundeck:client' state.sls rundeck.client
    
  26. Verify that Elasticsearch is UP in the HAProxy service statistics:

    curl -s "http://${HAPROXY_STATS_IP}:9600/haproxy?stats;csv" | grep elasticsearch
    
  27. Deploy the Elasticsearch back end:

    salt -C 'I@elasticsearch:client' state.sls elasticsearch.client
    

    Due to the time needed for index creation, you may need to re-apply the state above.

  28. If required, generate documentation and set up a proxy to access it. The generated content reflects the current configuration of the deployed environment:

    salt -C 'I@sphinx:server' state.sls 'sphinx'
    # Execute 'salt-run' on salt-master
    salt-run state.orchestrate sphinx.orch.generate_doc || echo "Command execution failed"
    salt -C 'I@nginx:server' state.sls 'nginx'
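
    As a final illustrative check, which is not part of the original procedure, verify that nginx is running on the proxy nodes:

    salt -C 'I@nginx:server' service.status nginx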