Configure log rotation using logrotate


This feature is available starting from the MCP 2019.2.5 maintenance update. Before enabling the feature, follow the steps described in Apply maintenance updates.

This section describes how to configure log rotation for selected services using the logrotate utility.

The services that support the log rotation configuration include:

  • OpenStack services

    Aodh, Barbican, Ceilometer, Cinder, Designate, Glance, Gnocchi, Heat, Keystone, Neutron, Nova, Octavia, Ironic

  • Other services

    atop, Backupninja, Ceph, Elasticsearch, Galera (MySQL), GlusterFS, HAProxy, libvirt, MAAS, MongoDB, NGINX, Open vSwitch, PostgreSQL, RabbitMQ, Redis, Salt, Telegraf

MCP supports configuring the rotation interval and the number of rotations. Configuration of other logrotate options, such as postrotate and prerotate actions, is not supported.
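In pillar terms, these two options live under the logrotate key of the target service's pillar path. A minimal sketch with placeholder names; replace `<service>` and `<role>` with a pillar path (target) from the Logrotate configuration table below, for example aodh:server:

```yaml
# Hypothetical placeholders: <service>:<role> stands for a pillar
# path (target) from the Logrotate configuration table.
parameters:
  <service>:
    <role>:
      logrotate:
        interval: weekly   # one of: daily, weekly, monthly, yearly
        rotate: 4          # number of rotated logs to keep
```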

To configure log rotation:

  1. Log in to the Salt Master node.

  2. Open the cluster level of your deployment model.

  3. Configure the interval and rotate parameters for the target service as required:

    • logrotate:interval

      Defines the rotation time interval. Available values are daily, weekly, monthly, and yearly.

    • logrotate:rotate

      Defines the number of rotated logs to keep. The parameter expects an integer value.

    Use the Logrotate configuration table below to determine where to add the log rotation configuration.

    Logrotate configuration

    | Service | Pillar path (target) | File path |
    | --- | --- | --- |
    | Aodh | aodh:server | openstack/telemetry.yml |
    | atop | linux:system:atop | The root file of the component [7] |
    | Backupninja | backupninja:client | infra/backup/client_common.yml |
    | Barbican | barbican:server | openstack/barbican.yml |
    | Ceilometer server [0] | ceilometer:server | openstack/telemetry.yml |
    | Ceilometer agent [0] | ceilometer:agent | openstack/compute/init.yml |
    | Ceph | ceph:common | ceph/common.yml |
    | Cinder controller [1] | cinder:controller | openstack/control.yml |
    | Cinder volume [1] | cinder:volume | openstack/control.yml |
    | Designate | designate:server | openstack/control.yml |
    | Elasticsearch server | elasticsearch:server | stacklight/log.yml |
    | Elasticsearch client | elasticsearch:client | stacklight/log.yml |
    | Galera (MySQL) master | galera:master | openstack/database/master.yml |
    | Galera (MySQL) slave | galera:slave | openstack/database/slave.yml |
    | Glance | glance:server | openstack/control.yml |
    | GlusterFS server [2] | glusterfs:server | The root file of the component [7] |
    | GlusterFS client [2] | glusterfs:client | The root file of the component [7] |
    | Gnocchi server | gnocchi:server | openstack/telemetry.yml |
    | Gnocchi client | gnocchi:client | openstack/control/init.yml |
    | HAProxy | haproxy:proxy | openstack/proxy.yml |
    | Heat | heat:server | openstack/control.yml |
    | Ironic (available since 2019.2.13) | ironic:api | openstack/baremetal.yml |
    | Keystone server | keystone:server | openstack/control.yml |
    | Keystone client | keystone:client | openstack/control/init.yml |
    | libvirt | nova:compute:libvirt [3] | openstack/compute/init.yml |
    | MAAS | maas:region | infra/maas.yml |
    | MongoDB | mongodb:server | stacklight/server.yml |
    | Neutron server | neutron:server | openstack/control.yml |
    | Neutron client | neutron:client | openstack/control/init.yml |
    | Neutron gateway | neutron:gateway | openstack/gateway.yml |
    | Neutron compute | neutron:compute | openstack/compute/init.yml |
    | NGINX | nginx:server | openstack/proxy.yml, stacklight/proxy.yml |
    | Nova controller | nova:controller | openstack/control.yml |
    | Nova compute | nova:compute | openstack/compute/init.yml |
    | Octavia manager [4] | octavia:manager | openstack/octavia_manager.yml |
    | Octavia client [4] | octavia:client | openstack/control.yml |
    | Open vSwitch | linux:network:openvswitch | infra/init.yml |
    | PostgreSQL server | postgresql:server (maas:region) [5] | infra/config/postgresql.yml (infra/maas.yml) |
    | PostgreSQL client | postgresql:client (maas:region) [5] | infra/config/postgresql.yml (infra/maas.yml) |
    | RabbitMQ | rabbitmq:server | openstack/message_queue.yml |
    | Redis | redis:server | openstack/telemetry.yml |
    | Salt master [6] | salt:master | infra/config/init.yml |
    | Salt minion [6] | salt:minion | The root file of the component [7] |
    | Telegraf | telegraf:agent | infra/init.yml, stacklight/server.yml |
    [0] If Ceilometer server and agent are specified on the same node, the server configuration is prioritized.

    [1] If Cinder controller and volume are specified on the same node, the controller configuration is prioritized.

    [2] If GlusterFS server and client are specified on the same node, the server configuration is prioritized.

    [3] Use nova:compute:libvirt as the pillar path, but only nova:compute as the target.

    [4] If Octavia manager and client are specified on the same node, the manager configuration is prioritized.

    [5] PostgreSQL is a dependency of MAAS. Configure PostgreSQL from the MAAS pillar only if the service has been installed as a dependency without the postgresql pillar defined. If the postgresql pillar is defined, configure it instead.

    [6] If the Salt Master and minion are specified on the same node, the master configuration is prioritized.

    [7] Depending on the nodes where you want to change the configuration, select their component's root file, for example, infra/init.yml, openstack/control/init.yml, cicd/init.yml, and so on.

    For example, to set log rotation for Aodh to keep logs for the last four weeks (28 rotations) with a daily rotation interval, add the following configuration to cluster/<cluster_name>/openstack/telemetry.yml:

        parameters:
          aodh:
            server:
              logrotate:
                interval: daily
                rotate: 28
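
    Similarly, for a service whose configuration belongs in the component root file (see footnote [7] above), such as atop, place the same two keys under that service's target pillar. A sketch, assuming the configuration is added to cluster/<cluster_name>/infra/init.yml:

```yaml
# Assumed placement: the infra component root file of the cluster model.
parameters:
  linux:
    system:
      atop:
        logrotate:
          interval: weekly  # rotate atop logs once a week
          rotate: 4         # keep four rotated logs (roughly one month)
```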
  4. Apply the logrotate state on the node with the target service:

    salt -C 'I@<target>' saltutil.sync_all
    salt -C 'I@<target>' state.sls logrotate

    For example:

    salt -C 'I@aodh:server' saltutil.sync_all
    salt -C 'I@aodh:server' state.sls logrotate