Enable log forwarding to external destinations

Available since 2.23.0 and 2.23.1 for MOSK 23.1

By default, StackLight sends logs to OpenSearch. However, you can configure StackLight to add external Elasticsearch, OpenSearch, or syslog destinations as fluentd-logs outputs. In this case, StackLight sends logs both to the configured external servers and to OpenSearch.

Since Cluster releases 17.0.0, 16.0.0, and 14.1.0, you can also enable sending of Container Cloud service logs to Splunk using the syslog external output configuration. The feature is available in the Technology Preview scope.
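
To illustrate, sending logs to Splunk uses the same remote_syslog output type described below; the output name, host, and certificate path in this sketch are placeholders for your environment:

```yaml
logging:
  externalOutputs:
    splunk_syslog_output:        # arbitrary output name
      type: remote_syslog
      host: splunk.example.com   # placeholder Splunk syslog endpoint
      port: 514
      protocol: tcp
      tls: true
      ca_file: /etc/ssl/certs/splunk-syslog.pem
```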

Warning

Sending logs to Splunk requires that the target Splunk instance is reachable from the Container Cloud cluster. If a proxy is enabled on the cluster, the feature is not supported.

Prior to enabling the functionality, complete the following prerequisites:

  • Enable StackLight logging

  • Deploy an external server outside Container Cloud

  • Make sure that the Container Cloud proxy is not enabled, since it supports only HTTP(S) traffic

  • For Splunk, configure the server to accept logs:

    • Create an index and set its type to Event

    • Configure data input:

      • Open the required port

      • Configure the required protocol (TCP/UDP)

      • Configure connection to the created index

To enable log forwarding to external destinations:

  1. Perform steps 1-2 as described in Configure StackLight.

  2. In the stacklight.values section of the opened manifest, configure the logging.externalOutputs parameters using the following table.

    Key

    Description

    Example values

    disabled (bool)

    Optional. Disables the output destination using disabled: true. If not set, defaults to disabled: false.

    true or false

    type (string)

    Required. Specifies the type of log destination. The following values are accepted: elasticsearch, opensearch, remote_syslog.

    remote_syslog

    level (string)

    Optional. Sets the least important severity level of log messages to send. For available values, which are defined in the severity_label field, see the logging.level description in Logging.

    warning

    plugin_log_level (string)

    Optional. Defaults to info. Sets the value of @log_level of the output plugin for a particular back end. For other available values, refer to the logging.level description in Logging.

    notice

    tag_exclude (string)

    Optional. Overrides tag_include. Specifies, by tag, the logs to exclude from the destination output. For example, to exclude all logs with the test tag, set tag_exclude: '/.*test.*/'.

    How to obtain tags for logs

    Select from the following options:

    • In the main OpenSearch output, use the logger field that equals the tag.

    • To determine the tag for logs of a particular Pod or container, follow the below order, with the first match winning:

      1. The value of the app Pod label. For example, for app=opensearch-master, use opensearch-master as the log tag.

      2. The value of the k8s-app Pod label.

      3. The value of the app.kubernetes.io/name Pod label.

      4. If a release_group Pod label exists and the component Pod label starts with app, use the value of the component label as the tag. Otherwise, the tag is the application label joined with the component label using a hyphen.

      5. The name of the container from which the log is taken.

    The values for tag_exclude and tag_include are placed into <match> directives of Fluentd and only accept regex types that are supported by the <match> directive of Fluentd. For details, refer to the Fluentd official documentation.

    '{fluentd-logs,systemd}'

    tag_include (string)

    Optional. Is overridden by tag_exclude. Specifies, by tag, the logs to include in the destination output. For example, to include all logs with the auth tag, set tag_include: '/.*auth.*/'.

    '/.*auth.*/'

    <pluginConfigOptions> (map)

    Configures plugin settings. Has a hierarchical structure. The first-level configuration parameters are dynamic, except for type, id, and log_level, which are reserved by StackLight. For available options, refer to the documentation of the required plugin. Mirantis does not set any default values for plugin configuration settings except the reserved ones.

    The second-level configuration options are predefined and limited to buffer (for any type of log destination) and format (for remote_syslog only). Inside the second-level configuration, the parameters are dynamic.

    Example of first-level configuration options:

    elasticsearch:
      ...
      tag_exclude: '{fluentd-logs,systemd}'
      host: elasticsearch-host
      port: 9200
      logstash_date_format: '%Y.%m.%d'
      logstash_format: true
      logstash_prefix: logstash
      ...
    

    Example of second-level configuration options:

    syslog:
      format:
        "@type": single_value
        message_key: message
    

    buffer (map)

    Configures buffering of events using the second-level configuration options. Applies to any type of log destination. Parameters are dynamic, except the following mandatory ones, which should not be modified:

    • type: file that sets the default buffer type

    • path: <pathToBufferFile> that sets the path to the buffer destination file

    • overflow_action: block that prevents Fluentd from crashing if the output destination is down

    For details about other mandatory and optional buffer parameters, see the Fluentd: Output Plugins documentation.

    Note

    To disable buffer without deleting it, use buffer.disabled: true.

    buffer:
      # disabled: false
      chunk_limit_size: 16m
      flush_interval: 15s
      flush_mode: interval
      overflow_action: block
    
    Example configuration for logging.externalOutputs:
    logging:
      externalOutputs:
        elasticsearch:
          # disabled: false
          type: elasticsearch
          level: info
          plugin_log_level: info
          tag_exclude: '{fluentd-logs,systemd}'
          host: elasticsearch-host
          port: 9200
          logstash_date_format: '%Y.%m.%d'
          logstash_format: true
          logstash_prefix: logstash
          ...
          buffer:
            # disabled: false
            chunk_limit_size: 16m
            flush_interval: 15s
            flush_mode: interval
            overflow_action: block
            ...
        opensearch:
          disabled: true
          type: opensearch
          level: info
          plugin_log_level: info
          tag_include: '/.*auth.*/'
          host: opensearch-host
          port: 9200
          logstash_date_format: '%Y.%m.%d'
          logstash_format: true
          logstash_prefix: logstash
          ...
          buffer:
            chunk_limit_size: 16m
            flush_interval: 15s
            flush_mode: interval
            overflow_action: block
            ...
        syslog:
          type: remote_syslog
          plugin_log_level: info
          level: info
          tag_include: '{iam-proxy,systemd}'
          host: remote-syslog.svc
          port: 514
          hostname: example-hostname
          packetSize: 1024
          protocol: udp
          tls: false
          buffer:
            disabled: true
          format:
            "@type": single_value
            message_key: message
            ...
        splunk_syslog_output:
          type: remote_syslog
          host: remote-splunk-syslog.svc
          port: 514
          protocol: tcp
          tls: true
          ca_file: /etc/ssl/certs/splunk-syslog.pem
          verify_mode: 0
          buffer:
            chunk_limit: 16MB
            total_limit: 128MB
      externalOutputSecretMounts:
      - secretName: syslog-pem
        mountPath: /etc/ssl/certs/splunk-syslog.pem

    Note

    • Mirantis recommends that you tune the packetSize parameter value to allow sending full log lines.

    • The hostname field in the remote syslog database is set based on the clusterId specified in the StackLight chart values. For example, if clusterId is ns/cluster/example-uid, the hostname becomes ns_cluster_example-uid. For details, see clusterId in StackLight configuration parameters.
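
    The tag_include and tag_exclude values are rendered into Fluentd <match> directives. As a rough, schematic sketch (the directive body below is illustrative, not the exact configuration StackLight generates), a pattern such as '{fluentd-logs,systemd}' corresponds to:

    ```
    <match {fluentd-logs,systemd}>
      @type remote_syslog
      # ... plugin options from the external output configuration ...
    </match>
    ```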

  3. Optional. Mount authentication secrets for the required external destination to Fluentd using logging.externalOutputSecretMounts. For the parameter options, see Secrets for external log outputs.

    Example command to create a secret:

    kubectl -n stacklight create secret generic elasticsearch-certs \
      --from-file=./ca.pem \
      --from-file=./client.pem \
      --from-file=./client.key
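
    The mounted files can then be referenced from the plugin options of the corresponding external output. The following is a hedged sketch that assumes the elasticsearch-certs secret above is mounted as a directory and that the output plugin accepts the fluent-plugin-elasticsearch TLS options ca_file, client_cert, and client_key:

    ```yaml
    logging:
      externalOutputs:
        elasticsearch:
          type: elasticsearch
          host: elasticsearch-host
          port: 9200
          scheme: https
          # Paths assume the secret files are mounted at the mountPath below
          ca_file: /etc/ssl/certs/elasticsearch/ca.pem
          client_cert: /etc/ssl/certs/elasticsearch/client.pem
          client_key: /etc/ssl/certs/elasticsearch/client.key
      externalOutputSecretMounts:
      - secretName: elasticsearch-certs
        mountPath: /etc/ssl/certs/elasticsearch
    ```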
    
  4. Recommended. Increase the CPU limit for the fluentd-logs DaemonSet by 50% of the original value for each external output.

    The following table describes default and recommended limits for the fluentd-logs DaemonSet per external destination on clusters of different sizes:

    CPU limits for fluentd-logs per external output

    Cluster size    Default CPU limit    Recommended CPU limit
    Small           1000m                1500m
    Medium          1500m                2250m
    Large           2000m                3000m

    To increase the CPU limit for fluentd-logs, configure the resourcesPerClusterSize StackLight parameter. For details, see Configure StackLight and Resource limits.
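
    The resourcesPerClusterSize override for a small cluster with one external output can be sketched as follows; the fluentdLogs component key name is an assumption here, so verify it against Resource limits for your release:

    ```yaml
    resourcesPerClusterSize:
      fluentdLogs:          # component key name is an assumption
        small:
          limits:
            cpu: "1500m"    # default 1000m + 50% for one external output
    ```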

  5. Verify remote logging to syslog as described in Verify StackLight after configuration.

Note

If Fluentd cannot flush logs and the buffer of the external output starts to fill, then, depending on the resources and configuration of the external Elasticsearch or OpenSearch server, the Data too large, circuit_breaking_exception error may occur even after you resolve the external output issues.

This error indicates that the output destination cannot accept logs data sent in bulk because of their size. To mitigate the issue, select from the following options:

  • Set bulk_message_request_threshold to 10MB or lower. It is unlimited by default. For details, see the Fluentd plugin documentation for Elasticsearch.

  • Adjust output destinations to accept a large amount of data at once. For details, refer to the official documentation of the required external system.
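
Since first-level plugin options are passed through to the output plugin, the first mitigation can be expressed directly in the external output configuration; the host below is a placeholder:

```yaml
logging:
  externalOutputs:
    elasticsearch:
      type: elasticsearch
      host: elasticsearch-host   # placeholder
      port: 9200
      bulk_message_request_threshold: 10MB   # unlimited by default
```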