Enable log forwarding to external destinations¶
Available since 2.23.0 and 2.23.1 for MOSK 23.1
By default, StackLight sends logs to OpenSearch. However, you can configure StackLight to add external Elasticsearch, OpenSearch, and syslog destinations as the fluentd-logs output. In this case, StackLight sends logs both to the external server(s) and to OpenSearch.
Since Cluster releases 17.0.0, 16.0.0, and 14.1.0, you can also enable sending Container Cloud service logs to Splunk using the syslog external output configuration. This feature is available in the Technology Preview scope.
Warning
Sending logs to Splunk requires that the target Splunk instance be reachable from the Container Cloud cluster. If proxy is enabled, the feature is not supported.
Prior to enabling the functionality, complete the following prerequisites:

- Enable StackLight logging
- Deploy an external server outside Container Cloud
- Make sure that Container Cloud proxy is not enabled since it supports only HTTP(S) traffic
- For Splunk, configure the server to accept logs:
  - Create an index and set its type to Event
  - Configure data input:
    - Open the required port
    - Configure the required protocol (TCP/UDP)
    - Configure connection to the created index
To enable log forwarding to external destinations:
Perform steps 1-2 as described in Configure StackLight.

In the stacklight.values section of the opened manifest, configure the logging.externalOutputs parameters using the following table.
disabled (bool)
Optional. Disables the output destination using disabled: true. If not set, defaults to disabled: false.
Example values: true or false
type (string)
Required. Specifies the type of log destination. The following values are accepted: elasticsearch, opensearch, remote_syslog, and opensearch_data_stream (since Container Cloud 2.26.0, Cluster releases 17.1.0 and 16.1.0).
Example value: remote_syslog
level (string)
Removed in 2.26.0 (17.1.0, 16.1.0). Optional. Sets the least important level of log messages to send. For available values, which are defined using the severity_label field, see the logging.level description in Logging.
Example value: warning
plugin_log_level (string)
Optional. Defaults to info. Sets the value of @log_level of the output plugin for a particular backend. For other available values, refer to the logging.level description in Logging.
Example value: notice
tag_exclude (string)
Optional. Overrides tag_include. Sets logs by tags to exclude from the destination output. For example, to exclude all logs with the test tag, set tag_exclude: '/.*test.*/'.

How to obtain tags for logs. Select from the following options:

- In the main OpenSearch output, use the logger field, which equals the tag.
- Use logs of a particular Pod or container by following the below order, with the first match winning:
  1. The value of the app Pod label. For example, for app=opensearch-master, use opensearch-master as the log tag.
  2. The value of the k8s-app Pod label.
  3. The value of the app.kubernetes.io/name Pod label.
  4. If a release_group Pod label exists and the component Pod label starts with app, use the value of the component label as the tag. Otherwise, the tag is the application label joined to the component label with a -.
  5. The name of the container from which the log is taken.

The values for tag_exclude and tag_include are placed into <match> directives of Fluentd and only accept regex types that are supported by the <match> directive of Fluentd. For details, refer to the Fluentd official documentation.

Example value: '{fluentd-logs,systemd}'
tag_include (string)
Optional. Is overridden by tag_exclude. Sets logs by tags to include in the destination output. For example, to include all logs with the auth tag, set tag_include: '/.*auth.*/'.
Example value: '/.*auth.*/'
<pluginConfigOptions> (map)
Configures plugin settings. Has a hierarchical structure. The first-level configuration parameters are dynamic except type, id, and log_level, which are reserved by StackLight. For available options, refer to the documentation of the required plugin. Mirantis does not set any default values for plugin configuration settings except the reserved ones.

The second-level configuration options are predefined and limited to buffer (for any type of log destination) and format (for remote_syslog only). Inside the second-level configuration, the parameters are dynamic. For available configuration options, refer to the documentation of the corresponding Fluentd output plugin.

Example of first-level configuration options:

elasticsearch:
  ...
  tag_exclude: '{fluentd-logs,systemd}'
  host: elasticsearch-host
  port: 9200
  logstash_date_format: '%Y.%m.%d'
  logstash_format: true
  logstash_prefix: logstash
  ...

Example of second-level configuration options:

syslog:
  format:
    "@type": single_value
    message_key: message
buffer (map)
Configures buffering of events using the second-level configuration options. Applies to any type of log destination. Parameters are dynamic except the following mandatory ones that should not be modified:

- type: file, which sets the default buffer type
- path: <pathToBufferFile>, which sets the path to the buffer destination file
- overflow_action: block, which prevents Fluentd from crashing if the output destination is down

For details about other mandatory and optional buffer parameters, see the Fluentd: Output Plugins documentation.

Note: To disable buffer without deleting it, use buffer.disabled: true.

Example values:

buffer:
  # disabled: false
  chunk_limit_size: 16m
  flush_interval: 15s
  flush_mode: interval
  overflow_action: block
output_kind (string)
Since 2.26.0 (17.1.0, 16.1.0). Configures the type of logs to forward. If set to audit, only audit logs are forwarded. If unset, only system logs are forwarded.

Example values:

opensearch:
  output_kind: audit
Example configuration for logging.externalOutputs:

logging:
  externalOutputs:
    elasticsearch:
      # disabled: false
      type: elasticsearch
      level: info  # Removed in 2.26.0 (17.1.0, 16.1.0)
      plugin_log_level: info
      tag_exclude: '{fluentd-logs,systemd}'
      host: elasticsearch-host
      port: 9200
      logstash_date_format: '%Y.%m.%d'
      logstash_format: true
      logstash_prefix: logstash
      ...
      buffer:
        # disabled: false
        chunk_limit_size: 16m
        flush_interval: 15s
        flush_mode: interval
        overflow_action: block
        ...
    opensearch:
      disabled: true
      type: opensearch
      level: info  # Removed in 2.26.0 (17.1.0, 16.1.0)
      plugin_log_level: info
      tag_include: '/.*auth.*/'
      host: opensearch-host
      port: 9200
      logstash_date_format: '%Y.%m.%d'
      logstash_format: true
      logstash_prefix: logstash
      output_kind: audit  # Since 2.26.0 (17.1.0, 16.1.0)
      ...
      buffer:
        chunk_limit_size: 16m
        flush_interval: 15s
        flush_mode: interval
        overflow_action: block
        ...
    syslog:
      type: remote_syslog
      plugin_log_level: info
      level: info  # Removed in 2.26.0 (17.1.0, 16.1.0)
      tag_include: '{iam-proxy,systemd}'
      host: remote-syslog.svc
      port: 514
      hostname: example-hostname
      packetSize: 1024
      protocol: udp
      tls: false
      buffer:
        disabled: true
      format:
        "@type": single_value
        message_key: message
      ...
    splunk_syslog_output:
      type: remote_syslog
      host: remote-splunk-syslog.svc
      port: 514
      protocol: tcp
      tls: true
      ca_file: /etc/ssl/certs/splunk-syslog.pem
      verify_mode: 0
      buffer:
        chunk_limit: 16MB
        total_limit: 128MB
  externalOutputSecretMounts:
    - secretName: syslog-pem
      mountPath: /etc/ssl/certs/splunk-syslog.pem
Note

Mirantis recommends that you tune the packetSize parameter value to allow sending full log lines.

The hostname field in the remote syslog database will be set based on the clusterId specified in the StackLight chart values. For example, if clusterId is ns/cluster/example-uid, the hostname will transform to ns_cluster_example-uid. For details, see clusterId in StackLight configuration parameters.
Optional. Mount authentication secrets for the required external destination to Fluentd using logging.externalOutputSecretMounts. For the parameter options, see Secrets for external log outputs.

Example command to create a secret:

kubectl -n stacklight create secret generic elasticsearch-certs \
  --from-file=./ca.pem \
  --from-file=./client.pem \
  --from-file=./client.key
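The mounted files can then be referenced from the plugin options of the corresponding external output. A minimal sketch, assuming the mount path and output name below are illustration values and that the TLS option names follow the Fluentd Elasticsearch output plugin:

```yaml
logging:
  externalOutputSecretMounts:
    # Mount the secret created above into the fluentd-logs Pods
    - secretName: elasticsearch-certs
      mountPath: /etc/ssl/certs/elasticsearch
  externalOutputs:
    elasticsearch:
      type: elasticsearch
      host: elasticsearch-host
      port: 9200
      # First-level plugin options pointing at the mounted files;
      # verify the option names against the plugin documentation
      ca_file: /etc/ssl/certs/elasticsearch/ca.pem
      client_cert: /etc/ssl/certs/elasticsearch/client.pem
      client_key: /etc/ssl/certs/elasticsearch/client.key
```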
Recommended. Increase the CPU limit for the fluentd-logs DaemonSet by 50% of the original value per each external output.

The following table describes default and recommended CPU limits for the fluentd-logs DaemonSet per external destination on clusters of different sizes:

Cluster size | Default CPU limit | Recommended CPU limit
Small        | 1000m             | 1500m
Medium       | 1500m             | 2250m
Large        | 2000m             | 3000m

To increase the CPU limit for fluentd-logs, configure the resourcesPerClusterSize StackLight parameter. For details, see Configure StackLight and Resource limits.

Verify remote logging to syslog as described in Verify StackLight after configuration.
Note

If Fluentd cannot flush logs and the buffer of the external output starts to fill, then, depending on the resources and configuration of the external Elasticsearch or OpenSearch server, the Data too large, circuit_breaking_exception error may occur even after you resolve the external output issues.

This error indicates that the output destination cannot accept log data sent in bulk because of its size. To mitigate the issue, select from the following options:

- Set bulk_message_request_threshold to 10MB or lower. It is unlimited by default. For details, see the Fluentd plugin documentation for Elasticsearch.
- Adjust output destinations to accept a large amount of data at once. For details, refer to the official documentation of the required external system.
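Because first-level plugin configuration options are passed through to the output plugin, the threshold can be set directly on the affected external output. A sketch, assuming an elasticsearch output named and configured as in the example above:

```yaml
logging:
  externalOutputs:
    elasticsearch:
      type: elasticsearch
      host: elasticsearch-host
      port: 9200
      # Cap the size of bulk requests sent to the destination;
      # unlimited by default in the Fluentd Elasticsearch plugin
      bulk_message_request_threshold: 10MB
```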