Note
This feature is available starting from the MCP 2019.2.4 maintenance update. Before enabling the feature, follow the steps described in Apply maintenance updates.
You can configure Fluentd running on the RabbitMQ nodes to forward the Cloud Auditing Data Federation (CADF) events to specific external security information and event management (SIEM) systems, such as Splunk, ArcSight, or QRadar. The procedure below provides a configuration example for Splunk.
To enable sending CADF events to Splunk:
Open your project Git repository with the Reclass model on the cluster level.
In classes/cluster/<cluster_name>/stacklight, create a custom notification channel, for example, fluentd_splunk.yml, with the following pillar specifying the hosts and ports in the splunk_output and syslog_output parameters:
parameters:
  fluentd:
    agent:
      config:
        label:
          audit_messages:
            filter:
              get_payload_values:
                tag: audit
                type: record_transformer
                enable_ruby: true
                record:
                  - name: Logger
                    value: ${fluentd:dollar}{ record.dig("publisher_id") }
                  - name: Severity
                    value: ${fluentd:dollar}{ {'TRACE'=>7,'DEBUG'=>7,'INFO'=>6,'AUDIT'=>6,'WARNING'=>4,'ERROR'=>3,'CRITICAL'=>2}[record['priority']].to_i }
                  - name: Timestamp
                    value: ${fluentd:dollar}{ DateTime.strptime(record.dig("payload", "eventTime"), "%Y-%m-%dT%H:%M:%S.%N%z").strftime("%Y-%m-%dT%H:%M:%S.%3NZ") }
                  - name: notification_type
                    value: ${fluentd:dollar}{ record.dig("event_type") }
                  - name: severity_label
                    value: ${fluentd:dollar}{ record.dig("priority") }
                  - name: environment_label
                    value: ${_param:cluster_domain}
                  - name: action
                    value: ${fluentd:dollar}{ record.dig("payload", "action") }
                  - name: event_type
                    value: ${fluentd:dollar}{ record.dig("payload", "eventType") }
                  - name: outcome
                    value: ${fluentd:dollar}{ record.dig("payload", "outcome") }
              pack_payload_to_json:
                tag: audit
                require:
                  - get_payload_values
                type: record_transformer
                enable_ruby: true
                remove_keys: '["payload", "timestamp", "publisher_id", "priority"]'
                record:
                  - name: Payload
                    value: ${fluentd:dollar}{ record["payload"].to_json }
            match:
              send_to_default:
                tag: "**"
                type: copy
                store:
                  - type: relabel
                    label: splunk_output
                  - type: relabel
                    label: syslog_output
          splunk_output:
            match:
              splunk_output:
                tag: "**"
                type: splunk_hec
                host: <splunk_host>
                port: <splunk_port>
                token: <splunk_token>
          syslog_output:
            match:
              syslog_output:
                tag: "**"
                type: syslog
                host: <syslog_host>
                port: <syslog_port>
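For reference, the filters above expect each notification record to carry publisher_id, priority, event_type, and timestamp at the top level, with the CADF data nested under payload. The following hypothetical record (illustrative values only, shown as YAML) is the kind of input that get_payload_values and pack_payload_to_json transform before the event is copied to the splunk_output and syslog_output labels:
# Hypothetical incoming CADF notification record; all field values are examples only
publisher_id: identity.ctl01.example.local      # copied into Logger
priority: INFO                                  # mapped to Severity 6 and severity_label
event_type: identity.user.created               # copied into notification_type
timestamp: '2019-01-01 12:00:00.000000'         # dropped by remove_keys
payload:                                        # serialized to JSON as Payload
  eventTime: '2019-01-01T12:00:00.000000+0000'  # reparsed into Timestamp
  action: created.user                          # copied into action
  eventType: activity                           # copied into event_type
  outcome: success                              # copied into outcome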
In openstack/message_queue.yml:
Replace the system.fluentd.notifications class with the following ones:
classes:
- system.fluentd.label.notifications.input_rabbitmq
- system.fluentd.label.notifications.notifications
Add the class for the custom Fluentd notification channel created above. For example (a combined classes snippet is shown after this step):
cluster.<cluster_name>.stacklight.fluentd_splunk
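Assuming the custom channel file is named fluentd_splunk.yml as in the example above, the resulting classes section in openstack/message_queue.yml would look similar to the following:
classes:
- system.fluentd.label.notifications.input_rabbitmq
- system.fluentd.label.notifications.notifications
- cluster.<cluster_name>.stacklight.fluentd_splunk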
Log in to the Salt Master node.
Apply the fluentd state on the msg nodes:
salt -C 'I@rabbitmq:server' state.sls fluentd
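Optionally, verify the configuration on the msg nodes. The commands below are a sketch: pillar.get checks that the custom pillar is rendered, and service.status assumes that the Fluentd agent runs as the td-agent service, which may differ in your environment:
# Refresh and inspect the rendered pillar on the RabbitMQ (msg) nodes
salt -C 'I@rabbitmq:server' saltutil.refresh_pillar
salt -C 'I@rabbitmq:server' pillar.get fluentd:agent:config:label:splunk_output

# Check that the Fluentd agent is running (the td-agent service name is an assumption)
salt -C 'I@rabbitmq:server' service.status td-agent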