The Insecure Wire

a network engineer's perspective

ELK Stack with Palo Alto Firewall – Using Curator to clear indexes

I recently deployed an ELK stack (Elasticsearch, Logstash, Kibana) VM as a logger for a Palo Alto Networks firewall. ELK is open source and lets you build beautiful dashboards in Kibana.
I followed this guide for integrating a PAN firewall with ELK: palo-alto-elasticstack-viz.

Screenshots: Overview Dashboard and Threat Dashboard.

The issue I was having was that the Elasticsearch indexes would keep growing until the VM eventually ran out of disk space. To solve this problem I did the following:

1. Change to daily indexes, based on the date stamp. Edit the Logstash config like so (this edit follows on from the above PAN-OS.conf Logstash configuration file):

output {
  if "PAN-OS_Traffic" in [tags] {
    elasticsearch {
      index    => "panos-traffic-%{+YYYY.MM.dd}"
      hosts    => ["localhost:9200"]
      user     => "elastic"
      password => "yourpassword"
    }
  }
  else if "PAN-OS_Threat" in [tags] {
    elasticsearch {
      index    => "panos-threat-%{+YYYY.MM.dd}"
      hosts    => ["localhost:9200"]
      user     => "elastic"
      password => "yourpassword"
    }
  }
  else if "PAN-OS_Config" in [tags] {
    elasticsearch {
      index    => "panos-config-%{+YYYY.MM.dd}"
      hosts    => ["localhost:9200"]
      user     => "elastic"
      password => "yourpassword"
    }
  }
  else if "PAN-OS_System" in [tags] {
    elasticsearch {
      index    => "panos-system-%{+YYYY.MM.dd}"
      hosts    => ["localhost:9200"]
      user     => "elastic"
      password => "yourpassword"
    }
  }
}

Logstash will now create one index per day for each of the firewall log inputs.
2. Use the Elasticsearch Curator CLI tool in a shell script, run nightly from crontab:
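As a quick sanity check, you can print the index name Logstash will write to today. The `%{+YYYY.MM.dd}` sprintf pattern in the output above maps to the same date format the `date` utility produces (the panos-traffic prefix here just matches the first output block):

```shell
# Build today's index name the same way the %{+YYYY.MM.dd} pattern does
echo "panos-traffic-$(date +%Y.%m.%d)"
```

Comparing this against the index list in Kibana (or `GET _cat/indices/panos-*`) confirms the daily naming is working.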
Create /etc/curator/config.yml

client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth: elastic:yourpassword
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']

Create /etc/curator/delete-after.yml
Set unit_count to the number of days to keep indexes; in my example anything older than 60 days gets deleted.

actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than X days (based on index name), for panos-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: panos-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 60

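The age filter with source: name works off the date stamp embedded in each index name, effectively comparing it against a cutoff of today minus unit_count days. Assuming GNU date, you can print that cutoff yourself to double-check which indices are in scope:

```shell
# Cutoff date for unit_count: 60 -- indices whose name carries an older
# date than this are candidates for deletion
date -d "60 days ago" +%Y.%m.%d
```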
Create /etc/curator/cleanup.sh and paste in:

#!/bin/bash
# Grab the used-space percentage of the root filesystem '/'
# (the Use% column of df, with the '%' stripped off).
disk=$(df -H / | awk 'NR == 2 {print $5}' | sed 's/%//')

# Delete panos- indices older than 60 days.
curator --config /etc/curator/config.yml /etc/curator/delete-after.yml
echo "Root filesystem usage: ${disk}%"
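To see what the df pipeline in the script extracts, you can run the same awk/sed steps over a captured df line (the sample output below is made up for illustration):

```shell
# Sample 'df -H /' output; the pipeline pulls the Use% column from the
# second line (the filesystem row) and strips the trailing '%'
sample='Filesystem  Size  Used Avail Use% Mounted on
/dev/sda1    50G   30G   20G  60% /'
echo "$sample" | awk 'NR == 2 {print $5}' | sed 's/%//'   # prints 60
```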

Now make the script executable and add it to crontab, so it runs 5 minutes past midnight every night:

sudo chmod +x /etc/curator/cleanup.sh
sudo crontab -e
5 0 * * * /etc/curator/cleanup.sh

That's it! You can tweak unit_count if you only want, say, 7 days' worth of logs, depending on your use case. You can also run Curator manually like so:

sudo curator --config /etc/curator/config.yml /etc/curator/delete-after.yml

Adding the --dry-run flag makes Curator log what it would delete without actually touching anything. Both help when debugging your filter logic and checking that Elasticsearch is actually deleting the indices you expect.