Sending OPNSense Syslog, Suricata, and Firewall Logs into CRIBL Stream with GEO IP Tagging and Log Source Splitting Control Before Sending to SIEMs
OPNSense is a great open-source firewall, but it isn't the best supported when it comes to sending its logs into SIEMs. In this case, we will send the data into CRIBL to standardize it and then ship it off to our SIEMs, such as Sentinel and Splunk.
OPNSense
One of the first things to do is configure our remote logging:
Create your rules:
We created one specifically for the firewall, one for Suricata, and one for all the others. The reason for breaking these up is that it gives more control over the incoming sources. For example, we are going to bring in the firewall logs, parse them in the pre-processing pipeline, and then tag them with GEO IP data before sending them off. For now, all the other sources will be sent to the SIEMs directly, until we revisit them to tune out noisy, high-volume events and add parsing.
Another thing to keep in mind: you might need to adjust your firewall rules depending on your setup. Just something to remember; firewall rules always find a way to get you ;)
CRIBL Stream
Next, we want to create our listeners so we can receive the incoming logs.
Create new source
Example:
Our Configuration
Firewall Logs:
All OPNSense Logs:
Suricata Logs:
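If you want to sanity-check a listener before pointing the firewall at it, you can throw a synthetic filterlog-style line at the port from any box that can reach the worker. This is a minimal sketch assuming a UDP syslog source; the host, port, and sample line below are made up, so swap in whatever your own source is actually bound to.

```python
import socket

# Assumed values: adjust to wherever your CRIBL syslog source is listening.
CRIBL_HOST = "cribl-worker.example.local"
CRIBL_PORT = 5140  # hypothetical UDP port on the syslog source

# A made-up OPNSense filterlog-style line, just enough to see an event land.
SAMPLE = (
    "<134>Jan  1 12:00:00 opnsense filterlog[12345]: "
    "85,,,abcdef0123456789,igb0,match,pass,in,4,0x0,,64,0,0,DF,6,tcp,"
    "60,203.0.113.10,198.51.100.20,51515,443,0,S,1234567890,,64240,,mss"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(SAMPLE.encode("utf-8"), (CRIBL_HOST, CRIBL_PORT))
sock.close()
print(f"sent test event to {CRIBL_HOST}:{CRIBL_PORT}")
```

If the event shows up in the source's live capture, the plumbing is good and everything left is parsing.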
Pipelines
We created a pipeline, sticking to our naming convention: pre_logtypehere, so we know it is our pre-processing pipeline. I like to do all my parsing in the pre-processing stage, before the logs hit the routes.
pre_clean-opnsense-firewall
We are going to use a Grok function first to match the syslog format and extract the timestamp.
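For illustration, here is a minimal sketch of what that first step does, assuming the classic BSD-style syslog header (newer OPNSense releases can emit RFC 5424 instead, so adjust to whatever your events actually look like). The field names are assumptions, not necessarily what we used.

```python
import re

# Roughly what the Grok step does: peel the syslog header off a filterlog line
# and keep the CSV payload for the Parser step. The equivalent Grok pattern
# would look something like:
#   %{SYSLOGTIMESTAMP:timestamp} %{HOSTNAME:host} filterlog\[%{POSINT:pid}\]: %{GREEDYDATA:message}
SYSLOG_RE = re.compile(
    r"^(?:<\d+>)?"                             # optional syslog priority
    r"(?P<timestamp>\w{3}\s+\d+\s[\d:]+)\s"    # e.g. "Jan  1 12:00:00"
    r"(?P<host>\S+)\s"                         # firewall hostname
    r"filterlog\[(?P<pid>\d+)\]:\s"            # program name and PID
    r"(?P<message>.*)$"                        # CSV payload for the next step
)

def split_syslog_header(raw: str) -> dict:
    match = SYSLOG_RE.match(raw)
    return match.groupdict() if match else {}
```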
Next, we use a Parser function to extract all the fields from the delimited values.
Then we filter based on TCP or UDP: their headers differ, so the incoming data will have fields in different positions, with more or fewer of them.
Finally, we close out the parsing and remove the fields we don't need, to reduce event size.
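Expressed outside of CRIBL as plain Python, the whole parse-and-trim flow looks roughly like the sketch below. The field order is the commonly documented IPv4 filterlog layout and the drop list is just an example, so treat both as assumptions and verify them against your own events.

```python
import csv
from io import StringIO

# Common fields shared by IPv4 filterlog entries (assumed order; verify it!).
COMMON_FIELDS = [
    "rulenr", "subrulenr", "anchor", "label", "interface", "reason",
    "action", "dir", "ipversion", "tos", "ecn", "ttl", "id", "offset",
    "ipflags", "protonum", "protoname", "length", "src", "dst",
]
TCP_FIELDS = ["srcport", "dstport", "datalen", "tcpflags", "seq", "ack",
              "window", "urg", "tcpopts"]
UDP_FIELDS = ["srcport", "dstport", "datalen"]

# Example drop list: fields we rarely need downstream, removed to shrink events.
DROP_FIELDS = {"tos", "ecn", "id", "offset", "ipflags", "urg", "tcpopts"}

def parse_filterlog(message: str) -> dict:
    values = next(csv.reader(StringIO(message)))
    event = dict(zip(COMMON_FIELDS, values))
    rest = values[len(COMMON_FIELDS):]

    # TCP and UDP carry different trailing columns, so branch on the protocol.
    proto = event.get("protoname", "").lower()
    if proto == "tcp":
        event.update(zip(TCP_FIELDS, rest))
    elif proto == "udp":
        event.update(zip(UDP_FIELDS, rest))

    # Close out: strip the fields we do not need, to keep the event small.
    for field in DROP_FIELDS:
        event.pop(field, None)
    return event
```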
pre_clean-opnsense-suricata
These logs are nested JSON, so we create a series of JSON parsers to get a flat result and remove everything we don't want.
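Conceptually, the chain of JSON parsers boils down to a flatten-and-prune like the sketch below. The keep-list is a hypothetical example; the point is just that nested EVE JSON becomes flat dotted keys before the unwanted ones are dropped.

```python
import json

def flatten(obj: dict, prefix: str = "") -> dict:
    """Recursively flatten nested JSON into dotted keys (lists left as-is)."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

# Hypothetical keep-list: only the Suricata EVE fields we care about survive.
KEEP = {
    "timestamp", "event_type", "in_iface", "proto",
    "src_ip", "src_port", "dest_ip", "dest_port",
    "alert.signature", "alert.signature_id", "alert.category", "alert.severity",
}

def clean_suricata(raw: str) -> dict:
    event = flatten(json.loads(raw))
    return {key: value for key, value in event.items() if key in KEEP}
```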
pipeline_geotagging
We attach the GEO IP database, which we pull down from our GitHub (updated daily), and we flatten the results. The reason for this is so we can remove all the geoip_ fields except the English-language ones and the others we actually want. This cuts log ingestion costs, since the extra text adds up quickly across a large volume of events.
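To make that concrete, here is a rough equivalent in Python using the maxminddb library against a MaxMind-style city database: look up the IP, flatten the nested record into geoip_ fields, then keep only the English names plus a few fields we actually want. The database filename and the keep rules are assumptions, not the exact ones from our pipeline.

```python
import maxminddb  # pip install maxminddb

# Assumed filename for the daily-updated database pulled from the repo.
reader = maxminddb.open_database("GeoLite2-City.mmdb")

def flatten(obj: dict, prefix: str = "geoip") -> dict:
    """Flatten the nested lookup record into geoip_* keys (lists left as-is)."""
    flat = {}
    for key, value in obj.items():
        name = f"{prefix}_{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name))
        else:
            flat[name] = value
    return flat

def geo_tag(ip: str) -> dict:
    record = reader.get(ip) or {}
    flat = flatten(record)
    # Keep only the English names, ISO codes, and coordinates; the other
    # language names and GeoName IDs just inflate every single event.
    return {
        key: value for key, value in flat.items()
        if key.endswith("_names_en")
        or key.endswith("_iso_code")
        or key in ("geoip_location_latitude", "geoip_location_longitude")
    }
```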
You can read up more here: https://docs.cribl.io/stream/geoip-function/
Finally, we create our firewall route with our parsed data and start GEO IP tagging it.
Something else you may wish to do here is tag your rule LABELS. You can pull the rule names from the firewall and attach them to the rule ID in CRIBL, so that when the logs are viewed you can quickly see the name of the rule as well.
You can read up more here: https://docs.cribl.io/stream/lookup-function
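A lookup along those lines is easy to prototype outside CRIBL as well. The sketch below assumes you have exported a CSV mapping the rule label (the UUID in the filterlog label field) to a friendly name; the filename and column names are hypothetical.

```python
import csv

# Hypothetical export: opnsense_rules.csv with columns "label,rule_name".
def load_rule_names(path: str = "opnsense_rules.csv") -> dict:
    with open(path, newline="") as fh:
        return {row["label"]: row["rule_name"] for row in csv.DictReader(fh)}

def tag_rule_name(event: dict, rule_names: dict) -> dict:
    # Same idea as the Lookup function: join on the rule label and attach the
    # friendly name so analysts can see it directly in the SIEM.
    event["rule_name"] = rule_names.get(event.get("label", ""), "unknown")
    return event
```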
And now we have the following sources built out, taking in logs and sending them off to their destinations.
We connect our sources:
Looking in Sentinel, we can see our outcomes:
Suricata:
Firewall:
—
NOTE: I will be creating some CRIBL Packs that you can download and deploy, which come with all the parsers and more. Stay tuned for that!
—
Like what you read? Did it help you?
Send some coffee and love: https://buymeacoffee.com/truvis :) Your support helps pay for licenses, research & development, and other costs that allow me to bring you new guides and content!
If you are new to my content, be sure to follow/connect with me on all my other socials for new ideas and solutions to complicated real-world problems, and to jump-start your career! New content drops daily/weekly along with tips and tricks :)
W: https://truv.is
---