How do you analyze event logs or syslogs from 500 servers, PCs, switches, etc. cost-effectively?
Samudra (Sam) Dutta Gupta
WMI/VB script/PowerShell, cyber-security, Forensics, Windows Migration, Zabbix, Nagios, Elasticsearch, AWS, SPLUNK
The open-source Elasticsearch, together with Logstash and Kibana (the ELK stack, as we call it), can solve your problem. All three can be easily installed on open-source Linux distributions such as CentOS or Ubuntu.
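On a CentOS 7 box, a typical route is Elastic's own yum repository. Here is a minimal sketch; the 7.x repository path is an assumption, so check elastic.co for the current version before using it:

```shell
# Sketch: define Elastic's yum repository in a local file.
# The 7.x baseurl below is an assumption; verify against elastic.co.
cat > elastic.repo <<'EOF'
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
# Then, as root, apply it and install the stack:
#   cp elastic.repo /etc/yum.repos.d/
#   yum install -y elasticsearch logstash kibana
#   systemctl enable --now elasticsearch logstash kibana
```

The same packages exist as .deb files for Ubuntu, so the choice of distribution is mostly a matter of taste.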
Elasticsearch is a search and analytics engine.
Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch.
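As an illustration of such a pipeline, a minimal Logstash config that listens for syslog from your servers and switches and stashes it in Elasticsearch might look like this (the port number, host address, and index name are lab assumptions, not required values):

```shell
# Sketch: write a minimal Logstash pipeline config to a local file.
# Port 5514 and the local Elasticsearch address are assumptions.
cat > syslog.conf <<'EOF'
input {
  # Listen for syslog messages from servers/PCs/switches
  syslog { port => 5514 }
}
filter {
  # Parse the syslog timestamp so Kibana can plot events over time
  date { match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ] }
}
output {
  # Stash each day's events in a dated index
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
EOF
# To use it: copy syslog.conf to /etc/logstash/conf.d/ and restart logstash.
```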
Kibana lets users visualize Elasticsearch data with charts and graphs.
Because this combination is not secured out of the box, I used Nginx as a reverse proxy to require at least a user ID and password before the Kibana dashboard can be accessed.
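The reverse-proxy idea can be sketched as an Nginx server block that sits in front of Kibana's default port 5601 and demands HTTP basic auth. The server name and password-file path below are illustrative assumptions:

```shell
# Sketch: Nginx config that password-protects Kibana (default port 5601).
# server_name and the htpasswd path are assumptions for this lab.
cat > kibana.conf <<'EOF'
server {
    listen 80;
    server_name elk.example.local;

    # Prompt for a user id and password before anything is served
    auth_basic           "Restricted - Kibana";
    auth_basic_user_file /etc/nginx/htpasswd.kibana;

    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_set_header Host $host;
    }
}
EOF
# Create a user (htpasswd ships with httpd-tools on CentOS):
#   htpasswd -c /etc/nginx/htpasswd.kibana admin
# Then copy kibana.conf to /etc/nginx/conf.d/ and reload nginx.
```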
So, my lab setup has a single CentOS 7 VM, where I installed the entire Elastic stack, including the Nginx web server to protect those vital logs. When I tried to access Kibana, I got the following screenshot:
Once I entered the user ID and password, I landed on the following screen:
Now, to use the new data visualization feature, I went to the “Machine Learning” menu and arrived at the screen below:
I then imported data from my own PC to set up an example case for this article (evidently, no company would allow me to use their data, for security reasons) and found the following:
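If you want to try the same thing without real company data, you can feed Elasticsearch a few hand-made events through its bulk API. A hedged sketch, where the index name "pc-events" and the fields are invented purely for this demo:

```shell
# Sketch: build a tiny NDJSON payload of made-up "event log" records.
# The "pc-events" index and the field names are invented for this demo.
cat > sample-events.ndjson <<'EOF'
{"index":{"_index":"pc-events"}}
{"host":"PC-001","level":"ERROR","message":"Disk nearly full"}
{"index":{"_index":"pc-events"}}
{"host":"PC-002","level":"INFO","message":"Service started"}
EOF
# To load it into a running cluster:
#   curl -s -H 'Content-Type: application/x-ndjson' \
#        -XPOST http://localhost:9200/_bulk --data-binary @sample-events.ndjson
```

Once indexed, the data shows up in Kibana like any other source, so you can build your example visualizations against it.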
Now imagine if you had to analyze data from 500 PCs daily to improve the services in your organization; what a massive task that would become. Instead, you can use this new feature to visualize the data, present it to your boss, and earn that much-awaited bonus!