Ingesting AWS GuardDuty findings into Azure Sentinel
NOTICE: There's now an official AWS S3 bucket connector available for Microsoft Sentinel, and it's recommended to use that instead: Connect Microsoft Sentinel to Amazon Web Services to ingest AWS service log data | Microsoft Docs
Many organizations using Amazon Web Services also use Amazon GuardDuty to monitor their cloud and detect anomalous behavior.
Until now, ingesting GuardDuty findings into the Azure Sentinel SIEM hasn't been straightforward. Now it is.
Before jumping into the topic, I want to warmly thank Sreedhar Ande, the author of the connector - awesome job, it works like a charm! - and Ville Päivinen for the work we did together on this subject. Ville was the one who originally got this working with Sreedhar's help!
Now, let's go through step-by-step how to configure the connector:
1) Configure AWS GuardDuty and export findings to an S3 bucket
2) Create an IAM user with access to the S3 bucket and KMS
3) Deploy the Azure Sentinel data connector to ingest AWS S3 files
4) Create Azure Sentinel analytics rules to raise incidents based on findings
1 - Configure AWS GuardDuty and export findings to an S3 bucket
I'm not going to cover this part in detail. AWS does a good job explaining how to get started with GuardDuty: Getting started with GuardDuty - Amazon GuardDuty
The important part here is exporting the findings to an S3 bucket. You need to configure a KMS key and a key policy that allows GuardDuty to use the key.
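As a sketch, the key policy statement that grants GuardDuty use of the key looks roughly like this (based on AWS's documented example - verify against the current GuardDuty docs, and note that conditions scoping it to your account and region are omitted here for brevity):

```json
{
  "Sid": "AllowGuardDutyToUseTheKey",
  "Effect": "Allow",
  "Principal": {
    "Service": "guardduty.amazonaws.com"
  },
  "Action": "kms:GenerateDataKey",
  "Resource": "*"
}
```

This statement goes into the key policy of the KMS key you select when configuring the findings export destination.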
It's also worth mentioning that you can configure how frequently findings are exported to the bucket - I used the minimum, 15 minutes.
2 - Create an IAM user with access to the S3 bucket and KMS
The next step is to allow external access to this S3 bucket. We'll need read access to the S3 bucket, but also kms:Decrypt access for the particular key we're using.
Open Identity and Access Management (IAM)
1) Policies > Create policy
2) Service - find and select KMS
3) Actions - find and select Decrypt
4) Resources - Specific - Add ARN - Specify ARN for key > add your KMS key ARN here
5) Next: Tags > Next: Review > Give a name for your IAM policy.
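The resulting policy document should look roughly like this (the key ARN below is a placeholder - use the ARN of your own KMS key):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "kms:Decrypt",
      "Resource": "arn:aws:kms:eu-west-1:123456789012:key/your-key-id"
    }
  ]
}
```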
Then
6) Users > Add user
7) Define User name
8) Access type: Programmatic access
9) Next: Permissions >> Attach existing policies directly
10) Search for "s3read" and select "AmazonS3ReadOnlyAccess"
11) Search for the policy name you created in step #5 and select it
12) Next: Tags > Next: Review > Create user
13) IMPORTANT: write down the Access key ID and Secret access key - you'll need these soon
3 - Deploy Azure Sentinel Data connector to ingest AWS S3 files
To deploy the data connector, we'll get it from the Azure Sentinel GitHub repo: Azure-Sentinel/DataConnectors/AWS-S3-AzureFunction at master · Azure/Azure-Sentinel · GitHub
Find the "Deploy to Azure" button and click. This will trigger a custom deployment to Azure.
1) Define subscription & resource group - best practice is to create new resource group
2) Define Region, eg. West Europe
3) Define Workspace ID & Workspace Key - you can get these from Log Analytics workspaces > your Sentinel workspace > Agents management >> Workspace ID and Primary key are the ones you need
4) Define AWS Access Key Id & Secret - these are the ones you got in section 2, step #13.
5) Define AWS Region Name - my bucket is in eu-west-1
6) Define S3Bucket URI - eg. s3://your-bucket-name/AWSLogs/ - you can get this directly from the S3 bucket: open the bucket, select the "AWSLogs" folder and choose "Copy S3 URI" from the top-right corner
7) Define S3Folder Name - this should be "GuardDuty"
8) Define Log Analytics Custom Log Name - this will be the table name in Sentinel
9) Choose Review + create > Create
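The deployment inputs from the steps above can be summarized like this (a hedged sketch - the names are descriptive labels for the form fields, not necessarily the exact ARM template parameter names, and all values are placeholders):

```json
{
  "WorkspaceID": "<Log Analytics workspace ID>",
  "WorkspaceKey": "<Log Analytics primary key>",
  "AWSAccessKeyId": "<access key ID from section 2, step 13>",
  "AWSSecretAccessKey": "<secret access key from section 2, step 13>",
  "AWSRegionName": "eu-west-1",
  "S3BucketURI": "s3://your-bucket-name/AWSLogs/",
  "S3FolderName": "GuardDuty",
  "LogAnalyticsCustomLogName": "AWSGuardDutyFindings"
}
```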
The deployment will take a moment; it'll create a Key Vault, a Storage account, Application Insights and a Function App for you.
If everything goes well, you're done with the connector! By default, the Function App polls the S3 bucket every 10 minutes.
You can observe the process by opening the Resource group > Function App > left menu, where near the bottom there's "Log Stream". If there are errors, most likely some configuration in the Function App or on the AWS side isn't right. If you need to troubleshoot, the first place to look is Settings > Configuration (application settings) in the Function App. If you want to adjust the default poll interval, there are two variables you need to change - those are documented on the connector's GitHub page.
Now you should have a new table in Sentinel logs - in my case "AWSGuardDutyFindings_CL" - with all the findings. Try this KQL:
AWSGuardDutyFindings_CL
| extend countryName_ = tostring(parse_json(service_action_awsApiCallAction_remoteIpDetails_country_s).countryName)
| extend cityName_ = tostring(parse_json(service_action_awsApiCallAction_remoteIpDetails_city_s).cityName)
| summarize by TimeGenerated, type_s, description_s, Severity, service_action_awsApiCallAction_api_s, service_action_awsApiCallAction_serviceName_s, service_action_awsApiCallAction_callerType_s, service_action_awsApiCallAction_remoteIpDetails_ipAddressV4_s, countryName_, cityName_, resource_accessKeyDetails_userName_s, resource_accessKeyDetails_userType_s
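To make the parse_json() steps in the query above more concrete, here's a small Python sketch of what's going on. The connector flattens the nested GuardDuty JSON into string columns (suffix "_s"), so deeper objects arrive as JSON strings that the query has to unpack. The field names mirror the KQL; the row values below are made up for illustration:

```python
import json

# Hypothetical fragment of one GuardDuty finding as it lands in the
# custom log table: nested objects are stored as JSON strings.
row = {
    "type_s": "UnauthorizedAccess:IAMUser/MaliciousIPCaller.Custom",
    "Severity": 8.0,
    "service_action_awsApiCallAction_remoteIpDetails_country_s": '{"countryName": "Finland"}',
    "service_action_awsApiCallAction_remoteIpDetails_city_s": '{"cityName": "Helsinki"}',
}

# Equivalent of:
#   | extend countryName_ = tostring(parse_json(...country_s).countryName)
#   | extend cityName_    = tostring(parse_json(...city_s).cityName)
country_name = json.loads(
    row["service_action_awsApiCallAction_remoteIpDetails_country_s"])["countryName"]
city_name = json.loads(
    row["service_action_awsApiCallAction_remoteIpDetails_city_s"])["cityName"]

print(country_name, city_name)  # Finland Helsinki
```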
4 - Create Azure Sentinel analytics rules to raise incidents based on findings
Now that we've got the data in, we probably want to create incidents out of it.
Based on the AWS GuardDuty documentation, finding severity maps to three levels: High (7.0-8.9), Medium (4.0-6.9) and Low (1.0-3.9).
Let's go to Sentinel and Analytics
1) Choose Create > Scheduled query rule
2) Choose Name, eg. "AWS GuardDuty finding"
3) Choose Severity "High"
4) In Rule Logic, as an example use this one - it looks only for findings with severity equal to or higher than 7 (you can fine-tune this, eg. remove the summarize operator)
AWSGuardDutyFindings_CL
| extend countryName_ = tostring(parse_json(service_action_awsApiCallAction_remoteIpDetails_country_s).countryName)
| extend cityName_ = tostring(parse_json(service_action_awsApiCallAction_remoteIpDetails_city_s).cityName)
| where Severity >= 7
| summarize by TimeGenerated, type_s, description_s, Severity, service_action_awsApiCallAction_api_s, service_action_awsApiCallAction_serviceName_s, service_action_awsApiCallAction_callerType_s, service_action_awsApiCallAction_remoteIpDetails_ipAddressV4_s, countryName_, cityName_, resource_accessKeyDetails_userName_s, resource_accessKeyDetails_userType_s
5) Define Entity mapping as you see fit - here's my example:
6) Define Alert details - these will customize the Alert name & description based on log data
7) Define Query scheduling as you see fit - I used a 30-minute run frequency with a 30-minute lookup period.
8) In Event grouping I used "Trigger an alert for each event"
9) Next: Incident settings
10) I enabled Alert grouping, with a 5-hour limit, grouping alerts if all entities match. You can fine-tune these.
11) Next > Next > Save.
If you want, duplicate the rule, set it to Medium severity, and modify the query to look for "Severity < 7 and Severity >= 4".
DONE!
For testing purposes, in the AWS GuardDuty settings you can "Generate sample findings". Those will flow through S3 and the Azure Function App into the Log Analytics workspace, and through the analytics rule become an Incident!
Your incidents will look like this (notice multiple alerts correlated into one incident):
And the incident details will look like this:
ENJOY!
The same data connector can be used to ingest other data from S3 too, eg. if you're collecting organization-wide CloudTrail logs into a single bucket.