AWS Security & Encryption: KMS, SSM Parameter Store, Shield and WAF

In this article, we will walk through the different security measures you can implement to secure your cloud services on AWS.


Encryption Overview


Encryption in Transit

Encryption in transit, often referred to as TLS (Transport Layer Security) or SSL (Secure Sockets Layer), ensures that data is encrypted before being transmitted and decrypted only upon arrival at the destination. TLS is the modern successor to SSL and is widely used to secure communication between clients and servers over a network.

A common example is HTTPS, which indicates that a website uses TLS certificates to encrypt the connection. This prevents unauthorized parties from intercepting or modifying data during transmission.

Encryption in transit is crucial, especially when data travels across public networks and multiple servers. Without encryption, attackers could perform a man-in-the-middle attack, intercepting and analyzing the transmitted data. By using HTTPS, TLS, or SSL, we ensure that only the intended recipient can decrypt and read the data.

For example, when logging into a server, a client sends a username and password. TLS encryption automatically secures this information before it leaves the client’s device. Once transmitted, no intermediary server can decrypt the data—only the target server, using the appropriate TLS decryption mechanism, can process the login credentials securely.

Server-Side Encryption at Rest

Server-side encryption protects data once it has been received by the server. After receiving and processing data, the server encrypts it before storing it securely. When the data is needed again, the server decrypts it before sending it back to the client.

Encryption at rest relies on encryption keys, typically referred to as data keys. The management of these keys is critical, as the server needs controlled access to them for encryption and decryption operations.

For example, in Amazon S3, a client uploads an object over HTTPS (ensuring encryption in transit). Once received, the service encrypts the object using a data key, securing it at rest. When the object is requested, the server retrieves the encrypted file, decrypts it using the data key, and transmits it securely over HTTPS. Since encryption and decryption occur entirely on the server, this is classified as server-side encryption.
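
To make the idea of data keys concrete, here is a minimal boto3 sketch of envelope encryption done by hand; S3 performs roughly the equivalent steps internally. The key alias alias/my-app-key is a placeholder for a key you have created.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Request a data key under a KMS key ("alias/my-app-key" is a placeholder alias).
    # KMS returns the plaintext key (use it locally, then discard it)
    # and an encrypted copy of the same key (store it alongside the data).
    data_key = kms.generate_data_key(KeyId="alias/my-app-key", KeySpec="AES_256")
    plaintext_key = data_key["Plaintext"]       # used to encrypt the object locally
    encrypted_key = data_key["CiphertextBlob"]  # stored next to the encrypted object

    # Later, recover the plaintext key by asking KMS to decrypt the stored copy.
    restored_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]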

Client-Side Encryption

Client-side encryption ensures that data is encrypted before it leaves the client’s device, and the server never has access to the decryption key. This approach is useful in scenarios where the client does not fully trust the storage provider.

In this model, encryption and decryption are handled exclusively on the client side. The encrypted data is then stored on a remote service such as an FTP server, Amazon S3, or an EBS volume. Since the encryption key is kept on the client’s device, the server cannot decrypt the stored data.

When retrieving the data, the client receives the encrypted object and must use its own encryption key to decrypt it. Without access to the key, neither the storage provider nor any intermediary entity can read the data.



AWS KMS Overview

What is AWS KMS?

AWS Key Management Service (KMS) is a managed service that allows AWS users to create, store, and manage encryption keys. Whenever encryption is mentioned in AWS services, it is most likely handled by KMS. The key benefit of KMS is that AWS manages encryption keys on your behalf, reducing operational overhead.

KMS is fully integrated with IAM (Identity and Access Management), making it easy to control access to encrypted data. Additionally, every API call involving KMS keys is logged in AWS CloudTrail, allowing for complete auditing and monitoring.

KMS seamlessly integrates with many AWS services, including:

  • EBS (Elastic Block Store) – Encrypt data at rest by enabling KMS integration.
  • S3 (Simple Storage Service) – Protect stored objects using KMS encryption.
  • RDS (Relational Database Service) – Secure databases with encryption.
  • SSM (AWS Systems Manager) – Store and manage secure parameters.

Users can also interact with KMS directly via API calls, AWS CLI, or AWS SDK. This allows encrypting sensitive information—such as secrets, credentials, or sensitive files—before storing them in environment variables or application code, ensuring security best practices.
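
As a rough illustration, the sketch below encrypts and decrypts a small secret (up to 4 KB) with a symmetric KMS key through the SDK; the alias alias/my-app-key and the secret value are placeholders.

    import boto3

    kms = boto3.client("kms", region_name="us-east-1")

    # Encrypt a small secret directly with a symmetric KMS key (4 KB limit per call).
    ciphertext = kms.encrypt(
        KeyId="alias/my-app-key",           # placeholder alias
        Plaintext=b"my-database-password",
    )["CiphertextBlob"]

    # The ciphertext is safe to store in environment variables or configuration files.
    # For symmetric keys, KMS can locate the key from the ciphertext itself on decrypt.
    plaintext = kms.decrypt(CiphertextBlob=ciphertext)["Plaintext"]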


Types of KMS Keys

AWS KMS provides two main types of encryption keys:

1. Symmetric KMS Keys

  • Uses a single encryption key for both encryption and decryption.
  • The key is never exposed to users; encryption and decryption are performed via KMS API calls.
  • Used by most AWS services that integrate with KMS.

2. Asymmetric KMS Keys

  • Uses a public-private key pair for encryption and decryption.
  • The public key is available for encryption, while the private key remains protected within KMS and is only accessible via API calls.
  • Commonly used for encryption by external users who cannot access AWS KMS directly.

For example, an external party can encrypt data using the public key and send it to an AWS user, who can then decrypt it using their private key within KMS.
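
As a hedged sketch of that flow, the external party below downloads the public key and encrypts locally with the cryptography library, assuming an RSA KMS key that uses the RSAES_OAEP_SHA_256 algorithm; the key alias is a placeholder.

    import boto3
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    kms = boto3.client("kms", region_name="us-east-1")

    # The AWS user exports the public half of the asymmetric key and shares it.
    public_key_der = kms.get_public_key(KeyId="alias/my-asymmetric-key")["PublicKey"]

    # The external party encrypts locally, with no AWS access at all.
    public_key = serialization.load_der_public_key(public_key_der)
    ciphertext = public_key.encrypt(
        b"sensitive payload",
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )

    # Only the AWS user can decrypt, because the private key never leaves KMS.
    plaintext = kms.decrypt(
        KeyId="alias/my-asymmetric-key",
        CiphertextBlob=ciphertext,
        EncryptionAlgorithm="RSAES_OAEP_SHA_256",
    )["Plaintext"]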


Types of KMS Key Ownership

AWS provides different types of KMS keys, depending on management preferences and control requirements:

1. AWS-Owned Keys

  1. Free to use.
  2. Used in services like SSE-S3 and DynamoDB, where AWS handles encryption behind the scenes.
  3. Not directly accessible by users.

2. AWS-Managed Keys

  1. Also free and prefixed with aws/service-name (e.g., aws/rds, aws/ebs).
  2. Automatically created and managed by AWS.
  3. Can only be used within the service they are assigned to.

3. Customer-Managed Keys (CMKs)

  1. Fully controlled by the user.
  2. Costs $1 per key per month and additional API request fees ($0.03 per 10,000 requests).
  3. Supports optional automatic key rotation with a customizable rotation period. By comparison, AWS-managed keys rotate automatically every year, and keys with imported key material must be rotated manually by pointing an alias at a new key.


KMS Key Policies

KMS keys require key policies to control access, similar to S3 bucket policies. Without a policy, no one can access the key. There are two types of key policies:

1. Default Key Policy

  1. Assigned automatically if you provide no custom policy; it grants the AWS account (root user) full access to the key.
  2. In practice, this delegates access control to IAM: if an IAM policy allows access to the key, the user can utilize it.

2. Custom Key Policy

  1. Allows precise control over who can access or administer the key.
  2. Essential for cross-account access, enabling another AWS account to use the key securely.

Cross-Account Key Sharing Example

To share an encrypted snapshot between AWS accounts:

  1. Create a snapshot encrypted with a customer-managed key.
  2. Attach a custom key policy to authorize access from another AWS account.
  3. Share the encrypted snapshot with the target account.
  4. In the target account, copy the snapshot and re-encrypt it with a new customer-managed key.
  5. Create an EBS volume from the snapshot in the target account.
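
Step 2 of this flow might look roughly like the following key policy update, sketched with boto3; the account IDs and key ID are placeholders, and the exact set of KMS actions you grant should match your use case.

    import boto3, json

    kms = boto3.client("kms", region_name="us-east-1")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # keep full control for the key-owning account (placeholder ID)
                "Sid": "AllowRootFullAccess",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {   # let the target account (placeholder ID) use the key for the snapshot copy
                "Sid": "AllowUseFromTargetAccount",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
                "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
                "Resource": "*",
            },
        ],
    }

    # Key policies are always written under the policy name "default".
    kms.put_key_policy(
        KeyId="1234abcd-12ab-34cd-56ef-1234567890ab",  # placeholder key ID
        PolicyName="default",
        Policy=json.dumps(policy),
    )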


KMS Key Scope & Regional Constraints

KMS keys are region-specific, meaning a key created in one AWS region cannot be used in another. To move encrypted data across regions, you must:

  1. Create a snapshot of an encrypted EBS volume.
  2. The snapshot remains encrypted with the original KMS key.
  3. Copy the snapshot to another region, re-encrypting it with a new KMS key.
  4. Restore the snapshot into a new EBS volume in the target region.

AWS automates this re-encryption step, ensuring secure cross-region data transfers.
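
In practice, the copy step can be scripted; here is a minimal sketch using boto3, with placeholder snapshot and key identifiers. The call is made in the destination region, and the KMS key must already exist there.

    import boto3

    ec2_west = boto3.client("ec2", region_name="us-west-2")

    # Copy an encrypted snapshot from us-east-1, re-encrypting it with a key
    # that lives in us-west-2 (snapshot ID and key ARN are placeholders).
    copy = ec2_west.copy_snapshot(
        SourceRegion="us-east-1",
        SourceSnapshotId="snap-0123456789abcdef0",
        Encrypted=True,
        KmsKeyId="arn:aws:kms:us-west-2:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        Description="Cross-region copy re-encrypted with a us-west-2 key",
    )
    print(copy["SnapshotId"])  # restore this snapshot as an EBS volume in us-west-2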


KMS Multi-Region Keys

AWS KMS (Key Management Service) supports Multi-Region keys, allowing encryption keys to be replicated across different AWS Regions. This means that a primary key is created in one Region, such as us-east-1, and then replicated to other Regions like us-west-2, eu-west-1, or ap-southeast-2. The replicated keys share the same key material and key ID, making them functionally identical across Regions.
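
Creating and replicating a Multi-Region key can be sketched as follows with boto3 (the description is a placeholder); note that the ReplicateKey call is made in the primary key's Region.

    import boto3

    kms_east = boto3.client("kms", region_name="us-east-1")

    # Create a symmetric multi-Region primary key in us-east-1.
    primary = kms_east.create_key(MultiRegion=True,
                                  Description="Primary multi-Region key")
    primary_key_id = primary["KeyMetadata"]["KeyId"]  # key ID starts with "mrk-"

    # Replicate it to ap-southeast-2; the replica shares the key material and key ID.
    replica = kms_east.replicate_key(KeyId=primary_key_id,
                                     ReplicaRegion="ap-southeast-2")
    print(replica["ReplicaKeyMetadata"]["Arn"])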

Benefits of Multi-Region Keys

  • Seamless Cross-Region Encryption & Decryption: Data encrypted in one Region can be decrypted in another without requiring re-encryption or cross-Region API calls.
  • Consistent Key Rotation: If automatic rotation is enabled on the primary key, the updated key material is also propagated to its replicas.
  • Regional Independence: Each Multi-Region key is managed independently, including separate key policies and permissions.

Key Considerations

While Multi-Region keys are useful, they are not global keys. Each key instance is tied to a specific Region. AWS generally recommends using Region-specific keys unless there is a specific use case requiring Multi-Region keys.

Use Cases for Multi-Region Keys

  1. Global Client-Side Encryption: Encrypting data client-side in one Region and decrypting it in another.
  2. Encryption for Global AWS Services: Securing data in Global DynamoDB Tables or Global Aurora Databases.


Multi-Region Keys with DynamoDB Global Tables

When using DynamoDB Global Tables with client-side encryption, specific attributes—such as Social Security numbers—can be encrypted before storing them. This ensures that even database administrators without KMS access cannot decrypt sensitive data.

Example Workflow:

  1. A client application in us-east-1 encrypts a sensitive attribute using the primary Multi-Region key.
  2. The encrypted data is stored in a DynamoDB Global Table and automatically replicated to ap-southeast-2.
  3. A client in ap-southeast-2 retrieves the data and decrypts it locally using the replica Multi-Region key via a KMS API call.

This approach improves security and ensures decryption can happen without cross-Region API calls, reducing latency.
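
The sketch below shows the idea with plain KMS calls rather than the full DynamoDB Encryption Client; the table name, key ARNs, and attribute names are placeholders.

    import boto3

    # us-east-1: encrypt the attribute client-side with the primary key, then store it.
    kms_east = boto3.client("kms", region_name="us-east-1")
    table_east = boto3.resource("dynamodb", region_name="us-east-1").Table("Users")

    ciphertext = kms_east.encrypt(
        KeyId="arn:aws:kms:us-east-1:111111111111:key/mrk-1234567890abcdef0",
        Plaintext=b"123-45-6789",
    )["CiphertextBlob"]
    table_east.put_item(Item={"user_id": "u-1", "ssn_encrypted": ciphertext})

    # ap-southeast-2: read the replicated item and decrypt locally with the replica key.
    kms_syd = boto3.client("kms", region_name="ap-southeast-2")
    table_syd = boto3.resource("dynamodb", region_name="ap-southeast-2").Table("Users")

    item = table_syd.get_item(Key={"user_id": "u-1"})["Item"]
    ssn = kms_syd.decrypt(
        KeyId="arn:aws:kms:ap-southeast-2:111111111111:key/mrk-1234567890abcdef0",
        CiphertextBlob=item["ssn_encrypted"].value,  # Binary wrapper -> raw bytes
    )["Plaintext"]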


Multi-Region Keys with Global Aurora Databases

A similar encryption strategy can be applied to Amazon Aurora Global Databases, where the AWS Encryption SDK is used to encrypt sensitive columns.

Example Workflow:

  1. A client in us-east-1 encrypts an SSN column before inserting it into an Aurora database table.
  2. The encrypted data is replicated to ap-southeast-2 as part of the Global Aurora Database.
  3. A client in ap-southeast-2 retrieves the encrypted data and decrypts it locally using the replica Multi-Region key.

By using Multi-Region keys, the database administrators in ap-southeast-2 cannot access the sensitive data unless they have permission to use the KMS key. This approach improves security while reducing latency for applications that operate across multiple AWS Regions.



S3 Replication with Encryption

Let’s explore S3 Replication and how it interacts with encrypted objects.

When S3 Replication is enabled between two buckets, unencrypted objects and those encrypted using SSE-S3 are replicated by default. Additionally, objects encrypted with SSE-C (using a customer-provided key) can also be replicated.

However, objects encrypted with SSE-KMS are not replicated by default. To enable replication for these objects, you must explicitly configure it. This involves specifying the KMS key to encrypt the replicated objects in the destination bucket, updating the KMS key policy for the target key, and creating an IAM role that grants the S3 Replication service permission to:

  1. Decrypt the source bucket data
  2. Re-encrypt it in the target bucket using the specified KMS key

Since this process involves frequent encryption and decryption, you may encounter KMS throttling errors. If this happens, you may need to request a service quota increase.
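
A replication rule that opts in to SSE-KMS objects might look roughly like this (bucket names, role ARN, and key ARN are placeholders; the IAM role and the destination key policy still need to be set up separately).

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="source-bucket",  # placeholder
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/s3-replication-role",
            "Rules": [
                {
                    "ID": "ReplicateKmsEncryptedObjects",
                    "Priority": 1,
                    "Filter": {},
                    "Status": "Enabled",
                    # Opt in to replicating objects encrypted with SSE-KMS.
                    "SourceSelectionCriteria": {
                        "SseKmsEncryptedObjects": {"Status": "Enabled"}
                    },
                    "Destination": {
                        "Bucket": "arn:aws:s3:::destination-bucket",
                        # Key used to re-encrypt objects in the destination bucket.
                        "EncryptionConfiguration": {
                            "ReplicaKmsKeyID": "arn:aws:kms:us-west-2:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab"
                        },
                    },
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                }
            ],
        },
    )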

A common question is whether multi-region keys should be used for S3 Replication. According to AWS documentation, multi-region keys can be used, but they are still treated as independent keys by the S3 service. This means that even with a multi-region key, objects are first decrypted and then re-encrypted, rather than being seamlessly replicated without decryption.



Encrypted AMI Sharing Process

To share an Amazon Machine Image (AMI) with another AWS account while keeping it encrypted with a KMS key, follow these steps:

  1. Grant AMI Launch Permissions – The AMI resides in account A and is encrypted with a KMS key. To allow account B to launch an EC2 instance using this AMI, modify the AMI properties and add launch permissions for account B's AWS account ID.
  2. Share the KMS Key – Since the AMI is encrypted, account B must have access to the KMS key used for encryption. Update the KMS key policy to allow account B to use the key.
  3. Set Up IAM Permissions in Account B – In account B, create an IAM role or IAM user with the required permissions to use both the AMI and the KMS key. Ensure the role has access to the following KMS API actions: DescribeKey, ReEncrypt, CreateGrant, and Decrypt.
  4. Launch the EC2 Instance – Once permissions are configured, account B can launch an EC2 instance using the shared AMI. If needed, account B can re-encrypt the volumes with its own KMS key for additional security.

By following these steps, account B can successfully use the encrypted AMI from account A to launch EC2 instances.
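
Steps 1 and 2 could be scripted from account A along these lines (AMI ID, key ARN, and account IDs are placeholders); a grant is shown here, but updating the key policy works as well.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    kms = boto3.client("kms", region_name="us-east-1")

    # Step 1: give account B (placeholder ID) permission to launch the AMI.
    ec2.modify_image_attribute(
        ImageId="ami-0123456789abcdef0",
        LaunchPermission={"Add": [{"UserId": "222222222222"}]},
    )

    # Step 2: let account B use the KMS key that encrypts the AMI's snapshots.
    kms.create_grant(
        KeyId="arn:aws:kms:us-east-1:111111111111:key/1234abcd-12ab-34cd-56ef-1234567890ab",
        GranteePrincipal="arn:aws:iam::222222222222:root",
        Operations=["DescribeKey", "Decrypt", "ReEncryptFrom", "ReEncryptTo", "CreateGrant"],
    )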



SSM Parameter Store Overview

Let’s dive into the SSM Parameter Store. It serves as a secure storage solution for your configurations and secrets. You can optionally encrypt these configurations, turning them into secrets, by utilizing the KMS service. SSM Parameter Store is serverless, scalable, durable, and its SDK is user-friendly. Additionally, any changes you make to your parameters are tracked with version history.

Security is managed through IAM, and you’ll receive notifications via Amazon EventBridge under certain circumstances. It integrates seamlessly with CloudFormation, meaning you can use parameters from the Parameter Store as input for your CloudFormation stacks.

Example Use Case

Imagine you have an application and you want to store its configurations in the SSM Parameter Store. You can store plain-text configurations directly, with IAM permissions (for example, your EC2 instance role) controlling access, or you can store encrypted configurations, which are encrypted and decrypted using KMS. In the latter case, your applications must have access to the required KMS keys for encryption and decryption.

Parameter Hierarchy

The Parameter Store allows you to organize parameters with a hierarchical structure. For example, you could define a path like /my-department/my-app/dev/ and store parameters like DB-URL and DB-password under that. Similarly, you can define parameters for different environments such as /prod/DB-URL and /prod/DB-password. This hierarchy provides flexibility in organizing parameters and simplifies IAM policies, enabling applications to access parameters at different levels like an entire department, app, or specific environment.

You can also reference secrets from Secrets Manager within the Parameter Store using a reference like: /aws/reference/secretsmanager/secret_ID_in_Secrets_Manager.

AWS also provides public parameters, which are useful for accessing specific values like the latest AMI for Amazon Linux 2 in your region through an API call in the Parameter Store.
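
Reading parameters from an application is just a couple of SDK calls; in this sketch the parameter paths follow the hierarchy above, and the calling role is assumed to have ssm:GetParameter permissions plus kms:Decrypt on the relevant key.

    import boto3

    ssm = boto3.client("ssm", region_name="us-east-1")

    # Single encrypted parameter (SecureString), decrypted transparently via KMS.
    db_password = ssm.get_parameter(
        Name="/my-department/my-app/dev/DB-password",
        WithDecryption=True,
    )["Parameter"]["Value"]

    # Whole branch of the hierarchy in one call.
    dev_params = ssm.get_parameters_by_path(
        Path="/my-department/my-app/dev/",
        Recursive=True,
        WithDecryption=True,
    )["Parameters"]

    # Public parameter: latest Amazon Linux 2 AMI ID in this region.
    latest_ami = ssm.get_parameter(
        Name="/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2"
    )["Parameter"]["Value"]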

Example with Lambda Functions

For instance, you could have a Dev Lambda function with an IAM role allowing access to the Dev DB-URL and DB-password. Similarly, a Prod Lambda function, with a separate IAM policy, could access the Prod DB-URL and DB-password from another path, perhaps with different environment variables.

Parameter Tiers

There are two types of parameter tiers in Systems Manager: Standard and Advanced. The main differences are:

  • Size: Standard parameters are up to 4 KB, while Advanced parameters can be up to 8 KB.
  • Parameter Policies: Standard parameters do not support parameter policies, whereas Advanced parameters do (for example, TTL and expiration notifications).
  • Pricing: Standard parameters are free, while Advanced parameters cost $0.05 per parameter per month.

Parameter Policies

Available only for advanced parameters, parameter policies allow you to assign a time-to-live (TTL) to a parameter, meaning it will expire at a specified date and time. This feature ensures that sensitive data, such as passwords, is updated or deleted when necessary. You can apply multiple policies to a parameter at once. For example, an expiration policy could specify that a parameter must be deleted by a particular timestamp.


Through the EventBridge integration, you can receive notifications about a parameter's status. For instance, 15 days before a parameter expires, EventBridge can send a notification, giving you ample time to update the parameter and ensure it doesn't get deleted due to the TTL.


Sometimes, you might want to ensure that parameters are updated periodically. In such cases, you can set up a "no change" notification in EventBridge. This way, if a parameter hasn't been updated for 20 days, you'll receive a notification. The Parameter Store offers a lot of flexibility, allowing you to get creative with how you manage your parameters.
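
As a sketch, an advanced parameter with an expiration policy, an expiration notification, and a no-change notification could be created like this; the parameter name, value, and timestamp are placeholders, and the policy JSON follows the format documented for SSM parameter policies.

    import boto3, json

    ssm = boto3.client("ssm", region_name="us-east-1")

    policies = [
        # Delete the parameter at the given timestamp (placeholder date).
        {"Type": "Expiration", "Version": "1.0",
         "Attributes": {"Timestamp": "2026-12-31T00:00:00.000Z"}},
        # Emit an EventBridge notification 15 days before expiration.
        {"Type": "ExpirationNotification", "Version": "1.0",
         "Attributes": {"Before": "15", "Unit": "Days"}},
        # Emit a notification if the parameter has not changed for 20 days.
        {"Type": "NoChangeNotification", "Version": "1.0",
         "Attributes": {"After": "20", "Unit": "Days"}},
    ]

    ssm.put_parameter(
        Name="/my-department/my-app/prod/DB-password",  # placeholder
        Value="placeholder-password",
        Type="SecureString",
        Tier="Advanced",               # parameter policies require the Advanced tier
        Policies=json.dumps(policies),
        Overwrite=True,
    )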





AWS Secrets Manager – Overview

AWS Secrets Manager is a simple yet powerful service designed for securely storing and managing secrets. Unlike the SSM Parameter Store, Secrets Manager allows you to enforce automatic rotation of secrets at a specified interval, ensuring better security and management.

A key advantage of Secrets Manager is its ability to automate the generation and rotation of secrets. This is achieved by defining a Lambda function that generates new secrets automatically. Additionally, Secrets Manager integrates seamlessly with various AWS services, including Amazon RDS for MySQL, PostgreSQL, SQL Server, and Aurora, as well as other databases. With this integration, database credentials—such as usernames and passwords—are securely stored, rotated, and managed within Secrets Manager.

Secrets stored in Secrets Manager can also be encrypted using AWS Key Management Service (KMS), adding an extra layer of security. Whenever you come across secret management for RDS, Aurora, or other AWS services, think of Secrets Manager as the go-to solution.
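
A minimal sketch of both sides of this (reading a secret at runtime and enabling rotation with a rotation Lambda function) is shown below; the secret name and Lambda ARN are placeholders.

    import boto3

    secrets = boto3.client("secretsmanager", region_name="us-east-1")

    # Retrieve a secret at runtime (secret name is a placeholder).
    secret_value = secrets.get_secret_value(SecretId="prod/my-app/db")["SecretString"]

    # Enable automatic rotation every 30 days using a rotation Lambda (placeholder ARN).
    secrets.rotate_secret(
        SecretId="prod/my-app/db",
        RotationLambdaARN="arn:aws:lambda:us-east-1:111111111111:function:my-rotation-fn",
        RotationRules={"AutomaticallyAfterDays": 30},
    )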

Multi-Region Secrets

Another important feature of Secrets Manager is multi-region secrets replication. This allows you to replicate secrets across multiple AWS regions while keeping them synchronized with the primary secret. For example, if you create a secret in one region, it can be automatically replicated to a secondary region.

Why is this useful?

  1. High Availability & Disaster Recovery – In case of a failure in one region (e.g., US East 1), a replica secret can be promoted as a standalone secret, ensuring continued access.
  2. Multi-Region Applications – If you're running applications across multiple regions, having replicated secrets ensures smooth and secure authentication.
  3. RDS Cross-Region Replication – If an RDS database is replicated across regions, the corresponding secret can also be shared across those regions for consistent access.

By leveraging AWS Secrets Manager, organizations can enhance security, automate secret rotation, and ensure seamless multi-region management.




AWS Certificate Manager (ACM) – Overview

AWS Certificate Manager (ACM) is a service that simplifies the provisioning, management, and deployment of TLS certificates on AWS. TLS (often referred to as SSL) is essential for encrypting data in transit, securing websites, and enabling HTTPS connections.

For example, when you visit a website using HTTPS, the "S" indicates that a TLS certificate is in place to secure the communication. In AWS, ACM integrates seamlessly with services like Application Load Balancers (ALBs) to provide secure HTTPS endpoints. By associating an ALB with ACM, you can automatically provision and manage TLS certificates, ensuring encrypted connections for your applications and APIs.

Key Features of ACM

  1. Public & Private TLS Certificates – ACM supports both public and private TLS certificates. Public TLS certificates are free of charge, and certificates are automatically renewed, reducing administrative overhead.
  2. Seamless AWS Integrations – ACM works with Elastic Load Balancers (ALB, NLB, CLB), CloudFront distributions, and API Gateway. However, ACM cannot be used to generate public certificates for EC2 instances: public certificates in ACM are non-extractable, meaning they cannot be exported for direct use on EC2.

Requesting a Public Certificate

To obtain a public TLS certificate through ACM, follow these steps:

  1. Define Domain Names – Certificates can be issued for fully qualified domain names (FQDNs) such as corp.example.com. Wildcard certificates (*.example.com) are also supported, covering multiple subdomains.
  2. Choose a Validation Method – DNS validation (preferred for automation) requires adding a CNAME record to your DNS, which ACM can handle automatically if you use Route 53. Email validation sends verification emails to the domain registrar's contact addresses to confirm ownership.
  3. Issuance & Automatic Renewal – After validation, the certificate is issued and enrolled for automatic renewal 60 days before expiration.
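
The request itself is a single API call; below is a minimal sketch using a placeholder domain, with DNS validation so the expected CNAME record can be read back and created in Route 53.

    import boto3

    acm = boto3.client("acm", region_name="us-east-1")

    # Request a public certificate for a placeholder domain and its subdomains.
    cert_arn = acm.request_certificate(
        DomainName="corp.example.com",
        SubjectAlternativeNames=["*.corp.example.com"],
        ValidationMethod="DNS",
    )["CertificateArn"]

    # The CNAME record ACM expects appears here shortly after the request
    # (Route 53 users can have ACM create it automatically from the console).
    cert = acm.describe_certificate(CertificateArn=cert_arn)["Certificate"]
    validation_record = cert["DomainValidationOptions"][0].get("ResourceRecord")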

Handling Imported Certificates

If you generate a TLS certificate outside ACM, you can import it into ACM. However, imported certificates do not support automatic renewal. Before expiration, you must manually import a new certificate.

To monitor expiration, ACM sends daily expiration notifications starting 45 days before expiry via Amazon EventBridge. You can configure EventBridge to trigger actions (e.g., sending alerts via SNS or renewing certificates using Lambda). Additionally, AWS Config offers a managed rule (ACM-certificate-expiration-check) to track expiring certificates and flag non-compliant ones.
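
One way to wire this up is an EventBridge rule that matches ACM's expiration events and forwards them to an SNS topic; the rule name and topic ARN below are placeholders, and the detail type shown is the one documented for ACM expiration events.

    import boto3, json

    events = boto3.client("events", region_name="us-east-1")

    events.put_rule(
        Name="acm-expiring-certificates",
        EventPattern=json.dumps({
            "source": ["aws.acm"],
            "detail-type": ["ACM Certificate Approaching Expiration"],
        }),
    )
    events.put_targets(
        Rule="acm-expiring-certificates",
        Targets=[{"Id": "notify-ops",
                  "Arn": "arn:aws:sns:us-east-1:111111111111:cert-alerts"}],
    )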

ACM Integration with AWS Services

  1. Application Load Balancer (ALB) Integration – ALBs support ACM certificates for securing traffic. You can configure automatic HTTP to HTTPS redirects, ensuring all connections use a secure protocol.
  2. API Gateway Integration – ACM works with Edge-Optimized and Regional API Gateway endpoints. Edge-Optimized endpoints route requests through CloudFront to improve latency, so the TLS certificate must be in us-east-1, where CloudFront is managed. Regional endpoints are accessed by clients directly within the same AWS region. Private API endpoints are used for VPC-restricted APIs and require additional resource policies for access control.

Conclusion

AWS Certificate Manager (ACM) simplifies TLS certificate management, automating issuance, renewal, and deployment across AWS services. Whether securing an ALB, API Gateway, or CloudFront distribution, ACM helps maintain encrypted connections effortlessly while reducing administrative overhead.




Your API Gateway operates within a single AWS region, but distribution is handled through CloudFront. Since CloudFront is based in the us-east-1 region, all ACM certificates used with it must also be created and stored there. To complete the setup, you need to configure a CNAME or alias record in Route 53.



Regional endpoints are designed for clients within the same region as your API Gateway. Since there is no CloudFront distribution involved, the TLS certificate must be imported directly into API Gateway within the same region as the API stage. In this example, ACM is hosted in ap-southeast-2. Finally, a CNAME or alias record is configured in Route 53 to map the domain to your API Gateway.





AWS Web Application Firewall (WAF) – Overview

AWS Web Application Firewall (WAF) is designed to protect web applications from common HTTP-based attacks at Layer 7 of the OSI model. Unlike Layer 4, which handles TCP and UDP traffic, WAF specifically protects against HTTP threats such as SQL injection and cross-site scripting (XSS).

Where Can You Deploy AWS WAF?

AWS WAF can be deployed on:

  • Application Load Balancer (ALB)
  • API Gateway
  • CloudFront (global distribution)
  • AWS AppSync (GraphQL API)
  • Amazon Cognito User Pools

Note: WAF cannot be deployed on a Network Load Balancer (NLB) since NLB operates at Layer 4 while WAF is designed for Layer 7 traffic.

Web ACLs & Rule-Based Protection

Once WAF is deployed, you define Web Access Control Lists (Web ACLs) to enforce security rules. These rules allow you to:

  • Filter traffic based on IP addresses (IP sets can store up to 10,000 IPs each)
  • Inspect HTTP headers, request body, and URI strings for known attack patterns
  • Set size constraints (e.g., limit requests to 2MB)
  • Block or allow traffic from specific geographic locations (geo-based filtering)
  • Implement rate-based rules to mitigate DDoS attacks (e.g., block IPs that exceed a request threshold within a short evaluation window); a minimal Web ACL sketch follows below

Regional vs. Global Scope:

  • Web ACLs are regional for most services.
  • For CloudFront, Web ACLs are global since CloudFront operates worldwide.

Additionally, Rule Groups allow you to create reusable sets of rules that can be applied across multiple Web ACLs for easier management.
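
To make this concrete, here is a minimal Web ACL sketch with a single rate-based rule, created through the WAFv2 API; the names, metric names, and rate limit are placeholders, and the limit is evaluated over a rolling time window rather than per second.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    wafv2.create_web_acl(
        Name="my-web-acl",        # placeholder
        Scope="REGIONAL",         # use "CLOUDFRONT" (in us-east-1) for CloudFront
        DefaultAction={"Allow": {}},
        VisibilityConfig={
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "myWebAcl",
        },
        Rules=[
            {
                "Name": "rate-limit-per-ip",
                "Priority": 1,
                # Block any source IP exceeding the limit within the evaluation window.
                "Statement": {"RateBasedStatement": {"Limit": 1000,
                                                     "AggregateKeyType": "IP"}},
                "Action": {"Block": {}},
                "VisibilityConfig": {
                    "SampledRequestsEnabled": True,
                    "CloudWatchMetricsEnabled": True,
                    "MetricName": "rateLimitPerIp",
                },
            }
        ],
    )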

Using AWS WAF with a Fixed IP Application

One common challenge is obtaining a fixed IP address for applications while using WAF with an Application Load Balancer (ALB). Since ALB does not provide fixed IPs, the solution is to use AWS Global Accelerator in front of ALB.

Architecture for Fixed IP with WAF & ALB

  1. Deploy ALB and EC2 instances in a region.
  2. Use AWS Global Accelerator to assign fixed IP addresses to the application.
  3. Attach AWS WAF to the ALB, securing HTTP requests.
  4. Apply Web ACLs in the same region as the ALB for protection.

By integrating Global Accelerator with WAF on ALB, you achieve a fixed IP for your application while maintaining advanced security.





AWS Shield – Overview

AWS Shield is a service designed to protect your infrastructure from DDoS (Distributed Denial of Service) attacks.

What is DDoS? DDoS attacks occur when a network or application is flooded with a large number of requests from multiple computers across the globe. The goal is to overwhelm and overload the target infrastructure, making it unable to serve legitimate users, resulting in a "denial of service."

AWS Shield Protection Options

  1. AWS Shield Standard – Free and automatic for all AWS customers. Protects against common attacks like SYN floods, UDP floods, reflection attacks, and other Layer 3 or Layer 4 threats.
  2. AWS Shield Advanced – Paid service (approximately $3,000/month per organization). Provides advanced DDoS mitigation for services like Amazon EC2, Elastic Load Balancing, Amazon CloudFront, Global Accelerator, and Route 53. Includes 24/7 access to the AWS DDoS Response Team for assistance during attacks. Also offers cost protection: if you're charged extra due to scaling during an attack, Shield Advanced helps you offset these additional fees.

Additional Features of AWS Shield Advanced

  • Automatic Layer 7 DDoS Mitigation: Shield Advanced includes built-in protection for application-layer attacks (Layer 7). It automatically generates, evaluates, and deploys WAF rules to help mitigate these attacks without manual intervention. This ensures your Web Application Firewall (WAF) is always ready to defend against new threats.




AWS Firewall Manager – Overview

Now let’s discuss AWS Firewall Manager, a service designed to help you manage firewall rules across all accounts within an AWS organization. With this service, you can easily apply security policies and manage rules on a large scale, making it much easier to maintain consistent security across multiple accounts.

Key Features of AWS Firewall Manager

  • Centralized Rule Management: AWS Firewall Manager enables you to create and enforce security policies that define common sets of firewall rules, including WAF rules (for Application Load Balancers, API Gateway, CloudFront, etc.); Shield Advanced protections (for ALBs, Classic Load Balancers, Network Load Balancers, Elastic IPs, CloudFront, etc.); security group policies that standardize security groups for EC2 instances, ALBs, and Elastic Network Interfaces (ENIs) within your VPC; AWS Network Firewall rules at the VPC level; and Route 53 Resolver DNS Firewall settings.
  • Policy Application: Policies are defined at the region level and are applied across all accounts within your organization. This means you can enforce a consistent set of rules throughout your entire AWS environment.
  • Automatic Rule Application: If a new resource, like an ALB, is created, Firewall Manager automatically applies the relevant rules to the new resource. This ensures your new resources are immediately protected according to your security policies.

WAF, Shield, and Firewall Manager – How They Work Together

You might wonder how WAF, Shield, and Firewall Manager interact. These services complement each other to provide comprehensive protection:

  • WAF: You define Web ACL rules in WAF for one-time protection or when you want to protect a specific resource.
  • Firewall Manager: If you need to manage WAF rules across multiple accounts, automate the application of security rules, and ensure that new resources are automatically protected, Firewall Manager is the solution. It streamlines the process by applying WAF rules to all your accounts and resources automatically.
  • Shield Advanced: Provides additional DDoS protection and features beyond WAF, such as dedicated support from the Shield Response Team (SRT), advanced reporting, and the ability to automatically create WAF rules for you. If you face frequent DDoS attacks, Shield Advanced is highly recommended.

Summary

In summary, Firewall Manager helps centralize and automate the application of security policies across multiple AWS accounts, ensuring that your resources stay protected. It works seamlessly with WAF and Shield Advanced to provide layered, comprehensive protection across all your accounts.




Amazon GuardDuty

Let’s dive into Amazon GuardDuty.

GuardDuty is a service designed to help you discover threats intelligently and protect your AWS accounts. It uses machine learning, anomaly detection, and third-party data to identify potential security issues. Enabling GuardDuty is simple—just a single click, and you get a 30-day trial with no need for any software installation.

GuardDuty analyzes various types of input data, including CloudTrail event logs, where it looks for unusual API calls or unauthorized deployments. It monitors both management and data events: on the management side, it tracks activities like creating VPC subnets, while on the S3 side, it watches for actions such as retrieving or deleting objects. It also reviews VPC flow logs for abnormal internet traffic and connections to suspicious IP addresses. For DNS logs, GuardDuty identifies any EC2 instances sending encoded data within DNS queries, which could indicate a compromise.

In addition, you can enable optional features to examine more data sources, including EKS audit logs, RDS and Aurora login events, Lambda, EBS, and S3 data events. To make it easier, you can set up EventBridge rules to automatically notify you if any findings are detected. These notifications can trigger various targets, such as Lambda functions or SNS topics.
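
Such a rule can be sketched as follows; the rule name and SNS topic ARN are placeholders.

    import boto3, json

    events = boto3.client("events", region_name="us-east-1")

    # Match every GuardDuty finding and forward it to an SNS topic (placeholder ARN).
    events.put_rule(
        Name="guardduty-findings",
        EventPattern=json.dumps({
            "source": ["aws.guardduty"],
            "detail-type": ["GuardDuty Finding"],
        }),
    )
    events.put_targets(
        Rule="guardduty-findings",
        Targets=[{"Id": "notify-security-team",
                  "Arn": "arn:aws:sns:us-east-1:111111111111:security-alerts"}],
    )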

A key feature of GuardDuty is its ability to detect cryptocurrency attacks, and it has specific findings dedicated to identifying such threats.

To recap, GuardDuty works with various data sources, including VPC flow logs, CloudTrail logs, and DNS logs, which are always part of its analysis. You can also enable additional features like EBS, Lambda, RDS, Aurora, and EKS logs. Whenever GuardDuty detects a finding, it generates an event in Amazon EventBridge. From there, you can automate responses or send alerts via Lambda or SNS, depending on your configured rules.



Amazon Inspector

Let's discuss Amazon Inspector.

Amazon Inspector is a service designed to automate security assessments for a variety of AWS resources. It performs security checks on EC2 instances, container images, and Lambda functions.

For EC2 instances, Amazon Inspector works with the Systems Manager agent installed on the instance to continuously assess its security. It checks for vulnerabilities by evaluating network accessibility and the operating system for known issues.

When it comes to container images, Amazon Inspector scans Docker images pushed to Amazon ECR. It analyzes these images for any known vulnerabilities, ensuring your containers are secure.

For Lambda functions, Amazon Inspector performs assessments on the function code and its dependencies as they're deployed. It checks for any software vulnerabilities present in the deployed code and package dependencies.

Once the assessments are complete, Amazon Inspector sends its findings to AWS Security Hub and also creates events in Amazon EventBridge. This integration allows you to centrally track vulnerabilities and, with EventBridge, automate responses or workflows based on these findings.

Amazon Inspector evaluates the security of running EC2 instances, container images in ECR, and Lambda functions. It continuously scans for vulnerabilities based on CVE (Common Vulnerabilities and Exposures) data and checks network reachability for EC2 instances. If the CVE database is updated, Amazon Inspector automatically reruns assessments to ensure your infrastructure stays up-to-date. Each run results in a risk score for the identified vulnerabilities, helping you prioritize fixes.



Amazon Macie

Let's now discuss Macie.

Macie is a fully managed service for data security and privacy that leverages machine learning and pattern matching to discover and protect sensitive information within AWS. Its primary function is to identify and alert you about sensitive data, specifically personally identifiable information (PII).

Essentially, Macie will scan your S3 buckets to detect PII, such as names, addresses, and credit card numbers. Once it identifies this sensitive data, it sends notifications through EventBridge, which you can integrate with SNS topics, Lambda functions, and more.

The process is straightforward—simply enable Macie with one click, specify the S3 buckets you want it to analyze, and Macie takes care of the rest. Its sole purpose is to discover sensitive data within your S3 buckets.
