2025 - Week 10 (3 Mar - 9 Mar)

Amazon Athena Provisioned Capacity now available in the Asia Pacific (Mumbai) Region

Published Date: 2025-03-07 21:50:00

Amazon Athena Provisioned Capacity is now available in the Asia Pacific (Mumbai) Region. Provisioned Capacity allows you to run SQL queries on dedicated serverless resources for a fixed price, with no long-term commitment, and control workload performance characteristics such as query concurrency and cost. Athena is a serverless, interactive query service that makes it possible to analyze petabyte-scale data with ease and flexibility. Provisioned Capacity provides workload management capabilities that help you prioritize, isolate, and scale your workloads. For example, use Provisioned Capacity when you need to run a high number of queries at the same time or isolate important queries from other queries that run in the same account. To get started, use the Athena console, AWS SDK, or CLI to request capacity and then select workgroups with queries you want to run on dedicated capacity. For more information on AWS Regions where Provisioned Capacity is available, see Manage query processing capacity. To learn more, visit Manage query processing capacity in the Amazon Athena User Guide and the Athena pricing page.
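
A minimal boto3 sketch of the two steps described above, using the Mumbai Region; the reservation name, DPU count, and workgroup name are illustrative placeholders:

```python
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # Asia Pacific (Mumbai)

# Request dedicated capacity (24 DPUs is the minimum reservation size).
athena.create_capacity_reservation(
    Name="reporting-capacity",
    TargetDpus=24,
)

# Route queries from a workgroup onto the reserved capacity.
athena.put_capacity_assignment_configuration(
    CapacityReservationName="reporting-capacity",
    CapacityAssignments=[
        {"WorkGroupNames": ["reporting-workgroup"]},
    ],
)
```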

Amazon WorkSpaces Pools now supports FIPS 140-2 validated endpoints

Published Date: 2025-03-07 18:00:00

Amazon WorkSpaces Pools now offers Federal Information Processing Standard 140-2 (FIPS) validated endpoints (FIPS endpoints) for user streaming sessions. FIPS 140-2 is a U.S. government standard that specifies the security requirements for cryptographic modules that protect sensitive information. WorkSpaces Pools FIPS endpoints use FIPS-validated cryptographic standards, which may be required for certain sensitive information or regulated workloads. To enable FIPS endpoint encryption for end user streaming via AWS Console, navigate to Directories, and verify that the Pools directory where you want to add FIPS is in a STOPPED state, and that the preferred protocol is set to TCP. Once verified, select the directory and on the Directory Details page update the endpoint encryption to FIPS 140-2 Validated Mode and save. FIPS support is available for WorkSpaces Pools in 4 AWS regions: AWS GovCloud (US-East); AWS GovCloud (US-West); US East (N. Virginia); and US West (Oregon). For more information about using FIPS endpoints in WorkSpaces Pools, see Configure FedRAMP authorization or DoD SRG validated for WorkSpaces Pools. For more information about how AWS supports FIPS, including a list of WorkSpaces Pools endpoints, see Federal Information Processing Standard (FIPS) 140-2.

Amazon OpenSearch Serverless now available in AWS Europe (Spain) Region

Published Date: 2025-03-07 18:00:00

We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the AWS Europe (Spain) Region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). To control costs, customers can configure the maximum number of OCUs per account. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
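
As a rough sketch of the cost-control knob mentioned above, assuming boto3 and illustrative OCU limits, the account-level capacity settings can be adjusted like this:

```python
import boto3

aoss = boto3.client("opensearchserverless", region_name="eu-south-2")  # Europe (Spain)

# Cap the maximum indexing and search capacity (in OCUs) for this account.
aoss.update_account_settings(
    capacityLimits={
        "maxIndexingCapacityInOCU": 4,
        "maxSearchCapacityInOCU": 4,
    }
)
```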

Application Load Balancer announces integration with Amazon VPC IPAM

Published Date: 2025-03-07 18:00:00

Application Load Balancer (ALB) now allows customers to provide a pool of public IPv4 addresses for IP address assignment to load balancer nodes. Customers can configure a public Amazon VPC IP Address Manager (IPAM) pool that consists of either customer-owned Bring Your Own IP addresses (BYOIP) or a contiguous IPv4 address block provided by Amazon. With this feature, customers can optimize public IPv4 cost by using BYOIP in public IPAM pools. Customers can also simplify their enterprise allowlisting and operations by using Amazon-provided contiguous IPv4 blocks in public IPAM pools. The ALB's IP addresses are sourced from the IPAM pool and automatically switch to AWS managed IP addresses when the public IPAM pool is depleted. This intelligent switching maximizes service availability during scaling events. The feature is available in all commercial AWS Regions and AWS GovCloud (US) Regions where Amazon VPC IP Address Manager (IPAM) is available. To learn more, please refer to the ALB Documentation.
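
A hedged boto3 sketch of what provisioning an ALB from a public IPAM pool could look like; the IpamPools parameter shape, subnet IDs, and pool ID are assumptions and placeholders to be checked against the current ELBv2 API reference:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing ALB whose public IPv4 addresses are drawn from a
# public IPAM pool (pool ID and subnet IDs are placeholders).
elbv2.create_load_balancer(
    Name="public-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    IpamPools={"Ipv4IpamPoolId": "ipam-pool-0123456789abcdef0"},  # assumed parameter shape
)
```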

Amazon Redshift Data API now supports single sign-on (SSO) with AWS IAM Identity Center

Published Date: 2025-03-07 18:00:00

Amazon Redshift Data API, which lets you connect to Amazon Redshift through a secure HTTPS endpoint, now supports single sign-on (SSO) through AWS IAM Identity Center. Amazon Redshift Data API removes the need to manage database drivers, connections, network configurations, and data buffering, simplifying how you access your data warehouses and data lakes. AWS IAM Identity Center lets customers connect existing identity providers from a centrally managed location. You can now use AWS IAM Identity Center with your preferred identity provider, including Microsoft Entra ID, Okta, and Ping, to connect to Amazon Redshift clusters through Amazon Redshift Data API. This new SSO integration simplifies identity management, so that you don’t have to manage separate database credentials for your Amazon Redshift clusters. Once authenticated, your authorization rules are enforced using the permissions defined in Amazon Redshift or AWS Lake Formation. You can get started by integrating your Amazon Redshift cluster or workgroup with AWS IAM Identity Center (IdC), and then allow Amazon Redshift to access AWS services programmatically using trusted identity propagation. This feature is available in all AWS Regions where both AWS IAM Identity Center and Amazon Redshift are available. For more information, see our documentation and blog.
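
Once a cluster or workgroup is Identity Center-integrated, Data API calls made with identity-enhanced session credentials need no database password; a minimal sketch with placeholder names:

```python
import time
import boto3

rsd = boto3.client("redshift-data")

# With trusted identity propagation, the caller's Identity Center identity travels
# with the session credentials; no DbUser or SecretArn is passed here.
resp = rsd.execute_statement(
    WorkgroupName="analytics-wg",   # or ClusterIdentifier="my-cluster"
    Database="dev",
    Sql="SELECT current_user;",
)

# Poll until the statement completes, then fetch the result set.
while rsd.describe_statement(Id=resp["Id"])["Status"] not in ("FINISHED", "FAILED", "ABORTED"):
    time.sleep(1)
print(rsd.get_statement_result(Id=resp["Id"])["Records"])
```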

AWS HealthOmics workflows now support NVIDIA L4 and L40S GPUs and expanded CPU options

Published Date: 2025-03-07 18:00:00

AWS HealthOmics now supports the latest NVIDIA L4 and L40S graphics processing units (GPUs) and larger compute options of up to 192 vCPUs for workflows. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows. This release expands workflow compute capabilities to support more demanding workloads for genomics research and analysis. In addition to current support for NVIDIA A10G and T4 GPUs, this release adds support for NVIDIA L4 and L40S GPUs, which enables researchers to efficiently run complex machine learning workloads such as protein structure prediction and biological foundation models (bioFMs). The enhanced CPU configurations with up to 192 vCPUs and 1,536 GiB of memory allow for faster processing of large-scale genomics datasets. These improvements help research teams reduce time-to-insight for critical life sciences work. NVIDIA L4 and L40S GPUs and 128 and 192 vCPU omics instance types are now available in US East (N. Virginia) and US West (Oregon). To get started with AWS HealthOmics workflows, see the documentation.

GraphRAG in Amazon Bedrock Knowledge Bases is now generally available

Published Date: 2025-03-07 18:00:00

Today, AWS announces the general availability of GraphRAG, a capability in Amazon Bedrock Knowledge Bases that enhances Retrieval-Augmented Generation (RAG) by incorporating graph data. GraphRAG delivers more comprehensive, relevant, and explainable responses by leveraging relationships within your data, improving how Generative AI applications retrieve and synthesize information. Since public preview, customers have leveraged the managed GraphRAG capability to get improved responses to queries from their end users. GraphRAG automatically generates and stores vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships. GraphRAG combines vector similarity search with graph traversal, enabling higher accuracy when retrieving information from disparate yet interconnected data sources. GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is generally available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.
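
At query time, a GraphRAG-backed knowledge base is used like any other; a minimal retrieval sketch with a placeholder knowledge base ID:

```python
import boto3

agent_rt = boto3.client("bedrock-agent-runtime")

# Retrieve graph-augmented passages from a knowledge base backed by Neptune Analytics.
resp = agent_rt.retrieve(
    knowledgeBaseId="KBEXAMPLE01",  # placeholder knowledge base ID
    retrievalQuery={"text": "How are our suppliers connected to the recalled product line?"},
)
for result in resp["retrievalResults"]:
    print(result["content"]["text"][:200])
```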

Amazon Connect Contact Lens can now dynamically update the questions on an evaluation form

Published Date: 2025-03-07 18:00:00

Contact Lens now enables you to create dynamic evaluation forms that automatically show or hide questions based on responses to previous questions, tailoring each evaluation to specific customer interaction scenarios. For example, when a manager answers “Yes” to the form question "Did the customer try to make a purchase on the call?", the form automatically presents a follow-up question: "Did the agent read the sales disclosure?". With this launch, you can consolidate evaluation forms that are applicable to different interaction scenarios into a single dynamic evaluation form which automatically hides irrelevant questions. This reduces manager effort in selecting the relevant evaluation form and determining which evaluation questions are applicable to the interaction, helping managers perform evaluations faster and more accurately. This feature is available in all regions where Contact Lens performance evaluations are already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.

AWS WAF adds JA4 fingerprinting and aggregation on JA3 and JA4 fingerprints for rate-based rules

Published Date: 2025-03-06 20:26:00

AWS WAF now supports JA4 fingerprinting of incoming requests, enabling customers to allow known clients or block requests from malicious clients. Additionally, you can now use both JA4 and JA3 fingerprints as aggregation keys within WAF's rate-based rules, allowing you to monitor and control request rates based on client fingerprints. A JA4 TLS client fingerprint contains a 36-character fingerprint of the TLS Client Hello which is used to initiate a secure connection from clients. The fingerprint can be used to build a database of known good and bad actors to apply when inspecting HTTP requests. These new features enhance your ability to identify and mitigate sophisticated attacks by creating more precise rules based on client behavior patterns. By leveraging both JA4 and JA3 fingerprinting capabilities, you can implement robust protection against automated threats while maintaining legitimate traffic flow to your applications. JA4 as a match statement is available in all regions where AWS WAF is available for Amazon CloudFront and Application Load Balancer (ALB). JA3 and JA4 aggregation keys are available in all regions, except the AWS GovCloud (US) Regions, the China Regions, Asia Pacific (Melbourne), Israel (Tel Aviv) and Asia Pacific (Malaysia). There is no additional cost for using this feature; however, standard AWS WAF charges still apply. For more information about pricing, visit the AWS WAF Pricing page.
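
A hedged sketch of a rate-based rule that aggregates on the JA4 fingerprint, expressed as the rule definition you would include in a WAFv2 web ACL; the CustomKeys/JA4Fingerprint key names follow the announcement and should be verified against the AWS WAF API reference:

```python
# Rate-based rule counting requests per JA4 fingerprint (limit per 5-minute window).
ja4_rate_rule = {
    "Name": "rate-limit-per-ja4",
    "Priority": 10,
    "Statement": {
        "RateBasedStatement": {
            "Limit": 2000,
            "AggregateKeyType": "CUSTOM_KEYS",
            "CustomKeys": [
                {"JA4Fingerprint": {"FallbackBehavior": "NO_MATCH"}},  # assumed key name
            ],
        }
    },
    "Action": {"Block": {}},
    "VisibilityConfig": {
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "RateLimitPerJA4",
    },
}
```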

Announcing MQTT enabled SiteWise Edge gateways for AWS IoT SiteWise

Published Date: 2025-03-06 18:00:00

Today, AWS announces the general availability of MQTT enabled SiteWise Edge gateways for AWS IoT SiteWise. AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize, and analyze data from industrial equipment at scale. With this launch, newly created gateways now include an MQTTv5 broker component that centralizes connectivity between SiteWise Edge and customer-built edge components. Now you can integrate communications between your own edge components and AWS IoT SiteWise Edge using the MQTT protocol in a publish and subscribe topology. This eliminates building point-to-point connections between edge components, simplifying the integration of custom logic for edge data flows. You can build components at the edge for data contextualization. You can use your components to enrich equipment telemetry data with data from operations systems (MES, ERP, etc.) required in calculating key performance indicators (KPIs) such as Overall Equipment Effectiveness (OEE), uptime, and progress against production targets. Through AWS IoT SiteWise Edge, you have native integration of this data for storage and additional use cases in the AWS cloud. You can use your Unified Name Space (UNS), an industrial data normalization and organization pattern, at the edge and extend it with AWS cloud services. The new gateways securely transmit the equipment data streams of your choice to AWS IoT SiteWise, using existing organization, storage, and analytics features of the service with robust store and forward capabilities of SiteWise Edge. This feature is available in all AWS IoT SiteWise commercial regions. To learn more, please see our documentation, blog post, and example.

Announcing AWS Step Functions Workflow Studio for the VS Code IDE

Published Date: 2025-03-06 18:00:00

AWS Step Functions Workflow Studio is now available in the AWS Toolkit for Visual Studio Code, enabling you to visually create, edit, and debug state machine workflows directly in your local development environment. AWS Step Functions is a visual workflow service capable of orchestrating more than 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. Workflow Studio is a visual builder that allows you to compose workflows on a canvas, while generating workflow definitions in the background. Workflow Studio for VS Code brings the console experience to the IDE, making it easier to create workflows in your local development environment. The new IDE experience works with infrastructure as code tools and enables you to debug your workflow steps using the TestState API directly within the IDE. To get started, download the AWS Toolkit for VS Code, or update to the latest version. The AWS Toolkits are open source projects and you can submit issues or feature requests to open source GitHub repos for the Toolkit for VS Code. To learn more, please visit our documentation or read the launch blog.
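
The TestState API used by the IDE integration can also be called directly; a minimal sketch that tests a single Pass state, with a placeholder execution role:

```python
import json
import boto3

sfn = boto3.client("stepfunctions")

# Test one state definition without deploying a state machine.
resp = sfn.test_state(
    definition=json.dumps({"Type": "Pass", "Result": {"ok": True}, "End": True}),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsTestRole",  # placeholder role
    input=json.dumps({"orderId": "42"}),
)
print(resp["status"], resp["output"])
```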

Sharing of Connections is now available in AWS CodeConnections

Published Date: 2025-03-06 18:00:00

AWS CodeConnections now allows you to securely share your Connection resource across individual AWS accounts or within your AWS Organization. Previously, to create a Connection, you installed the AWS connector app for GitHub, GitLab, or Bitbucket for each AWS account from which source access was required. You can now use AWS Resource Access Manager (RAM) to securely share a Connection to your third-party source provider across AWS accounts. By using AWS RAM to share your Connection resource, you no longer need to create a Connection in each AWS account. Instead, you can create a Connection in an AWS account, and then share the Connection across multiple AWS accounts. By using AWS RAM, you can also automate sharing the connection across AWS accounts, reducing the operational overhead to support a multi-account deployment strategy. To apply fine-grained access control, in the AWS account with which a Connection is shared, you can use IAM policies to manage what operations an IAM role can perform. To learn more about sharing connections, visit our documentation. To learn more about what Connections in AWS CodeConnections are and how they work, visit our documentation.
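
A minimal boto3 sketch of sharing a connection through AWS RAM; the connection ARN and target account ID are placeholders:

```python
import boto3

ram = boto3.client("ram")

# Share an existing CodeConnections connection with another AWS account.
ram.create_resource_share(
    name="shared-github-connection",
    resourceArns=[
        "arn:aws:codeconnections:us-east-1:111111111111:connection/example-connection-id"
    ],
    principals=["222222222222"],   # the consuming AWS account
    allowExternalPrincipals=False,
)
```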

Amazon EC2 M7a instances are now available in AWS Asia Pacific (Sydney) Region

Published Date: 2025-03-06 18:00:00

Starting today, the general-purpose Amazon EC2 M7a instances are now available in AWS Asia Pacific (Sydney) Region. M7a instances, powered by 4th Gen AMD EPYC processors (code-named Genoa) with a maximum frequency of 3.7 GHz, deliver up to 50% higher performance compared to M6a instances. With this additional region, M7a instances are available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney, Tokyo), and Europe (Frankfurt, Ireland, Spain, Stockholm). These instances can be purchased as Savings Plans, Reserved, On-Demand, and Spot instances. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the M7a instances page.
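
For illustration, launching an M7a instance in the newly supported Region with boto3; the AMI ID is a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # Asia Pacific (Sydney)

# Launch a single general-purpose m7a.large instance.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="m7a.large",
    MinCount=1,
    MaxCount=1,
)
```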

Amazon Q Developer announces a new CLI agent within the command line

Published Date: 2025-03-06 18:00:00

Today, Amazon Q Developer announced an enhanced CLI agent within the Amazon Q command line interface (CLI) that allows you to have more dynamic conversations. With this update, Amazon Q Developer can now use the information in your CLI environment to help you read and write files locally, query AWS resources, or create code. You can now ask Q Developer to write code, test it, help debug issues, and Q Developer will iteratively make adjustments based on your feedback and approval. This allows you to efficiently complete tasks, improving and streamlining the development process, without needing to leave your terminal. The enhanced CLI agent, powered by Anthropic's most intelligent model to date, Claude 3.7 Sonnet, is available on the Amazon Q Developer Free and Pro tiers and in all AWS Regions where Q Developer is available. Learn more.

Amazon OpenSearch Serverless now available in AWS US West (N. California) and Europe (Stockholm) Regions

Published Date: 2025-03-06 18:00:00

We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the AWS US West (N. California) and Europe (Stockholm) Regions. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). To control costs, customers can configure the maximum number of OCUs per account. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Amazon OpenSearch Serverless now available in AWS Europe (Milan) Region

Published Date: 2025-03-06 18:00:00

We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the AWS Europe (Milan) Region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). To control costs, customers can configure the maximum number of OCUs per account. Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Announcing Amazon GameLift Streams

Published Date: 2025-03-06 14:00:00

Amazon GameLift Streams is a new managed capability that allows developers to stream games at up to 1080p resolution and 60 frames per second to any device with a WebRTC-enabled browser. In just a few clicks, you can upload games built with a variety of 3D engines with little to no modification, provision streaming capacity in specific AWS Regions, and immediately start test streaming. Players can start playing AAA, AA, and Indie games over the internet in just a few seconds on their PCs, phones, tablets, and smart TVs without waiting hours for a download.

With Amazon GameLift Streams, you can create new direct-to-player distribution channels, launch instant-play game demos, conduct secure playtesting, and expand monetization opportunities. With support for Windows, Linux, and Proton runtimes, Amazon GameLift Streams helps you avoid the expense and complexity of modifying and rebuilding game code for streaming. You can flexibly scale streaming up or down based on player demand, and only provision and pay for the capacity you need. You can choose from six AWS Regions to deliver low-latency game play closer to players around the world. This new capability opens opportunities for you to expand the reach, engagement, and sales of your games while maintaining full control over the player relationship, experience, branding, and business model.

Amazon GameLift Streams is a new capability of Amazon GameLift, a fully managed service on AWS empowering developers to build and deliver the world’s most demanding games. The new capability is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Europe (Frankfurt), and Europe (Ireland).

To learn more, visit the Amazon GameLift Streams website, read the Developer Guide, or explore the AWS News Blog post.

Amazon Connect, Amazon WorkSpaces, and Amazon AppStream 2.0 are now Chrome Enterprise Recommended

Published Date: 2025-03-05 22:40:00

Amazon Connect, Amazon WorkSpaces, and Amazon AppStream 2.0 have earned Chrome Enterprise Recommended (CER) certification. This designation validates that these services are fully optimized for ChromeOS, ChromeOS Flex, and Chrome browser environments, ensuring seamless integration and performance for businesses using Chrome devices. These Chrome-optimized services deliver significant advantages for organizations with users gaining browser-based access to contact center capabilities through Amazon Connect, Windows or Linux virtual desktops through Amazon WorkSpaces, and streaming applications without refactoring with Amazon AppStream 2.0. Customers can take advantage of security features built into ChromeOS, while enabling cost savings through efficient scaling and elimination of traditional infrastructure. Organizations can get more out of their hardware investments by installing ChromeOS Flex on aging devices, repurposing them to run Windows 11 on Amazon WorkSpaces and bringing existing Microsoft 365 Apps for enterprise licenses to run on WorkSpaces. Amazon Connect, Amazon WorkSpaces, and Amazon AppStream 2.0 are available in multiple AWS Regions worldwide, allowing organizations to deploy these services closer to their end-users for optimal performance and compliance with data residency requirements. To learn more about leveraging these Chrome Enterprise Recommended services, visit Amazon Connect, Amazon WorkSpaces, or Amazon AppStream 2.0. Contact your AWS account team to discuss your specific requirements and discover how these CER-certified solutions can transform your ChromeOS deployment.

Bottlerocket now supports NVIDIA Multi-Instance GPU (MIG) for Kubernetes workloads

Published Date: 2025-03-05 21:30:00

Today, AWS has announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now supports NVIDIA's Multi-Instance GPU (MIG) feature, enabling customers to partition NVIDIA GPUs into multiple GPU instances on Kubernetes nodes. This capability allows system administrators to maximize GPU resource utilization by running multiple workloads simultaneously on a single GPU while maintaining hardware-level isolation between workloads. With MIG support, customers can optimize GPU resource allocation for workloads that don't fully utilize the GPU's compute capacity, such as machine learning inference tasks. Each GPU partition operates with complete hardware-level memory and fault isolation, providing workload separation and reliable performance. NVIDIA Multi-Instance GPU support in Bottlerocket is available in all commercial and AWS GovCloud (US) Regions where compatible NVIDIA GPU-enabled instances are offered. To learn more about MIG with the Bottlerocket NVIDIA variants, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Amazon EKS now envelope encrypts all Kubernetes API data by default

Published Date: 2025-03-05 21:20:00

Starting today, Amazon Elastic Kubernetes Service (EKS) enables default envelope encryption for all Kubernetes API data in EKS clusters running Kubernetes version 1.28 or higher. This provides a managed, default experience that implements defense-in-depth for your Kubernetes applications. Using AWS Key Management Service (KMS) with Kubernetes KMS provider v2, EKS now provides an additional layer of security with an AWS owned, KMS encryption key or the option of bringing your own key. Previously, Amazon EKS provided optional envelope encryption with Kubernetes KMS provider v1. Now this is a default configuration for all objects in the Kubernetes API. By default, AWS owns the keys used for envelope encryption. You can alternatively create or import externally generated keys to AWS KMS for use in your cluster’s managed Kubernetes control plane. If you have an existing customer managed key (CMK) in KMS that was previously used to envelope encrypt your Kubernetes Secrets, this same key will now be used for envelope encryption of the additional Kubernetes API data types in your cluster. Default envelope encryption in Amazon EKS is automatically enabled for all EKS clusters running Kubernetes version 1.28 or higher, and doesn’t require any action from customers. This feature is available at no additional charge in all commercial AWS Regions and the AWS GovCloud (US) Regions. To learn more, visit the Amazon EKS documentation.
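
Default envelope encryption with an AWS owned key needs no configuration; to bring your own KMS key instead, pass an encryptionConfig at cluster creation, as in this sketch with placeholder ARNs and subnets:

```python
import boto3

eks = boto3.client("eks")

# Create a cluster whose Kubernetes API data is envelope encrypted with a customer managed key.
eks.create_cluster(
    name="prod-cluster",
    version="1.31",
    roleArn="arn:aws:iam::123456789012:role/EksClusterRole",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
    encryptionConfig=[
        {
            "resources": ["secrets"],
            "provider": {"keyArn": "arn:aws:kms:us-east-1:123456789012:key/example-key-id"},
        }
    ],
)
```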

Bottlerocket now supports AWS Neuron accelerated instance types

Published Date: 2025-03-05 21:00:00

Today, AWS announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now supports AWS Neuron-powered instances with its Amazon Elastic Kubernetes Service (EKS) and Amazon Elastic Container Service (ECS) AMIs. Customers using Bottlerocket AMIs can now deploy and manage machine learning inference and training workloads on AWS Neuron accelerated instance types, including Inf1, Inf2, Trn1, and Trn2. EKS customers can use these Bottlerocket AMIs with Karpenter version 1.2.2 and above. This integration enables automated device management and scheduling capabilities while maintaining Bottlerocket's focus on security and operational simplicity. Customers can leverage the standard Bottlerocket AMIs to deploy workloads on Neuron instances, with the ability to configure device ownership and resource allocation through familiar container orchestration interfaces. Bottlerocket support for AWS Neuron-powered instances is available in all AWS Regions where Inf1, Inf2, Trn1, and Trn2 instances are offered. To get started, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Amazon Connect can now target multiple agent proficiencies in a single routing step

Published Date: 2025-03-05 20:55:00

Amazon Connect now offers the ability to target up to 4 different combinations of agent proficiencies per routing step. By using up to 3 OR conditions, routing will try to match a contact with 4 different types of agents and increase the possibility of finding a suitable match. For example, if the backup for a niche banking skill consists of agents trained on account management, registration, and tax, then after an initial search for balance transfer agents you can attempt a match across all four types of agents at the same time. This feature is available in all AWS regions where Amazon Connect is offered. To learn more about routing criteria, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.
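
A heavily hedged sketch of what a routing step with OR conditions might look like when set programmatically; the RoutingCriteria field names, comparison operator, and proficiency values are assumptions to confirm against the Amazon Connect API reference, and all IDs are placeholders:

```python
import boto3

connect = boto3.client("connect")

# Hypothetical step targeting four proficiency combinations at once
# (the primary condition plus three OR alternatives).
connect.update_contact_routing_data(
    InstanceId="11111111-2222-3333-4444-555555555555",   # placeholder
    ContactId="66666666-7777-8888-9999-000000000000",    # placeholder
    RoutingCriteria={
        "Steps": [
            {
                "Expression": {
                    "OrExpression": [
                        {"AttributeCondition": {"Name": "BalanceTransfers", "Value": "true", "ProficiencyLevel": 3.0, "ComparisonOperator": "NumberGreaterOrEqualTo"}},
                        {"AttributeCondition": {"Name": "AccountManagement", "Value": "true", "ProficiencyLevel": 2.0, "ComparisonOperator": "NumberGreaterOrEqualTo"}},
                        {"AttributeCondition": {"Name": "Registration", "Value": "true", "ProficiencyLevel": 2.0, "ComparisonOperator": "NumberGreaterOrEqualTo"}},
                        {"AttributeCondition": {"Name": "Tax", "Value": "true", "ProficiencyLevel": 2.0, "ComparisonOperator": "NumberGreaterOrEqualTo"}},
                    ]
                }
            }
        ]
    },
)
```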

Amazon EC2 M8g instances now available in AWS Europe (Ireland)

Published Date: 2025-03-05 20:15:00

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in AWS Europe (Ireland) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 M8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Amazon FSx for NetApp ONTAP no longer charges for SnapLock licensing

Published Date: 2025-03-05 18:35:00

Starting March 5, 2025, Amazon FSx for NetApp ONTAP eliminates SnapLock licensing fees for data stored in SnapLock volumes, making it more cost-effective for customers to protect their business-critical data from ransomware, unauthorized deletions, and malicious modifications. SnapLock is an ONTAP feature that offers Write Once, Read Many (WORM) protection to prevent alteration or deletion of data for specified retention periods, enabling customers to meet regulatory compliance and improve data protection. After this billing change, volumes with SnapLock enabled will no longer incur licensing charges. This license removal requires no changes to customer applications and takes effect automatically for both new and existing SnapLock volumes. The removal of SnapLock licensing fees applies to all FSx for ONTAP file systems across all AWS Regions where they are available. To learn more, visit the product page and SnapLock in the user guide.
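
Creating a SnapLock volume is unchanged by the billing update; a hedged sketch with a placeholder SVM ID and illustrative retention settings:

```python
import boto3

fsx = boto3.client("fsx")

# Create an ONTAP volume with SnapLock Compliance (WORM) enabled.
fsx.create_volume(
    VolumeType="ONTAP",
    Name="compliance_vol",
    OntapConfiguration={
        "StorageVirtualMachineId": "svm-0123456789abcdef0",  # placeholder SVM
        "SizeInMegabytes": 102400,
        "JunctionPath": "/compliance_vol",
        "SnaplockConfiguration": {
            "SnaplockType": "COMPLIANCE",
            "RetentionPeriod": {
                "DefaultRetention": {"Type": "YEARS", "Value": 7},
                "MinimumRetention": {"Type": "DAYS", "Value": 30},
                "MaximumRetention": {"Type": "YEARS", "Value": 10},
            },
        },
    },
)
```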

IAM Access Analyzer now supports Internet Protocol Version 6 (IPv6)

Published Date: 2025-03-05 18:00:00

AWS Identity and Access Management (IAM) Access Analyzer now supports Internet Protocol version 6 (IPv6) addresses via our new dual-stack endpoints. The existing IAM Access Analyzer endpoints supporting IPv4 will remain available for backwards compatibility. The new dual-stack domains are available either from the internet or from within an Amazon Virtual Private Cloud (VPC) using AWS PrivateLink. To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. Support for IPv6 on IAM Access Analyzer is available in the AWS Commercial Regions, the AWS GovCloud (US) Regions, and the China Regions. To get started with using IAM Access Analyzer to continuously monitor access to your resources and remove unused permissions, visit our documentation.
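
With recent SDK versions, the dual-stack endpoint can be requested through client configuration rather than a hard-coded URL; a sketch assuming the standard use_dualstack_endpoint option applies to the new IAM Access Analyzer endpoints:

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve the dual-stack (IPv4 + IPv6) endpoint for the service.
aa = boto3.client(
    "accessanalyzer",
    region_name="us-east-1",
    config=Config(use_dualstack_endpoint=True),
)
print([a["name"] for a in aa.list_analyzers()["analyzers"]])
```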

AWS WAF is now available in two additional AWS regions

Published Date: 2025-03-05 18:00:00

Starting today, you can use AWS WAF in the AWS Asia Pacific (Thailand) and AWS Mexico (Central) Regions. AWS WAF is a web application firewall that helps you protect your web application resources against common web exploits and bots that can affect availability, compromise security, or consume excessive resources. To see the full list of regions where AWS WAF is currently available, visit the AWS Region Table. Please note that only core AWS WAF features like AWS Managed Rules and rules are currently available in these new regions. For more information about the service, visit the AWS WAF page. AWS WAF pricing may vary between regions. For more information about pricing, visit the AWS WAF Pricing page.

Bottlerocket simplifies system setup with default bootstrap container image

Published Date: 2025-03-05 18:00:00

Today, AWS has announced that Bottlerocket, the Linux-based operating system purpose-built for containers, now provides a default bootstrap container image that simplifies system setup tasks, eliminating the need for most customers to maintain their own container images for initial configuration. Bootstrap containers are special-purpose containers that handle pre-startup operations such as directory creation, environment variable setup, and node-specific configurations before the main application containers start. This enhancement allows customers to focus on their startup scripts rather than container image maintenance and regional availability. Previously, customers needed to create, maintain, and update their own container images while managing separate image repositories for each AWS Region. By using Bottlerocket's default bootstrap container image, customers can specify their configuration tasks through simple user data, while the system automatically handles image updates. The default image is maintained by AWS, reducing operational overhead and improving system security. The simplified bootstrap container configuration in Bottlerocket is available in all commercial and AWS GovCloud (US) Regions. To learn more, see the Bottlerocket User Guide. You can also visit the Bottlerocket product page and explore the Bottlerocket GitHub repository for more information.

Announcing latency-optimized inference for Amazon Nova Pro foundation model in Amazon Bedrock

Published Date: 2025-03-05 18:00:00

Amazon Nova Pro foundation model now supports latency-optimized inference in preview on Amazon Bedrock, enabling faster response times and improved responsiveness for generative AI applications. Latency-optimized inference speeds up response times for latency-sensitive applications, improving the end-user experience and giving developers more flexibility to optimize performance for their use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times. Latency-optimized inference for Amazon Nova Pro is available via cross-region inference in the US West (Oregon), US East (N. Virginia), and US East (Ohio) Regions. Learn more about Amazon Nova foundation models at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. Learn more about latency-optimized inference on Bedrock in the documentation. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.
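
A minimal sketch of invoking Nova Pro with latency-optimized inference through the Converse API; the cross-region inference profile ID shown is the US profile, and the performanceConfig setting follows the Bedrock latency-optimization option:

```python
import boto3

bedrock_rt = boto3.client("bedrock-runtime", region_name="us-west-2")

# Request latency-optimized inference for a cross-region Nova Pro inference profile.
resp = bedrock_rt.converse(
    modelId="us.amazon.nova-pro-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize our Q4 results in two sentences."}]}],
    performanceConfig={"latency": "optimized"},
)
print(resp["output"]["message"]["content"][0]["text"])
```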

Amazon EC2 M7g instances are now available in the AWS Europe (Zurich)

Published Date: 2025-03-05 18:00:00

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7g instances are available in the AWS Europe (Zurich) region. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (EBS). To learn more, see Amazon EC2 M7g. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Amazon EC2 C8g instances now available in AWS Asia Pacific (Mumbai)

Published Date: 2025-03-05 18:00:00

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in AWS Asia Pacific (Mumbai) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads. AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). To learn more, see Amazon EC2 C8g Instances. To explore how to migrate your workloads to Graviton-based instances, see AWS Graviton Fast Start program and Porting Advisor for Graviton. To get started, see the AWS Management Console.

Amazon Q Business now supports insights from audio and video data

Published Date: 2025-03-04 22:00:00

Today, we are excited to announce that Amazon Q Business now supports the ingestion of audio and video data. This new feature enables Amazon Q customers to search through ingested audio and video content, allowing them to ask questions based on the information contained within these media files. This enhancement significantly expands the capabilities of Amazon Q Business, making it an even more powerful tool for organizations to access and utilize their multimedia content. Customers can unlock valuable insights from their audio and video resources. Users can now easily search for specific information within recorded meetings, training videos, podcasts, or any other audio or video content ingested into Amazon Q Business. This capability streamlines information retrieval, enhances knowledge sharing, and improves decision-making processes by making multimedia content as searchable and accessible as text-based documents. The audio and video ingestion feature uses Amazon Bedrock Data Automation to process customers’ multimodal assets. The feature for Amazon Q Business is available in the US East (N. Virginia) and US West (Oregon) AWS Regions. Customers can start using this feature in supported regions to enhance their organization's knowledge management and information discovery processes. To get started with ingesting audio and video data in Amazon Q Business, visit the Amazon Q console or refer to the documentation. For more information about Amazon Q Business and its features, please visit the Amazon Q product page.

SageMaker Hyperpod Flexible Training Plans now supports instant start times and multiple offers

Published Date: 2025-03-04 20:50:00

As of February 14, 2025, SageMaker Flexible Training Plans now supports instant start times, allowing customers to book a plan that starts as soon as 30 minutes after booking. Amazon SageMaker’s Flexible Training Plan (FTP) makes it easy for customers to access GPU capacity to run ML workloads. Customers who use Flexible Training Plans can plan their ML development cycles with confidence, knowing they’ll have the GPUs they need on a specific date for the amount of time they reserve. There are no long-term commitments, so customers get capacity assurance while only paying for the amount of GPU time necessary to complete their workloads.

With the ability to start a reservation within 30 minutes (subject to availability), Flexible Training Plan accelerates compute resource procurement for customers running machine learning workloads. The system first attempts to find a single, continuous block of reserved capacity that precisely matches a customer’s requirement. If a continuous block isn’t available, SageMaker automatically splits the total duration across two time segments and attempts to fulfill the request using two separate reserved capacity blocks. Additionally, with this release, Flexible Training Plan will return up to three distinct options, providing flexibility in compute resource procurement.

You can create a Training Plan using either the SageMaker AI console or programmatic methods. The SageMaker AI console offers a visual, graphical interface with a comprehensive view of your options, while programmatic creation can be done using the AWS CLI or SageMaker SDKs to interact directly with the training plans API. You can get started with the API experience here.
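
A hedged sketch of the programmatic path: search for offerings (up to three are returned), then reserve one; the instance type, count, duration, and plan name are illustrative, and parameter names should be checked against the SageMaker API reference:

```python
import boto3

sm = boto3.client("sagemaker")

# Find capacity offerings for a HyperPod cluster reservation (values are illustrative).
offerings = sm.search_training_plan_offerings(
    InstanceType="ml.p5.48xlarge",
    InstanceCount=4,
    DurationHours=48,
    TargetResources=["hyperpod-cluster"],
)["TrainingPlanOfferings"]

# Reserve the first returned offering.
sm.create_training_plan(
    TrainingPlanName="llm-pretraining-march",
    TrainingPlanOfferingId=offerings[0]["TrainingPlanOfferingId"],
)
```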

Amazon Lex launches support for Confirmation and Alphanumeric slot types for Korean

Published Date: 2025-03-04 20:00:00

Amazon Lex now supports Confirmation and Alphanumeric slot types in Korean (ko-KR) locale. These built-in slot types help developers build more natural and efficient conversational experiences in Korean language applications. The Confirmation slot type automatically resolves various Korean expressions into 'Yes', 'No', 'Maybe', and 'Don't know' values, eliminating the need for custom slots with multiple synonyms. The Alphanumeric slot type enables capturing combinations of letters and numbers, with support for regular expressions to validate specific formats, making it easier to collect structured data like identification numbers or reference codes. Korean support for these slot types is available in all AWS regions where Amazon Lex V2 operates. To learn more about implementing these features, visit the Amazon Lex documentation for Custom Vocabulary and Alphanumerics.
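
A hedged sketch of attaching the built-in Confirmation slot type to a Korean-locale intent with the Lex V2 model-building API; the bot, intent, and prompt values are placeholders, and the request shape should be confirmed against the CreateSlot reference:

```python
import boto3

lex = boto3.client("lexv2-models")

# Add a slot backed by the built-in Confirmation slot type (ko_KR locale).
lex.create_slot(
    botId="BOTID12345",            # placeholder
    botVersion="DRAFT",
    localeId="ko_KR",
    intentId="INTENTID01",         # placeholder
    slotName="PurchaseConfirmed",
    slotTypeId="AMAZON.Confirmation",
    valueElicitationSetting={
        "slotConstraint": "Required",
        "promptSpecification": {
            "messageGroups": [
                {"message": {"plainTextMessage": {"value": "구매를 진행하시겠습니까?"}}}
            ],
            "maxRetries": 2,
        },
    },
)
```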

AWS Secrets Manager increases the API Requests per Second limits

Published Date: 2025-03-04 19:00:00

AWS Secrets Manager now supports higher request rates for the core set of API operations: GetSecretValue and DescribeSecret. GetSecretValue now supports up to 10,000 requests per second and DescribeSecret supports up to 40,000 requests per second. The increased API limits are available at no additional cost and will automatically be applied to your AWS accounts; no further action is required on your end. Increased API limits for GetSecretValue and DescribeSecret are available in all regions where the service operates. For a list of regions where Secrets Manager is available, see the AWS Region table. To learn more about Secrets Manager API operations, visit our API reference.
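
The calls themselves are unchanged; high-throughput callers simply benefit from the raised limits. A minimal sketch with a placeholder secret name and standard retry configuration for absorbing bursts:

```python
import boto3
from botocore.config import Config

sm = boto3.client(
    "secretsmanager",
    config=Config(retries={"max_attempts": 5, "mode": "adaptive"}),
)

# GetSecretValue now supports up to 10,000 requests per second.
secret = sm.get_secret_value(SecretId="prod/db/credentials")  # placeholder secret name
print(secret["SecretString"])
```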

AWS Transfer Family announces reduced login latency for SFTP servers

Published Date: 2025-03-04 18:00:00

AWS Transfer Family has reduced the service-side login latency from 1-2 seconds to under 500 milliseconds. AWS Transfer Family offers fully managed support for the transfer of files over SFTP, AS2, FTPS, FTP, and web browser-based transfers directly into and out of AWS storage services. With this launch, you benefit from significantly reduced latency from the service to initiate the transfer over SFTP. This optimization offers substantial benefits, particularly for high-frequency, low-latency use cases with automated processes or applications requiring rapid file operations. Reduced server-side login latency is immediately available at no additional cost for all new and existing Transfer Family SFTP servers in all AWS Regions where the service is available. To create an SFTP server, visit the Transfer Family User Guide.

AWS Lambda adds support for Amazon CloudWatch Logs Live Tail in VS Code IDE

Published Date: 2025-03-04 18:00:00

AWS Lambda now supports Amazon CloudWatch Logs Live Tail in VS Code IDE through the AWS Toolkit for Visual Studio Code. Live Tail is an interactive log streaming and analytics capability which provides real-time visibility into logs, making it easier to develop and troubleshoot Lambda functions. We previously announced support for Live Tail in the Lambda console, enabling developers to view and analyze Lambda logs in real time. Now, with Live Tail support in VS Code IDE, developers can monitor Lambda function logs in real time while staying within their development environment, eliminating the need to switch between multiple interfaces for coding and log analysis. This makes it easier for developers to quickly test and validate code or configuration changes in real time, accelerating the author-test-deploy cycle when building applications using Lambda. This integration also makes it easier to detect and debug failures and critical errors in Lambda function code, reducing the mean time to recovery (MTTR) when troubleshooting Lambda function errors. Using Live Tail for Lambda in VS Code IDE is straightforward. After installing the latest version of the AWS Toolkit for Visual Studio Code, developers can access Live Tail through the AWS Explorer panel. Simply navigate to the desired Lambda function, right-click, and select "Tail Logs" to begin streaming logs in real time. To learn more about using Live Tail for Lambda in VS Code IDE, visit the AWS Toolkit developer guide. To learn more about CloudWatch Logs Live Tail, visit CloudWatch Logs developer guide.

Amazon Neptune Database is now available in AWS Asia Pacific (Malaysia) Region

Published Date: 2025-03-04 18:00:00

Amazon Neptune Database is now available in the Asia Pacific (Malaysia) Region on engine versions 1.4.3.0 and later. You can now create Neptune clusters using R6g, R6i, T4g, and T3 instance types in the AWS Asia Pacific (Malaysia) Region. Amazon Neptune Database is a fast, reliable, and fully managed graph database as a service that makes it easy to build and run applications that work with highly connected datasets. You can build applications using Apache TinkerPop Gremlin or openCypher on the Property Graph model, or using the SPARQL query language on W3C Resource Description Framework (RDF). Neptune also offers enterprise features such as high availability, automated backups, and network isolation to help customers quickly deploy applications to production. To get started, you can create a new Neptune cluster using the AWS Management Console, AWS CLI, or a quickstart AWS CloudFormation template. For more information on pricing and region availability, refer to the Neptune pricing page and AWS Region Table.
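
A minimal boto3 sketch of creating a cluster and a Graviton-based instance in the new Region; identifiers are placeholders:

```python
import boto3

neptune = boto3.client("neptune", region_name="ap-southeast-5")  # Asia Pacific (Malaysia)

# Create the cluster on a supported engine version, then add an R6g instance.
neptune.create_db_cluster(
    DBClusterIdentifier="my-graph-cluster",
    Engine="neptune",
    EngineVersion="1.4.3.0",
)
neptune.create_db_instance(
    DBInstanceIdentifier="my-graph-instance-1",
    DBInstanceClass="db.r6g.large",
    Engine="neptune",
    DBClusterIdentifier="my-graph-cluster",
)
```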

AWS CodeBuild now supports non-container builds in on-demand fleets

Published Date: 2025-03-04 18:00:00

AWS CodeBuild now supports non-container builds on Linux x86, Arm, and Windows on-demand fleets. You can run build commands directly on the host operating system without containerization. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. With non-container builds, you can execute build commands that require direct access to the host system resources or have specific requirements that make containerization challenging. This feature is particularly useful for scenarios such as building device drivers, running system-level tests, or working with tools that require host machine access. The non-container feature is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To learn more about non-container builds, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.

Amazon S3 Tables are now available in three additional AWS Regions

Published Date: 2025-03-04 18:00:00

Amazon S3 Tables are now available in three additional AWS Regions: Asia Pacific (Seoul), Asia Pacific (Singapore), and Asia Pacific (Sydney). S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query performance through continual table optimization compared to unmanaged Iceberg tables, and up to 10x higher transactions per second compared to Iceberg tables stored in general purpose S3 buckets. You can use S3 Tables with AWS analytics services through the preview integration with Amazon SageMaker Lakehouse, as well as Apache Iceberg-compatible open source engines like Apache Spark and Apache Flink. Additionally, S3 Tables perform continual table maintenance to automatically expire old snapshots and related data files to reduce storage cost over time. S3 Tables are now generally available in eleven AWS Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.
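
A minimal sketch of creating a table bucket, namespace, and Iceberg table in one of the new Regions with the s3tables API; names are illustrative:

```python
import boto3

s3tables = boto3.client("s3tables", region_name="ap-southeast-2")  # Asia Pacific (Sydney)

# Create a table bucket, a namespace inside it, and an Iceberg table.
bucket = s3tables.create_table_bucket(name="analytics-tables")
s3tables.create_namespace(
    tableBucketARN=bucket["arn"],
    namespace=["sales"],
)
s3tables.create_table(
    tableBucketARN=bucket["arn"],
    namespace="sales",
    name="daily_orders",
    format="ICEBERG",
)
```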

Amazon API Gateway now supports HTTP APIs, mTLS, multi-level base path mappings, and WAF in additional regions

Published Date: 2025-03-03 22:30:00

Amazon API Gateway (APIGW) now supports all features of HTTP APIs as well as Mutual TLS and multi-level base path mappings on REST APIs in the following additional Regions: Middle East (UAE), Asia Pacific (Jakarta), Asia Pacific (Osaka), Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Europe (Zurich), Europe (Spain), Israel (Tel Aviv), and Canada West (Calgary). AWS Web Application Firewall (WAF) for REST APIs is now available in two additional regions: Asia Pacific (Kuala Lumpur) and Canada West (Calgary). HTTP APIs simplify API development for serverless applications with a simpler user interface that includes support for OAuth2.0 and automatic deployments. Mutual TLS enhances security by authenticating x509 certificate based identities at the APIGW. Multi-level base path mappings enable routing requests based on segments in custom domain paths, supporting path-based versioning and traffic redirection. Integration of AWS WAF offers APIs protections against common web exploits through configurable rules that allow, block, or monitor web requests. To learn more, see API Gateway developer guide.
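
For example, enabling mutual TLS on a REST API custom domain in one of the newly supported Regions; the domain, certificate ARN, and truststore location are placeholders:

```python
import boto3

apigw = boto3.client("apigateway", region_name="me-central-1")  # Middle East (UAE)

# Create a custom domain that authenticates clients with mutual TLS.
apigw.create_domain_name(
    domainName="api.example.com",
    regionalCertificateArn="arn:aws:acm:me-central-1:123456789012:certificate/example-cert",
    endpointConfiguration={"types": ["REGIONAL"]},
    securityPolicy="TLS_1_2",
    mutualTlsAuthentication={
        "truststoreUri": "s3://my-truststore-bucket/truststore.pem",  # placeholder truststore
    },
)
```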

Amazon Bedrock Data Automation is now generally available

Published Date: 2025-03-03 18:30:00

Today, we are announcing the general availability of Amazon Bedrock Data Automation (BDA), a feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA can be used as a standalone feature or as a parser in Amazon Bedrock Knowledge Bases RAG workflows. Further, Amazon Q Business now uses BDA to process multimodal assets and deliver insights. In this GA release, we improved document accuracy across a variety of document types, enhanced scene-level and full video summarization accuracy, added support for detection of 35,000+ company logos in images and videos, and added support for AWS cross-region inference to optimize routing across regions within your geography to maximize throughput. BDA also added a number of security, governance, and manageability capabilities such as AWS Key Management Service (KMS) Customer Managed Keys (CMKs) support for encryption, AWS PrivateLink to connect directly to the BDA APIs in your virtual private cloud (VPC) instead of connecting over the internet, and tagging of BDA resources and jobs to track costs and enforce tag-based access policies in AWS Identity and Access Management (IAM). Amazon Bedrock Data Automation is now generally available in the US West (Oregon) and US East (N. Virginia) AWS Regions. To learn more, visit the Bedrock Data Automation page.

Announcing managed integrations for AWS IoT Device Management (Preview)

Published Date: 2025-03-03 18:00:00

Today, AWS IoT Device Management announces the preview of managed integrations, a new feature that enables you to simplify control and management of a diverse set of devices across multiple manufacturers and connectivity protocols. The new feature helps you streamline cloud onboarding of Internet of Things (IoT) devices and enables you to control both self-managed and third-party devices, including cloud-based devices, from a single application. Managed integrations provides cloud and device Software Development Kits (SDKs) for device connectivity and protocol support for ZigBee, Z-Wave, and Wi-Fi specifications, eliminating the need to handle dedicated connectivity protocols from different manufacturers separately. A unified API coupled with a catalog of cloud-to-cloud connectors and 80+ device data model templates enable you to control both proprietary and third-party devices from a single application. Additionally, you can easily process and integrate device data from those devices for building home security, energy management, and elderly care monitoring solutions. Managed integrations for AWS IoT Device Management also provides built-in capabilities for barcode scanning and direct pairing of devices, delivering additional mechanisms to simplify device onboarding and integration complexities. The managed integrations feature is available in Canada (Central) and Europe (Ireland) AWS Regions. To learn more, see technical documentation and read this blog. To get started, log in to the AWS IoT console or use the AWS Command Line Interface (AWS CLI).

Announcing AWS Outposts racks for high throughput, network-intensive workloads (Preview)

Published Date: 2025-03-03 18:00:00

AWS announces the preview of new AWS Outposts racks designed specifically for on-premises high throughput, network-intensive workloads. With these new Outposts racks, telecom service providers (telcos) can extend AWS infrastructure and services to telco locations, enabling them to deploy on-premises network functions requiring low latency, high throughput, and real-time performance. The new Outposts racks feature new Amazon Elastic Compute Cloud (Amazon EC2) 4th Generation Intel Xeon Scalable-based (Sapphire Rapids) bare metal instances along with a high-performance bare metal network fabric. This architecture delivers the low latency and high throughput required for demanding 5G workloads, such as User Plane Function (UPF) and Radio Access Network (RAN) Central Unit (CU) network functions. Telcos can now use Amazon EKS (Elastic Kubernetes Service) and built-in EKS add-ons to automate deployment and scaling of micro-services based 5G network functions for high throughput and performance. Telcos can now use the same AWS infrastructure, AWS services, APIs, tools, and a common continuous integration and continuous delivery (CI/CD) pipeline wherever their workloads reside. This consistent cloud experience eases operational burden, reduces integration costs, and maximizes new feature development velocity for operators. The new AWS Outposts racks are currently available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), and Asia Pacific (Singapore).

Amazon QuickSight now available in the AWS GovCloud (US-East) Region

Published Date: 2025-03-03 18:00:00

Amazon QuickSight is now available in the AWS GovCloud (US-East) Region. AWS GovCloud (US) Regions are isolated AWS Regions designed to host sensitive data and regulated workloads in the cloud, assisting customers who have United States federal, state, or local government compliance requirements. Amazon QuickSight is a fast, scalable, and fully managed Business Intelligence service that lets you easily create and publish interactive dashboards across your organization. QuickSight dashboards can be authored on any modern web browser with no clients to install or manage; dashboards can be shared with tens of thousands of users without the need to provision or manage any infrastructure. QuickSight dashboards can also be seamlessly embedded into your applications, portals, and websites to provide rich, interactive analytics for end-users. With this launch, QuickSight expands to 22 regions, including: US East (Ohio and N. Virginia), US West (Oregon), Europe (Stockholm, Paris, Frankfurt, Ireland, London, Milan and Zurich), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Beijing, Tokyo and Jakarta), Canada (Central), South America (São Paulo), Africa (Cape Town) and AWS GovCloud (US-East, US-West). To learn more about Amazon QuickSight, please see our product page, documentation and available regions here.

AWS Amplify supports HttpOnly cookies for server-rendered Next.js applications

Published Date: 2025-03-03 18:00:00

AWS Amplify now supports HttpOnly cookies for server-rendered Next.js applications when using Amazon Cognito's Managed Login. This enhancement builds on the existing cookie functionality in server-rendered sites: opting in to the HttpOnly attribute strengthens your application's security posture by blocking client-side JavaScript from accessing cookie contents. With HttpOnly cookies, your applications gain an additional layer of protection against cross-site scripting (XSS) attacks. Sensitive information is transmitted only between the browser and the server, which is particularly valuable when handling authentication tokens in your web applications. Because the contents of HttpOnly cookies can only be read by the server, requests must flow through the server before reaching other services. This feature is now available in all AWS Regions where AWS Amplify and Amazon Cognito are supported. To learn more, visit the AWS Amplify documentation for Server-Side Rendering.

Amazon Connect outbound campaigns now supports Brazil

Published Date: 2025-03-03 18:00:00

Amazon Connect now supports outbound campaign calling to Brazil from the US East (N. Virginia) and US West (Oregon) Regions, making it easier to proactively communicate across voice, SMS, and email for use cases such as delivery notifications, marketing promotions, appointment reminders, or debt collection. Communication capabilities include point-of-dial checks, calling controls for time of day, time zone, and number of attempts per contact, and predictive dialing with integrated voicemail detection. A list management capability provided by Amazon Pinpoint can also be used to build customer journeys and multi-channel contact experiences. Outbound campaigns can be enabled in the Amazon Connect console. With Amazon Connect outbound campaigns, you pay as you go for high-volume outbound service usage, associated telephony charges, and any monthly target audience charges via Amazon Pinpoint. To learn more, visit our webpage.

Amazon Bedrock now available in the Europe (Stockholm) Region

Published Date: 2025-03-03 18:00:00

Customers can use regional processing profiles for Amazon Nova understanding models (Amazon Nova Lite, Amazon Nova Micro, and Amazon Nova Pro) in Europe (Stockholm).

Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other foundation models (FMs) from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
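
As a minimal sketch of calling one of the Nova understanding models through a regional processing profile from Europe (Stockholm), the example below uses the Bedrock Converse API via boto3. The EU cross-Region inference profile ID is an assumption based on Bedrock's documented naming scheme; verify the exact ID in your account before use.

```python
import boto3

# Stockholm Regional endpoint for the Bedrock runtime.
bedrock = boto3.client("bedrock-runtime", region_name="eu-north-1")

response = bedrock.converse(
    # Assumed EU regional inference profile ID for Amazon Nova Lite.
    modelId="eu.amazon.nova-lite-v1:0",
    messages=[
        {
            "role": "user",
            "content": [{"text": "Summarize this week's AWS launches in one sentence."}],
        }
    ],
    inferenceConfig={"maxTokens": 256},
)
print(response["output"]["message"]["content"][0]["text"])
```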

Amazon Cognito now supports access token customization for machine-to-machine (M2M) authorization flows

Published Date: 2025-03-03 18:00:00

Amazon Cognito now allows customers to customize access tokens for M2M flows, enabling you to implement fine-grained authorization in your applications, APIs, and workloads. M2M authorization is commonly used for automated processes such as scheduled data synchronization tasks, event-driven workflows, microservices communication, or real-time data streaming between systems. In M2M authorization flows, an app client can represent a software system or service that can request access tokens to interact with resources, such as a reporting system or a data processing service. With this launch, customers can now customize their access tokens with custom claims (attributes about the app client) and scopes (level of access that an app client can request to a resource), making it easier to control and manage how their automated systems interact with each other. Customers can now add custom attributes directly in access tokens, reducing the complexity of authorization logic needed in their application code. For example, customers can customize access tokens with claims that allow an app client for a reporting system to only read data while allowing an app client for a data processing service to both read and modify data. This allows customers to streamline authentication by embedding custom authorization attributes directly into access tokens during the token issuance process. Access token customization for M2M authorization is available to Amazon Cognito customers using Essentials or Plus tiers in all AWS Regions where Cognito is available, except the AWS GovCloud (US) Regions. To learn more, refer to the developer guide.
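
Access token customization is driven by a pre token generation Lambda trigger; the sketch below assumes the V3_0 trigger event version, which extends token customization to client credentials (M2M) flows. The app client IDs, scopes, and claims are illustrative, not part of the announcement.

```python
# Pre token generation Lambda trigger (assumed event version V3_0, which covers
# client-credentials / M2M flows). Client IDs, scopes, and claims below are illustrative.
def lambda_handler(event, context):
    client_id = event["callerContext"]["clientId"]

    # Example policy: a hypothetical reporting client gets read-only access,
    # while other app clients (e.g. a data processing service) get read/write access.
    if client_id == "example-reporting-client-id":
        scopes_to_add = ["resource-server/data.read"]
    else:
        scopes_to_add = ["resource-server/data.read", "resource-server/data.write"]

    event["response"]["claimsAndScopeOverrideDetails"] = {
        "accessTokenGeneration": {
            "claimsToAddOrOverride": {"environment": "production"},
            "scopesToAdd": scopes_to_add,
        }
    }
    return event
```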

Amazon CloudWatch RUM introduces resource-based policy support for data ingestion access

Published Date: 2025-03-03 18:00:00

CloudWatch RUM, which provides real-time monitoring of web application performance by tracking user interactions, now supports resource-based policies that simplify data ingestion access to RUM. With resource-based policies, you can specify which AWS Identity and Access Management (IAM) principals have access to ingest data into your RUM app monitors, effectively controlling which clients can write data to RUM. This also allows you to ingest data at higher volumes and gives you greater control over data ingress in RUM. Using resource-based policies lets you manage ingestion access to your app monitor without using Amazon Cognito to assume an IAM role and AWS Security Token Service (STS) to obtain security credentials to write data to CloudWatch RUM. This is beneficial for high-throughput use cases where a high volume of requests may be subject to Cognito's quota limits, leading to throttling and potential failures when ingesting data to RUM. With a public resource policy, no such limits apply: anyone can send data to CloudWatch RUM, including unauthenticated users and clients. In addition, you can use AWS global condition context keys in these policies to block certain IPs or stop specific clients from sending data to RUM. You can configure these policies in the AWS console or as code using AWS CloudFormation. These enhancements are available in all Regions where CloudWatch RUM is available, at no additional cost. See the documentation to learn more about the feature, or the user guide to learn how to configure resource-based policies for CloudWatch RUM.
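
To make the description above concrete, here is a sketch of a permissive resource-based policy for an app monitor, restricted by source IP with an AWS global condition context key and attached via boto3. The app monitor name, ARN, and the parameter names of the resource-policy call are assumptions; check the SDK reference for the exact shape.

```python
import json
import boto3

# Hypothetical app monitor; replace with your own name and ARN.
APP_MONITOR_NAME = "example-app-monitor"
APP_MONITOR_ARN = "arn:aws:rum:us-east-1:111122223333:appmonitor/example-app-monitor"

# Allow any client (including unauthenticated ones) to send RUM events to this app monitor,
# but only from an allow-listed IP range, using the aws:SourceIp global condition key.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "rum:PutRumEvents",
            "Resource": APP_MONITOR_ARN,
            "Condition": {"IpAddress": {"aws:SourceIp": ["203.0.113.0/24"]}},
        }
    ],
}

rum = boto3.client("rum", region_name="us-east-1")
# Parameter names for the resource-policy call are assumed; verify against the SDK docs.
rum.put_resource_policy(Name=APP_MONITOR_NAME, PolicyDocument=json.dumps(policy))
```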

AWS CodeBuild adds support for Node 22, Python 3.13, and Go 1.23

Published Date: 2025-03-03 18:00:00

AWS CodeBuild managed images now support Node 22, Python 3.13, and Go 1.23. The new runtime versions are available on the Linux x86, Arm, Windows, and macOS platforms. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. For CodeBuild managed images based on Linux, you can specify a runtime of your choice in the runtime-versions section of your buildspec file. You can select specific major and minor versions supported by CodeBuild, or define a custom runtime version. Additionally, with this release we added commonly used tools that are available in GitHub Actions environments to better support customers using CodeBuild as a self-hosted runner. The updated images are available in all Regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To learn more about the Docker images and runtime versions provided by CodeBuild, please visit our documentation or our image repository. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
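
For reference, a minimal buildspec that pins the newly supported runtimes on a Linux managed image might look like the following; the build commands simply print the selected versions.

```yaml
version: 0.2
phases:
  install:
    runtime-versions:
      nodejs: 22
      python: 3.13
      golang: 1.23
  build:
    commands:
      - node --version
      - python --version
      - go version
```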
