2025 - Week 8 (17 Feb - 23 Feb)
Ankur Patel
3x AWS certified | AWS Community Builder | Cloud Enabler and Practitioner | Solutions Architect | FullStack | DevOps | DSML | 6x Sisense certified | Blogger | Photographer & Traveller
Certificate-Based Authentication is now available on Amazon AppStream 2.0 multi-session fleets
Published Date: 2025-02-21 22:30:00
Amazon AppStream 2.0 improves the end-user experience by adding support for certificate-based authentication (CBA) on multi-session fleets running the Microsoft Windows operating system and joined to an Active Directory. This functionality helps administrators leverage the cost benefits of the multi-session model while providing an enhanced end-user experience. By combining these enhancements with the existing advantages of multi-session fleets, AppStream 2.0 offers a solution that balances cost-efficiency and user satisfaction. By using certificate-based authentication, you can rely on the security and logon experience features of your SAML 2.0 identity provider, such as passwordless authentication, to access AppStream 2.0 resources. Certificate-based authentication with AppStream 2.0 enables a single sign-on logon experience for domain-joined desktop and application streaming sessions without separate password prompts for Active Directory. This feature is available at no additional cost in all the AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable this feature for your users, you must use an AppStream 2.0 image with an AppStream 2.0 agent released on or after February 7, 2025, or an image using Managed AppStream 2.0 image updates released on or after February 11, 2025.
AWS CodePipeline adds native Amazon EC2 deployment support
Published Date: 2025-02-21 18:00:00
AWS CodePipeline introduces a new action to deploy to Amazon Elastic Compute Cloud (Amazon EC2). This action enables you to easily deploy your application to a group of EC2 instances behind load balancers. Previously, if you wanted to deploy to EC2 instances, you had to use CodeDeploy with an AppSpec file to configure the deployment. Now, you can simply use this new EC2 deploy action in your pipeline to deploy to EC2 instances, without needing to manage CodeDeploy resources. This streamlined approach reduces your operational overhead and simplifies your deployment process. To learn more about using the EC2 deploy action in your pipeline, visit our tutorial and documentation. For more information about AWS CodePipeline, visit our product page. This new action is available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
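As a rough sketch, the new action slots into a pipeline stage like any other Deploy action. The configuration keys below (tag-based instance targeting, target directory) are illustrative assumptions, not the authoritative schema; consult the CodePipeline action reference for the real field names:

```python
# Sketch of a CodePipeline Deploy stage using the new native EC2 action.
# The "configuration" keys are hypothetical placeholders -- check the
# CodePipeline action configuration reference for the actual schema.
ec2_deploy_stage = {
    "name": "Deploy",
    "actions": [
        {
            "name": "DeployToEC2",
            "actionTypeId": {
                "category": "Deploy",
                "owner": "AWS",
                "provider": "EC2",  # the new native EC2 deploy provider
                "version": "1",
            },
            "configuration": {
                # Hypothetical keys: select target instances by tag
                "InstanceTagKey": "Environment",
                "InstanceTagValue": "Production",
                "TargetDirectory": "/var/www/app",
            },
            "inputArtifacts": [{"name": "BuildOutput"}],
        }
    ],
}
```

The point of the new action is visible in the shape above: no AppSpec file and no CodeDeploy deployment group to manage, just an action entry in the pipeline definition.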
AWS Database Migration Service now supports Multi-ENI networking for homogeneous migrations
Published Date: 2025-02-21 18:00:00
AWS Database Migration Service (AWS DMS) now supports the Multi-ENI networking model and Credentials Vending System for DMS homogeneous migrations. Customers can now choose the Multi-ENI connection type and use the Credentials Vending System, providing a simplified networking configuration experience for secure connectivity to their on-premises database instances.
For more information, see the documentation for AWS DMS homogeneous migrations. For AWS DMS regional availability, please refer to the AWS Region Table.
Amazon RDS for PostgreSQL supports minor versions 17.4, 16.8, 15.12, 14.17, 13.20
Published Date: 2025-02-21 18:00:00
Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.4, 16.8, 15.12, 14.17, and 13.20. Please note, this release supports the versions released by the PostgreSQL community on February 20, 2025 to address the regression that was part of the February 13, 2025 release. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community. You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use Amazon RDS Blue/Green deployments for RDS for PostgreSQL using physical replication for your minor version upgrades. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
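The recommended upgrade path can be expressed as the parameters you would pass to boto3's `rds.modify_db_instance` (shown here as a plain dict rather than a live API call; the instance identifier is a placeholder):

```python
# Parameters of the kind you would pass to boto3's rds.modify_db_instance
# to move an instance to the new 17.4 minor and opt in to automatic minor
# version upgrades. The identifier is a placeholder.
upgrade_params = {
    "DBInstanceIdentifier": "my-postgres-instance",  # placeholder name
    "EngineVersion": "17.4",           # one of the newly supported minors
    "AutoMinorVersionUpgrade": True,   # future minors applied during maintenance windows
    "ApplyImmediately": False,         # defer the change to the next maintenance window
}
```

With `ApplyImmediately` set to `False`, the engine version change waits for the scheduled maintenance window instead of triggering an immediate restart.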
Amazon MSK adds support for Apache Kafka version 3.8
Published Date: 2025-02-21 18:00:00
Amazon Managed Streaming for Apache Kafka (Amazon MSK) now supports Apache Kafka version 3.8. You can now create new clusters using version 3.8 with either KRaft or ZooKeeper mode for metadata management, or upgrade your existing ZooKeeper-based clusters to version 3.8. Apache Kafka version 3.8 includes several bug fixes and new features that improve performance. Key new features include support for compression level configuration. This allows you to further optimize your performance when using compression types such as lz4, zstd, and gzip, by allowing you to change the default compression level. For more details and a complete list of improvements and bug fixes, see the Apache Kafka release notes for version 3.8. Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK is compatible with Apache Kafka, which enables you to quickly migrate your existing Apache Kafka workloads to Amazon MSK with confidence or build new ones from scratch. With Amazon MSK, you can spend more time innovating on streaming applications and less time managing Apache Kafka clusters. To learn how to get started, see the Amazon MSK Developer Guide. Support for Apache Kafka version 3.8 is offered in all AWS regions where Amazon MSK is available.
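For example, once a cluster runs 3.8, the compression level can be tuned as a topic- or broker-level config (this comes from Kafka's KIP-390; the level value below is an example, and valid ranges differ per codec):

```python
# Sketch of a Kafka topic config using 3.8's per-codec compression level
# setting. A higher zstd level generally trades more CPU for a better
# compression ratio; tune against your own throughput/latency targets.
topic_config = {
    "compression.type": "zstd",
    "compression.zstd.level": "9",  # example level, not a recommendation
}
```

The same keys exist for the other codecs (e.g. `compression.gzip.level`, `compression.lz4.level`), so you can keep the codec fixed and experiment with the level alone.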
Announcing fine-grained access control via AWS Lake Formation with EMR on EKS
Published Date: 2025-02-21 18:00:00
We are excited to announce the general availability of fine-grained data access control (FGAC) via AWS Lake Formation for Apache Spark with Amazon EMR on EKS. This enables you to enforce full FGAC policies (database-, table-, column-, row-, and cell-level) defined in Lake Formation for your data lake tables from EMR on EKS Spark jobs. We are also announcing the general availability of AWS Glue Data Catalog views with EMR on EKS for Spark workflows.
Lake Formation simplifies building, securing, and managing data lakes by allowing you to define fine-grained access controls through grant and revoke statements, similar to an RDBMS. The same Lake Formation rules now apply to Spark jobs on EMR on EKS for Hudi, Delta Lake, and Iceberg table formats, further simplifying data lake security and governance. AWS Glue Data Catalog views with EMR on EKS allow customers to create views from Spark jobs that can be queried from multiple engines without requiring access to referenced tables. Administrators can control underlying data access using the rich SQL dialect provided by EMR on EKS Spark jobs. Access is managed with AWS Lake Formation permissions, including named resource grants, data filters, and Lake Formation tags. All requests are logged in AWS CloudTrail. Fine-grained access control for Apache Spark batch jobs on EMR on EKS is available with the EMR 7.7 release in all regions where EMR on EKS is available. To get started, see Using AWS Lake Formation with Amazon EMR on EKS.
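A column-level grant of the kind these Spark jobs now enforce looks like the following `grant_permissions` request body (the database, table, column names, and role ARN are placeholders):

```python
# Sketch of a Lake Formation grant_permissions request: column-level SELECT
# for one principal. Once granted, EMR on EKS Spark jobs run by this role
# can read only the listed columns of the table. All names are placeholders.
grant_request = {
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalystRole"
    },
    "Resource": {
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_date"],  # only these columns readable
        }
    },
    "Permissions": ["SELECT"],
}
```

The grant/revoke model shown here is what the announcement means by "similar to an RDBMS": the Spark engine inherits the policy instead of each job re-implementing filtering.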
You can now use your China UnionPay credit card to create an AWS account
Published Date: 2025-02-21 18:00:00
Amazon Web Services, Inc. now supports China UnionPay credit cards for creating new AWS accounts, eliminating the need for international credit cards for customers in China. To use China UnionPay for creating your AWS account, enter your address and billing country in China, then provide your local China UnionPay credit card details and verify your personal identity or business license. All subsequent AWS charges will be billed in Chinese Yuan currency, providing a convenient payment experience for customers in China. To get started, select China UnionPay as your payment method when creating a new AWS account. For more information on using China UnionPay credit cards with AWS, visit Set up a Chinese yuan credit card.
AWS CodePipeline adds native Amazon EKS deployment support
Published Date: 2025-02-20 18:35:00
AWS CodePipeline introduces a new action to deploy to Amazon Elastic Kubernetes Service (Amazon EKS). This action enables you to easily deploy your container applications to your EKS clusters, including those in private VPCs. Previously, if you wanted to deploy to an EKS cluster within a private network, you had to initialize and maintain a compute environment within the private network. Now, you can simply provide the name of the EKS cluster and add this action to your pipeline. The pipeline will automatically establish a connection into your private network to deploy your container application, without any additional infrastructure needed. This streamlined approach reduces your operational overhead and simplifies your deployment process. To learn more about using the EKS action in your pipeline, visit our tutorial and documentation. For more information about AWS CodePipeline, visit our product page. This new action is available in all regions where AWS CodePipeline is supported, except the AWS GovCloud (US) Regions and the China Regions.
Amazon Bedrock now available in Asia Pacific (Hyderabad) and Asia Pacific (Osaka) regions
Published Date: 2025-02-20 18:00:00
Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Hyderabad) and Asia Pacific (Osaka) regions to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools. Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
AWS Elastic Beanstalk now supports Windows Server 2025 and Windows Server Core 2025 environments
Published Date: 2025-02-20 18:00:00
AWS Elastic Beanstalk now enables customers to deploy applications on Windows Server 2025 and Windows Server Core 2025 environments. These environments come pre-configured with .NET Framework 4.8.1 and .NET 8.0, providing developers with the latest Long Term Support (LTS) version of .NET alongside the established .NET Framework.
Windows Server 2025 and Windows Server Core 2025 deliver enhanced security features and performance improvements. Developers can create Elastic Beanstalk environments on Windows Server 2025 using the Elastic Beanstalk Console, CLI, API, or AWS Toolkit for Visual Studio.
This platform is generally available in the commercial AWS Regions where Elastic Beanstalk is available, as well as the AWS GovCloud (US) Regions. For a complete list of Regions and service offerings, see AWS Regions.
For more information about .NET on Windows Server platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
Amazon EC2 G6e instances now available in Stockholm region
Published Date: 2025-02-20 18:00:00
Starting today, Amazon EC2 G6e instances powered by NVIDIA L40S Tensor Core GPUs are available in the Europe (Stockholm) region. G6e instances can be used for a wide range of machine learning and spatial computing use cases. Customers can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Additionally, the G6e instances will unlock customers’ ability to create larger, more immersive 3D simulations and digital twins for spatial computing workloads. G6e instances feature up to 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors. They also support up to 192 vCPUs, up to 400 Gbps of network bandwidth, up to 1.536 TB of system memory, and up to 7.6 TB of local NVMe SSD storage. Developers can run AI inference workloads on G6e instances using AWS Deep Learning AMIs, AWS Deep Learning Containers, or managed services such as Amazon Elastic Kubernetes Service (Amazon EKS), AWS Batch, and Amazon SageMaker. Amazon EC2 G6e instances are available today in the AWS US East (N. Virginia, Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Frankfurt, Spain, Stockholm) regions. Customers can purchase G6e instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, visit the AWS Management Console, or use the AWS Command Line Interface (CLI) and AWS SDKs. To learn more, visit the G6e instance page.
AWS announces Backup Payment Methods for invoices
Published Date: 2025-02-20 18:00:00
Today, AWS announces the introduction of Backup Payment Methods for AWS invoices in all commercial AWS Regions. This feature enables customers to set up alternate payment methods that will be automatically charged for their invoices if the primary payment method fails. This will help customers make timely invoice payments without the need for manual intervention or last-minute payment updates. There are several benefits this feature brings to AWS customers. Firstly, it reduces the risk of missed or late payments due to issues with the primary payment method. A backup payment method provides peace of mind that a fallback is in place, reducing the risk of failed invoice payments. This can help maintain uninterrupted access to AWS services and avoid potential service disruptions. Secondly, it saves time and effort for customers by eliminating the need to manually update payment details or coordinate with their finance and accounting teams to settle invoices when the primary method fails. To get started with Backup Payment Methods, customers can access their AWS Console and navigate to the billing section. From there, they can set their preferences for backup payment methods at any time. For more information on how to set up and manage Backup Payment Methods, please visit the AWS Billing and Cost Management documentation page or contact your AWS account representative.
SES Outbound now delivers to Mail Manager Archives
Published Date: 2025-02-19 22:55:00
Amazon Simple Email Service (SES) announces that Outbound customers can now specify a Mail Manager archive resource as an additional destination for outbound mail workloads. This enables retention of messages after DKIM signing, ensuring that the archive is usable for validating every individual sent message. The Mail Manager archive search interface allows easy discovery of indexed messages and presents search results directly in the AWS console, or makes them available to export to the customer’s chosen S3 bucket. SES Outbound customers using APIv2 now have access to a new parameter in their configuration set which specifies a Mail Manager archive ARN. Once the necessary role permissions in IAM are configured, the user initiating the Outbound workload will see a copy of each uniquely-signed outgoing message ingested into the destination archive, where it will be indexed and made available for both search and export. This new feature is billed at the existing Mail Manager Archive price points. This capability is available for Outbound customers in all 17 AWS Regions where Mail Manager is launched. Customers can learn more about SES Outbound and Mail Manager here.
AWS Network Firewall introduces automated domain lists and insights
Published Date: 2025-02-19 21:55:00
AWS Network Firewall now offers automated domain lists and insights, a feature that enhances visibility into network traffic and simplifies firewall rule configuration. This new capability analyzes HTTP and HTTPS traffic logs from the last 30 days and provides insights into frequently accessed domains, enabling quick rule creation based on observed network traffic patterns. Many organizations now use allow-list policies to limit access to approved destinations only. Automated domain lists reduce the time and effort required to identify necessary domains, configure initial rules, and update allow lists as business needs change. This feature helps quickly identify legitimate traffic while maintaining a restrictive default stance, balancing security with operational efficiency. This feature is supported in all AWS Regions where AWS Network Firewall is available today. There is no additional cost to generate automated domain lists and insights on AWS Network Firewall. To get started, visit the AWS Network Firewall console and enable analysis mode for your firewall. For more information, please refer to the AWS Network Firewall service documentation.
Amazon ECS increases the CPU limit for ECS tasks to 192 vCPUs
Published Date: 2025-02-19 18:00:00
Amazon Elastic Container Service (Amazon ECS) now supports CPU limits of up to 192 vCPU for ECS tasks deployed on Amazon Elastic Compute Cloud (Amazon EC2) instances, an increase from the previous 10 vCPU limit. This enhancement allows customers to more effectively manage resource allocation on larger Amazon EC2 instances. Amazon ECS customers can define soft and hard limits for CPU and memory resources at the container level, and hard limits at the task level. Soft limits reserve resources on an Amazon EC2 instance for a container, while hard limits enforce maximum usage. For CPU specifically, the container-level hard limit acts as a ceiling and helps prevent resource contention when multiple containers are competing for resources using Linux CpuShares. The task-level CPU limit acts both as the reservation for the task and prevents any single task from consuming excessive resources during contention. Customers can now specify up to 192 vCPU as the CPU limit for an ECS task, increased from the previous 10 vCPU, enabling more effective resource sharing across multiple tasks on larger sized EC2 instances. For example, on a c7i.48xlarge instance with 192 vCPUs, defining a 32 vCPU limit per ECS task allows running up to 6 tasks without resource contention from noisy neighbors. You can use the AWS Management Console, SDK, CLI, CloudFormation, or CDK to define the CPU limit for your Amazon ECS task definition. The new limit is now effective in all regions. To learn more, see documentation.
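Since ECS task definitions express CPU in units of 1/1024 vCPU, the arithmetic in the example above can be sketched as follows (the family name and memory value are placeholders):

```python
# ECS measures CPU in units where 1024 units = 1 vCPU. A 32-vCPU task-level
# limit therefore needs 32 * 1024 = 32768 units, and a 192-vCPU instance
# (e.g. c7i.48xlarge) can fit 6 such tasks.
CPU_UNITS_PER_VCPU = 1024

task_cpu_vcpus = 32
task_definition = {
    "family": "big-batch-task",           # placeholder family name
    "requiresCompatibilities": ["EC2"],
    "cpu": str(task_cpu_vcpus * CPU_UNITS_PER_VCPU),  # "32768"
    "memory": "65536",                    # placeholder memory value (MiB)
}

instance_vcpus = 192
tasks_per_instance = instance_vcpus // task_cpu_vcpus  # 6
```

Under the previous 10 vCPU ceiling, the largest task-level `cpu` value was 10240 units; the new maximum is 196608 units (192 vCPU).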
Announcing AWS DMS Serverless comprehensive premigration assessments
Published Date: 2025-02-19 18:00:00
AWS Database Migration Service Serverless (AWS DMS Serverless) now supports premigration assessments for replications. A premigration assessment evaluates the source and target databases of a database migration task to help identify problems that might prevent a migration from running as expected. By identifying and fixing these issues before a migration starts, you can avoid delays in completing the database migration. The premigration assessments will obtain detailed information about the source schema and tables to provide recommendations on the AWS DMS settings that should be used. For example, the assessment can suggest which method of reading redo logs for change data capture (CDC) should be used, or it could check if the recommended settings have been enabled, providing best practice recommendations from AWS DMS experts. To learn more, see Enabling and working with premigration assessments. AWS DMS Serverless premigration assessments are available in all AWS Regions where DMS Serverless is available. For AWS DMS Serverless regional availability, please refer to the AWS Region Table.
Amazon RDS for MySQL supports new minor versions 8.0.41 and 8.4.4
Published Date: 2025-02-19 18:00:00
Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor versions 8.0.41 and 8.4.4. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.41 and 8.4.4 in the Amazon RDS user guide. You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.
Amazon EC2 R6a instances now available in Canada (Central)
Published Date: 2025-02-18 22:00:00
Starting today, the memory-optimized Amazon EC2 R6a instances are available in the Canada (Central) region. R6a instances are powered by third-generation AMD EPYC processors, and deliver up to 35% better price performance than comparable R5a instances. These instances offer 10% lower cost than comparable x86-based EC2 instances. With this additional region, R6a instances are available in the following AWS Regions: US East (Northern Virginia, Ohio), US West (Oregon, N. California), Asia Pacific (Mumbai, Hyderabad, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland). These instances can be purchased as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, visit the AWS Management Console, or use the AWS Command Line Interface (CLI) and AWS SDKs. To learn more, visit the R6a instances page.
AWS WAF enhances Data Protection and logging experience
Published Date: 2025-02-18 19:12:00
AWS WAF expands its Data Protection capabilities with new controls for sensitive data in logs. In addition, we have updated the Logging configuration console experience, making it easier for customers to select the optimal logging option. Data Protection works together with existing Logging Redaction and Filtering features. You can select which protection method to use based on your use case and where you need to apply the controls. When configured, selected request log fields can be replaced with cryptographic hashes (e.g. ‘ade099751d2ea9f3393f0f’) or a predefined static string (‘REDACTED’) before logs are sent to WAF Sample Logs, Amazon Security Lake, CloudWatch, or other logging destinations. This centralized approach is designed to simplify data management and reduce the risk of accidental exposure. In addition, we simplified the WAF console experience for managing logging configurations. Customers can now view all available logging options and select their preferred settings in a simple unified experience. This feature is available in all AWS Regions and endpoints where AWS WAF is available. To learn more, see the AWS WAF developer guide. There is no additional cost for using this feature, however standard AWS WAF charges still apply. For details, visit the AWS WAF Pricing page. To use the new Data Protection feature, simply navigate to your Web ACL 'Logging and metrics' section in the AWS WAF console and choose the desired data protection option. Existing logging configurations will remain unchanged. For more information about Data Protection, visit AWS documentation.
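For context, the existing logging redaction that Data Protection builds on is configured through a WAFv2 logging configuration like the sketch below; the exact schema of the new hashing/substitution controls is not reproduced here and should be taken from the developer guide. The ARNs are placeholders:

```python
# Sketch of a WAFv2 logging configuration (the body you would pass to
# put_logging_configuration) using the pre-existing RedactedFields control.
# The new Data Protection feature extends this idea with hashing and
# static-string substitution. ARNs below are placeholders.
logging_configuration = {
    "ResourceArn": "arn:aws:wafv2:us-east-1:111122223333:regional/webacl/my-acl/EXAMPLE",
    "LogDestinationConfigs": [
        "arn:aws:logs:us-east-1:111122223333:log-group:aws-waf-logs-my-acl"
    ],
    "RedactedFields": [
        {"SingleHeader": {"Name": "authorization"}},  # strip auth header from logs
        {"SingleHeader": {"Name": "cookie"}},         # strip cookies from logs
    ],
}
```

Redaction removes the field value entirely; the new Data Protection hashing option instead keeps a stable cryptographic token, which preserves the ability to correlate requests without exposing the underlying value.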
AWS Storage Gateway is now available in AWS Mexico (Central) Region
Published Date: 2025-02-18 18:00:00
AWS Storage Gateway expands availability to the AWS Mexico (Central) Region enabling customers to deploy and manage hybrid cloud storage for their on-premises workloads. AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud. Visit the AWS Storage Gateway product page to learn more. Access the AWS Storage Gateway console to get started. To see all the Regions where AWS Storage Gateway is available, please visit the AWS Region table.
Amazon Timestream for InfluxDB Adds Read Replica support
Published Date: 2025-02-18 18:00:00
Amazon Timestream for InfluxDB now supports Read Replicas, enabling customers to scale their read operations across multiple instances and Availability Zones. Customers can activate a Read Replica via the AWS Marketplace from the Timestream AWS Management Console while creating a Timestream for InfluxDB instance. Adding Read Replicas allows customers to support higher read throughput by distributing read requests across multiple database instances while maintaining a single write endpoint. This helps customers meet the demands of read-intensive workloads, such as real-time analytics and monitoring applications, and improve application performance and availability. Customers can create a Multi-AZ Read Replica cluster with just a few clicks in the AWS Management Console. Support for Timestream for InfluxDB Read Replicas is available in the following Timestream for InfluxDB Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), and Europe (Stockholm). To learn more about Amazon Timestream for InfluxDB, please refer to our user guide. You can create an Amazon Timestream for InfluxDB Read Replica cluster from the Amazon Timestream console, AWS Command Line Interface (CLI), SDK, or AWS CloudFormation. To learn more about the Read Replica Cluster for Amazon Timestream for InfluxDB, visit the product page, documentation, and pricing page.
Amplify Hosting announces support for IAM roles for server-side rendered (SSR) applications
Published Date: 2025-02-18 18:00:00
We're excited to announce the launch of IAM compute roles for AWS Amplify Hosting, enabling secure connections to other AWS resources from server-side rendered applications. This allows developers to integrate their SSR applications with AWS services while maintaining robust security practices.
This feature is available in all 20 AWS Amplify Hosting regions: US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Osaka), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Milan), Europe (Ireland), Europe (London), Europe (Paris), Middle East (Bahrain), and South America (São Paulo).
To learn more, read the documentation. To get started with a tutorial, check out our blog post.
Dynamically update your running EMR cluster with reconfiguration for instance fleets
Published Date: 2025-02-17 22:20:00
Amazon EMR on EC2 now supports real-time updates of application configurations for EMR instance fleets without requiring cluster termination or restart. With this feature, customers can now dynamically adjust application configurations, such as Spark executor memory, YARN resource allocation, and HDFS settings, seamlessly on a running cluster, minimizing interruptions to your workloads. This is particularly useful for adjusting resource allocation and fine-tuning applications to match data processing and job performance requirements, while ensuring optimal resource utilization. Amazon EMR is a cloud big data platform for data processing, interactive analysis, and machine learning using open-source frameworks such as Apache Spark, Apache Flink, and Trino. Previously, you had to terminate and relaunch instance fleet clusters with new configurations. This process resulted in downtime, increased operational effort, and delayed workflow adjustments. With support for reconfiguration, EMR dynamically applies the updated configurations on cluster nodes on a rolling basis while ensuring cluster stability and resource availability. It provides notifications to customers via Amazon CloudWatch and EMR events. In the event of a failure or an incompatible update, EMR rolls back the changes to ensure your cluster remains operational. You can continue to run workloads on the cluster during the update process. You can leverage this feature on all EMR 5.21 and later releases using the AWS CLI or API. This capability is available in all AWS Regions, including the AWS GovCloud (US) Regions, where Amazon EMR on EC2 is available. To learn more, please refer to the documentation here.
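A reconfiguration payload uses the familiar EMR configuration classification shape; the sketch below shows the kind of update described above (property values are examples, and how you target a specific fleet is per the EMR documentation):

```python
# Sketch of an EMR configuration list of the kind applied during a live
# reconfiguration -- e.g. raising Spark executor memory and YARN NodeManager
# memory. EMR rolls these out across nodes on a rolling basis.
updated_configurations = [
    {
        "Classification": "spark-defaults",
        "Properties": {"spark.executor.memory": "8g"},  # example value
    },
    {
        "Classification": "yarn-site",
        "Properties": {"yarn.nodemanager.resource.memory-mb": "57344"},  # example value
    },
]
```

Because EMR validates and can roll back an incompatible update, a bad property value in a list like this no longer means relaunching the cluster.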
AWS CDK releases L2 construct support for Amazon Data Firehose delivery streams
Published Date: 2025-02-17 18:00:00
AWS Cloud Development Kit (AWS CDK) now includes L2 construct support for Amazon Data Firehose delivery streams, enabling developers to define and deploy streaming data infrastructure as code. This new capability allows you to programmatically configure delivery streams that automatically deliver real-time data to destinations like Amazon S3. With this addition to AWS CDK, you can define sophisticated streaming architectures using familiar programming languages like TypeScript, Python, Java, and C# (.NET). The module simplifies the process of setting up fully-managed delivery streams that push data to your desired destinations on a regular cadence, making it easier to build data lakes, enable real-time analytics, and create data archival solutions. To learn more about using Amazon Data Firehose with AWS CDK, see the AWS CDK documentation and the Amazon Data Firehose Developer Guide. You can also explore sample applications and use cases on the AWS CDK Workshop website.
Amazon Aurora PostgreSQL zero-ETL integration with Amazon Redshift now available in 18 additional regions
Published Date: 2025-02-17 18:00:00
Amazon Aurora PostgreSQL-Compatible Edition zero-ETL integration with Amazon Redshift is now supported in 18 additional regions, enabling near real-time analytics and machine learning (ML) using Amazon Redshift. With this launch, Aurora PostgreSQL zero-ETL integration with Amazon Redshift is supported in all AWS commercial regions where Amazon Redshift is supported. Zero-ETL integration with Amazon Redshift enables near real-time analytics and machine learning (ML) using Amazon Redshift to analyze petabytes of transactional data from Aurora. Within seconds of transactional data being written into Amazon Aurora PostgreSQL-Compatible Edition, zero-ETL seamlessly makes the data available in Amazon Redshift, removing the need to build and manage complex data pipelines that perform extract, transform, and load (ETL) operations. Amazon Aurora PostgreSQL zero-ETL integration with Amazon Redshift is available for Aurora PostgreSQL version 16.4 and higher in US West (N. California), Africa (Cape Town), Asia Pacific (Hyderabad), Asia Pacific (Jakarta), Asia Pacific (Melbourne), Asia Pacific (Osaka), Asia Pacific (Seoul), Canada (Central), Canada West (Calgary), Europe (London), Europe (Milan), Europe (Paris), Europe (Spain), Europe (Zurich), Israel (Tel Aviv), Middle East (Bahrain), Middle East (UAE), and South America (São Paulo). For all regions supported by the Aurora PostgreSQL zero-ETL integration, see the supported AWS regions. To learn more and get started with zero-ETL integration, visit Amazon Aurora zero-ETL integration with Amazon Redshift and the getting started guides for Aurora and Amazon Redshift.
AWS Price List API supports AWS PrivateLink
Published Date: 2025-02-17 18:00:00
AWS Price List API now supports AWS PrivateLink. With AWS PrivateLink, you can simplify private network connectivity between virtual private clouds (VPCs), the AWS Price List API, and your on-premises data centers by using interface VPC endpoints and private IP addresses. The AWS Price List API provides a catalog of the products and prices for AWS services that you can purchase on AWS. AWS PrivateLink is compatible with AWS Direct Connect and AWS Virtual Private Network (VPN) to facilitate private network connectivity, and helps you eliminate the need to use public IP addresses, configure firewall rules, or configure an internet gateway to access the AWS Price List API from your on-premises data centers. Using the AWS Price List API with AWS PrivateLink enables you to access AWS product and pricing data without the need to access the public internet. AWS PrivateLink for AWS Price List API is available in all commercial regions where the AWS Price List API is available (US East (N. Virginia), Asia Pacific (Mumbai), Europe (Frankfurt), and China (Ningxia)). There is an additional cost to use this feature. Please see AWS PrivateLink pricing for more details. You can get started with the feature by using the AWS Management Console, AWS API, AWS CLI, AWS SDK, or AWS CloudFormation. Learn more at Access AWS Billing and Cost Management using an interface endpoint (AWS PrivateLink).
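An interface endpoint request of the kind you would pass to EC2's `create_vpc_endpoint` might look like the sketch below; the service name is illustrative (confirm the exact name with `aws ec2 describe-vpc-endpoint-services`) and the VPC/subnet IDs are placeholders:

```python
# Sketch of create_vpc_endpoint parameters for reaching the Price List API
# over PrivateLink. ServiceName is an assumed/illustrative value; VPC and
# subnet IDs are placeholders.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0abc1234def567890",                      # placeholder
    "ServiceName": "com.amazonaws.us-east-1.pricing.api",  # illustrative name
    "SubnetIds": ["subnet-0abc1234def567890"],             # placeholder
    "PrivateDnsEnabled": True,  # resolve the public API hostname to the endpoint
}
```

With private DNS enabled, existing SDK code that calls the Price List API keeps working unchanged; only the network path moves off the public internet.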