Week 40 (30 Sep - 6 Oct)

AWS Security Hub launches 7 new security controls

Published Date: 2024-10-04 19:55:00

AWS Security Hub has released 7 new security controls, increasing the total number of controls offered to 430. Security Hub now supports controls for new resource types, such as Amazon Simple Storage Service (S3) Multi-Region Access Points and Amazon Managed Streaming for Apache Kafka (MSK) Connect. Security Hub also released a new control for Amazon GuardDuty EKS Runtime Monitoring. For the full list of recently released controls and the AWS Regions in which they are available, visit the Security Hub user guide. To use the new controls, turn on the standard they belong to. Security Hub will then start evaluating your security posture and monitoring your resources for the relevant security controls. You can use central configuration to do so across all your organization accounts and linked Regions with a single action. If you are already using the relevant standards and have Security Hub configured to automatically enable new controls, these new controls will run without any additional action on your part.
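Central configuration works by applying a configuration policy across the organization. As a minimal sketch, the payload below follows the shape of the Security Hub CreateConfigurationPolicy API; the exact field names and the standard ARN are assumptions to verify against the current API reference.

```python
# Hypothetical sketch: a central-configuration policy that enables a
# standard org-wide with no disabled controls, so newly released controls
# are evaluated automatically. Field names are assumptions.

def build_configuration_policy(name: str, standard_arns: list) -> dict:
    """Return CreateConfigurationPolicy parameters enabling Security Hub
    and the given standards, with an empty disabled-control list."""
    return {
        "Name": name,
        "Description": f"Enable {len(standard_arns)} standard(s) org-wide",
        "ConfigurationPolicy": {
            "SecurityHub": {
                "ServiceEnabled": True,
                "EnabledStandardIdentifiers": standard_arns,
                "SecurityControlsConfiguration": {
                    # Empty list = every control in the enabled standards,
                    # including newly released ones, is evaluated.
                    "DisabledSecurityControlIds": [],
                },
            }
        },
    }

policy = build_configuration_policy(
    "enable-fsbp",
    ["arn:aws:securityhub:us-east-1::standards/aws-foundational-security-best-practices/v/1.0.0"],
)
# boto3.client("securityhub").create_configuration_policy(**policy)  # needs credentials
```

A policy like this is then associated with the organization root or specific accounts, which is what makes the single-action rollout possible.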

Amazon Connect Contact Lens supports new read-only permissions for reports and dashboards

Published Date: 2024-10-04 17:00:00

Amazon Connect Contact Lens now allows users to save and publish reports and dashboards as read-only. When a report or dashboard is published as read-only, only the user who created it can edit it, while others can still view it or create a copy. For example, a contact center manager can configure a custom read-only dashboard and share it with the supervisors on their team to ensure they monitor the same metrics, while still allowing the supervisors to customize and save their own versions for further analysis. This feature is available in all AWS Regions where Amazon Connect is offered. To learn more about read-only reports, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, please visit the Amazon Connect website.

AWS CodePipeline introduces new general purpose compute action

Published Date: 2024-10-04 17:00:00

AWS CodePipeline introduces the Commands action, which enables you to easily run shell commands as part of your pipeline execution. With the Commands action, you have access to a secure compute environment backed by CodeBuild to run the AWS CLI, third-party tools, or any shell commands. The Commands action runs on CodeBuild-managed, on-demand EC2 compute and uses an Amazon Linux 2023 standard 5.0 image. Previously, if you wanted to run AWS CLI commands, third-party CLI commands, or simply invoke an API, you had to create a CodeBuild project, configure the project with the appropriate commands, and add a CodeBuild action to your pipeline to run the project. Now, you can simply add the Commands action to your pipeline and define one or more commands as part of the action configuration. Because Commands is like any other CodePipeline action, you can use the standard CodePipeline features of input/output artifacts and output variables. To learn more about using the Commands action in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. The Commands action is available in all Regions where AWS CodePipeline is supported.
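As a sketch, a Commands action declaration in a pipeline definition might look like the following. The actionTypeId values (category "Compute", provider "Commands") and the top-level commands field are assumptions inferred from this announcement; confirm them against the CodePipeline action reference.

```python
# Hypothetical sketch of a CodePipeline Commands action declaration.
# Field names below are assumptions based on the announcement, not a
# verified schema.

def commands_action(name: str, commands: list, output_vars=None) -> dict:
    """Build an action declaration that runs shell commands on the
    CodeBuild-backed compute described above (Amazon Linux 2023 image)."""
    return {
        "name": name,
        "actionTypeId": {
            "category": "Compute",   # assumed category for Commands
            "owner": "AWS",
            "provider": "Commands",
            "version": "1",
        },
        "commands": commands,                  # one or more shell commands
        "outputVariables": output_vars or [],  # standard CodePipeline feature
    }

action = commands_action(
    "RunChecks",
    ["aws sts get-caller-identity", "make lint"],
    output_vars=["COMMIT_ID"],
)
```

The declaration would be embedded in a pipeline stage exactly like any other action, alongside its input and output artifact lists.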

Amazon Route 53 Resolver endpoints now support DNS-over-HTTPS (DoH) with Server Name Indication (SNI) validation

Published Date: 2024-10-04 17:00:00

Starting today, you can provide Server Name Indication (SNI) with Route 53 Resolver endpoints for DNS-over-HTTPS (DoH), allowing you to specify the target server hostname for DNS query requests from your outbound endpoints to DoH servers that require SNI for TLS validation. DoH on Amazon Route 53 Resolver endpoints allows you to encrypt DNS queries that pass through the endpoints and improve privacy by minimizing the visibility of the information exchanged through the queries. With this launch, you can now specify the hostname in your outbound endpoint configuration to perform TLS handshakes for your DNS requests from the outbound endpoints to the DoH server. Enabling SNI validation for your DoH Resolver endpoints also helps you meet regulatory and business compliance requirements, such as those described in the memorandum of the US Office of Management and Budget, where outbound DNS traffic must be addressed to Cybersecurity and Infrastructure Security Agency (CISA) Protective DNS servers that require SNI hostname validation for a successful TLS handshake. Resolver endpoint support for DoH with SNI is available in all Regions where Route 53 is available, including the AWS GovCloud (US) Regions. Visit the AWS Region Table to see all AWS Regions where Amazon Route 53 is available. You can get started by using the AWS Console or the Route 53 API. For more information, visit the Route 53 Resolver product detail page and service documentation. For details on pricing, visit the pricing page.
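For illustration, a DoH target with SNI on an outbound forwarding rule might be expressed as below. The ServerNameIndication field name is an assumption based on the Route 53 Resolver TargetAddress structure; verify it against the API reference.

```python
# Hypothetical sketch of a resolver-rule target that forwards DNS over
# HTTPS and supplies an SNI hostname for the TLS handshake. Field names
# are assumptions.

def doh_target(ip: str, hostname: str, port: int = 443) -> dict:
    """Target entry for an outbound endpoint forwarding to a DoH server
    that requires SNI for TLS validation."""
    return {
        "Ip": ip,
        "Port": port,                      # DoH runs over HTTPS (443 default)
        "Protocol": "DoH",
        "ServerNameIndication": hostname,  # hostname presented during TLS
    }

target = doh_target("203.0.113.10", "doh.example.com")
# boto3.client("route53resolver").update_resolver_rule(...)  # needs credentials
```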

Amazon SageMaker JumpStart is now available in the AWS GovCloud (US-West and US-East) Regions

Published Date: 2024-10-04 17:00:00

Amazon SageMaker JumpStart is now available in the AWS GovCloud (US) Regions. Public sector customers can easily deploy and fine-tune open-weight models through the SageMaker Python SDK. Amazon SageMaker JumpStart is a machine learning (ML) hub that offers hundreds of pre-trained models and built-in algorithms to help you quickly get started with ML. Customers can discover hundreds of open-weight pre-trained models such as Llama and Mistral stored in AWS infrastructure, fine-tune them with their own data, and deploy them for cost-effective inference using the SageMaker Python SDK. Amazon SageMaker JumpStart is now generally available in the AWS GovCloud (US-West and US-East) Regions. Please note that some models require instance types not yet available in the GovCloud Regions and will be usable once those instances become available. To learn more about using SageMaker JumpStart through the SageMaker Python SDK, see the SageMaker Python SDK documentation. Available models can also be found in the documentation.
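A minimal deployment sketch with the SageMaker Python SDK might look like the following. The model identifier is illustrative (check the JumpStart catalog for models actually available in GovCloud), and the live SDK calls, which need the sagemaker package and AWS credentials, are left commented.

```python
# Sketch: deploying a JumpStart model in a GovCloud Region with the
# SageMaker Python SDK. The model_id is a hypothetical example.

def jumpstart_model_kwargs(model_id: str, region: str = "us-gov-west-1") -> dict:
    """Arguments for sagemaker.jumpstart.model.JumpStartModel."""
    return {"model_id": model_id, "region": region}

kwargs = jumpstart_model_kwargs("huggingface-llm-mistral-7b")  # illustrative id
# from sagemaker.jumpstart.model import JumpStartModel  # needs sagemaker + creds
# predictor = JumpStartModel(**kwargs).deploy()
# predictor.predict({"inputs": "Hello"})
```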

Amazon Connect can now generate forecasts for workloads with as little as one contact

Published Date: 2024-10-04 17:00:00

Amazon Connect can now generate forecasts for smaller workloads, with as little as one contact, making it easier for contact center managers to predict demand. This eliminates the need for you to manually adjust historical data to meet minimum data requirements. By reducing minimum data requirements, you can now enable managers to generate forecasts for smaller volume workloads than were previously possible, making it easier to do capacity planning and staffing. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

AWS Application Composer is now AWS Infrastructure Composer

Published Date: 2024-10-04 17:00:00

AWS Application Composer is now called AWS Infrastructure Composer. The new name reflects the service's broader capabilities for building infrastructure architectures. Since launching at re:Invent ’22, customers have told us how Application Composer has helped accelerate their serverless application architecture design with its simple drag-and-drop interface. Since the initial release, we have expanded support to any CloudFormation resource, empowering customers to build any required resource architecture. The new AWS Infrastructure Composer name reflects our focus on helping customers build any infrastructure with CloudFormation. AWS Infrastructure Composer is available in all commercial Regions and the AWS GovCloud (US) Regions.

Amazon EC2 now supports Optimize CPUs post instance launch

Published Date: 2024-10-04 17:00:00

Amazon EC2 now allows customers to modify an instance’s CPU options after launch. You can now modify the number of vCPUs and/or disable the hyperthreading of a stopped EC2 instance to save on vCPU-based licensing costs. In addition, an instance’s CPU options are now maintained when changing its instance type. The Optimize CPUs feature allows customers to disable hyperthreading and reduce the number of vCPUs on an instance, resulting in a higher memory-to-vCPU ratio and helping customers save on vCPU-based licensing costs. This is particularly beneficial to customers who bring their own licenses (BYOL) for commercial database workloads, like Microsoft SQL Server.

This feature is available in all commercial AWS Regions.

To get started, see CPU options in the Amazon EC2 User Guide. To learn more about the new API, visit the Amazon EC2 API Reference.
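As a sketch, modifying a stopped instance's CPU options might look like the following. The parameter names follow the EC2 API convention for the new modify-CPU-options operation but are assumptions to verify in the EC2 API Reference.

```python
# Hypothetical sketch of parameters for modifying an instance's CPU
# options after launch. The instance must be stopped first.

def cpu_options_params(instance_id: str, core_count: int, hyperthreading: bool) -> dict:
    """Reduce vCPUs and/or disable hyperthreading to raise the
    memory-to-vCPU ratio for vCPU-licensed workloads."""
    return {
        "InstanceId": instance_id,
        "CoreCount": core_count,
        "ThreadsPerCore": 2 if hyperthreading else 1,  # 1 = hyperthreading off
    }

params = cpu_options_params("i-0123456789abcdef0", core_count=4, hyperthreading=False)
# boto3.client("ec2").modify_instance_cpu_options(**params)  # needs credentials
```

With 4 cores and one thread per core, the instance presents 4 vCPUs while keeping the memory of its instance size, which is the licensing-friendly ratio described above.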

Amazon Connect now supports multi-day copy and paste of agent schedules

Published Date: 2024-10-04 17:00:00

Amazon Connect now supports copying agent schedules across multiple days, making management of agent schedules more efficient. You can now copy multiple days of shifts from one agent to another agent, or to the same agent, up to 14 days at a time. For example, if a new agent joins the team mid-month, you can quickly provide them with a schedule by copying up to 14 days of shifts from an existing agent’s schedule. Similarly, if an agent has a flexible working arrangement for a few weeks, you can edit their schedule for the first week and then copy it over to the remaining weeks. Multi-day copy of agent schedules improves manager productivity by reducing time spent managing agent schedules. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.

Amazon WorkSpaces now supports file transfer between WorkSpaces sessions and local devices

Published Date: 2024-10-04 17:00:00

Amazon WorkSpaces is launching support for transferring files between a WorkSpaces Personal session and a local computer. This helps customers manage and share files seamlessly, increasing their productivity. This is supported on personal WorkSpaces that use the DCV streaming protocol when using the Windows or Linux client applications, or web access. With this launch, users can streamline their workflows and have easier ways to organize, manage, edit, and share files across their devices and platforms. Files on the WorkSpaces are saved in a persistent storage folder. Amazon WorkSpaces also offers robust security measures, and administrators can control whether users can upload or download files from WorkSpaces to protect your organization's data. This functionality is now available in all the AWS Regions where Amazon WorkSpaces Personal is available. There are no additional WorkSpaces costs for using the file transfer functionality; however, uploaded files consume the user volume that is attached to the WorkSpace. Customers can increase the size of the user volumes attached to WorkSpaces at any time. Changing the volume size of a WorkSpace will affect the billing rate. See Amazon WorkSpaces pricing for more information. To get started with the WorkSpaces file transfer function, see Configure file transfer for DCV WorkSpaces.

AWS Partner Central now supports association of an AWS Marketplace private offer to a launched opportunity

Published Date: 2024-10-04 17:00:00

Today, AWS Partner Central has enhanced the APN Customer Engagements (ACE) Pipeline Manager by allowing AWS partners to link an AWS Marketplace private offer to a launched opportunity. This feature gives AWS partners improved visibility into their AWS Marketplace transactions. By linking AWS Marketplace private offers to opportunities, partners can track deals from their co-selling pipeline all the way to customer offers. Additionally, partners can view their agreement information, such as agreement ID and creation date, in ACE Pipeline Manager, connected to the original customer opportunity. Starting today, this feature is available globally for all AWS Partners who have linked their AWS Partner Central and AWS Marketplace accounts. To get started, log in to AWS Partner Central and review the ACE user guide.

AWS IoT Core removes TLS ALPN requirement and adds custom authorizer capabilities

Published Date: 2024-10-03 21:10:00

Today, AWS IoT Core announces three new capabilities for domain configurations. Devices no longer need to rely on the Transport Layer Security (TLS) Application Layer Protocol Negotiation (ALPN) extension to determine authentication type and protocol, and developers can add X.509 client certificate validation to their custom authentication workflows. Previously, devices selected an authentication type by connecting to a defined port and providing the chosen protocol via TLS ALPN. The new capability to configure authentication type and protocol purely based on the TLS Server Name Indication (SNI) extension makes it simpler to connect devices to the cloud without requiring TLS ALPN. This enables developers to migrate existing device fleets to AWS IoT Core without firmware updates or Amazon-specific TLS ALPN strings. The authentication type and protocol combination is assigned to an endpoint for all supported TCP ports of the custom domain. Building on this feature, AWS IoT Core added two additional authentication capabilities. First, Custom Authentication with X.509 Client Certificates allows customers to authenticate IoT devices using X.509 certificates and then apply custom authentication logic as an additional layer of security checks. Second, Custom Client Certificate Validation allows customers to validate X.509 client certificates with a custom Lambda function; for example, developers can build custom certificate revocation checks, such as Online Certificate Status Protocol (OCSP) and Certificate Revocation List (CRL) lookups, before allowing a client to connect. All three capabilities are available in all AWS Regions where AWS IoT Core is present, except the AWS GovCloud (US) Regions. Visit the developer guide to learn more about this feature.
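A domain configuration combining these capabilities might be sketched as below. The field names and enum values (applicationProtocol, authenticationType) are assumptions modeled on the CreateDomainConfiguration API; verify them against the AWS IoT Core reference.

```python
# Hypothetical sketch of a domain configuration that selects protocol and
# authentication type via SNI (no ALPN) and layers a custom authorizer on
# top of X.509 client-certificate authentication. Field names and enum
# values are assumptions.

def domain_config_params(name: str, domain: str, authorizer: str) -> dict:
    """Parameters for creating a custom-domain configuration where the
    protocol/auth pair applies to all supported TCP ports."""
    return {
        "domainConfigurationName": name,
        "domainName": domain,
        "applicationProtocol": "MQTT_WSS",          # chosen without TLS ALPN
        "authenticationType": "CUSTOM_AUTH_X509",   # X.509 cert + custom logic
        "authorizerConfig": {"defaultAuthorizerName": authorizer},
    }

params = domain_config_params("fleet-domain", "iot.example.com", "certRevocationCheck")
```

The authorizer named here would be a Lambda-backed custom authorizer that could, for instance, perform the OCSP or CRL revocation checks described above before admitting a client.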

AWS B2B Data Interchange announces support for generating outbound X12 EDI

Published Date: 2024-10-03 20:20:00

AWS B2B Data Interchange now supports outbound EDI transformation, enabling you to generate X12 EDI documents from JSON or XML data inputs. This new capability adds to B2B Data Interchange’s existing support for transforming inbound EDI documents and automatically generating EDI acknowledgements. With the ability to transform and generate X12 EDI documents up to 150 MB, you can now automate your bidirectional EDI workflows at scale on AWS. The introduction of outbound EDI transformation establishes B2B Data Interchange as a comprehensive EDI service for conducting end-to-end transactions with your business partners. For example, healthcare payers can now process claims with claim payments, suppliers can confirm purchase orders with invoices, and logistics providers can respond to shipment requests with status notifications. B2B Data Interchange monitors specified prefixes in Amazon S3 to automatically process inbound and outbound EDI. Each outbound EDI document generated emits an Amazon EventBridge event, which can be used to automatically send the documents to your business partners using AWS Transfer Family’s SFTP and AS2 capabilities, or any other EDI connectivity solution. Support for generating outbound X12 EDI is available in all AWS Regions where AWS B2B Data Interchange is available. To get started with building and running bidirectional, event-driven EDI workflows on B2B Data Interchange, take the self-paced workshop or deploy the CloudFormation template.

AWS Compute Optimizer now supports 80 new Amazon EC2 instance types

Published Date: 2024-10-03 18:50:00

AWS Compute Optimizer now supports 80 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types. The newly supported instance types include the latest generation compute optimized instances (c7i-flex, c6id, c8g), memory optimized instances (r8g, x8g), storage optimized instances (i4i), and GPU-based instances (g5, g5g, g6, gr6, p4d, p4de, p5). This expands the total number of EC2 instance types supported by Compute Optimizer to 779. By including support for the latest instance types with improved price-to-performance ratios, Compute Optimizer helps customers identify additional savings and performance improvement opportunities. The newly supported c8g, r8g, and x8g EC2 instance types include the new AWS Graviton4 processors that offer 50% more cores, 160% more memory bandwidth, and up to 60% better performance than AWS Graviton2 processors. The c7i-flex instances, powered by 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids), offer 5% better price/performance compared to c7i instances. For more information about the AWS Regions where Compute Optimizer is available, see the AWS Region table. For more information about Compute Optimizer, visit our product page and documentation. You can start using AWS Compute Optimizer through the AWS Management Console, AWS CLI, and AWS SDKs.

AWS Cloud WAN and AWS Network Manager are now available in additional AWS Regions

Published Date: 2024-10-03 18:30:00

With this launch, AWS Cloud WAN and AWS Network Manager are now available in the AWS Asia Pacific (Melbourne, Hyderabad), AWS Europe (Spain, Zurich), AWS Middle East (UAE), and AWS Canada West (Calgary) Regions. Additionally, AWS Cloud WAN is available in the AWS Israel (Tel Aviv) Region. With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, allowing you to configure and manage different networks using the same technology. You can use your network policies to specify which of your Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to by using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The Cloud WAN central dashboard, powered by AWS Network Manager, generates a complete view of the network to help you monitor network health, security, and performance. AWS Network Manager reduces the operational complexity of managing global networks across AWS and on-premises locations. It provides a single global view of your private network. You can visualize your global network in a topology diagram and monitor your network using Amazon CloudWatch metrics and events for network topology changes, routing updates, and connection status updates. To learn more about AWS Cloud WAN, see the product detail page and documentation. To learn more about AWS Network Manager, see the documentation.

Auto Scaling in AWS Glue interactive sessions is now generally available

Published Date: 2024-10-03 17:00:00

Auto Scaling in AWS Glue interactive sessions is now generally available. AWS Glue interactive sessions with Glue versions 3.0 or higher can now dynamically scale resources up and down based on the workload. With Auto Scaling, you no longer need to worry about over-provisioning resources for sessions, spend time optimizing the number of workers, or pay for idle workers. AWS Glue is a serverless data integration service that allows you to schedule and run data integration and extract, transform, and load (ETL) jobs or sessions without managing any computing infrastructure. AWS Glue allows users to configure the number and type of workers to utilize. AWS Glue Auto Scaling monitors each stage of the session run and turns workers off when they are idle, or adds workers if additional parallel processing is possible. This simplifies the process of tuning resources and optimizing costs.

This feature is now available in all commercial AWS Regions, GovCloud (US-West), and China Regions where AWS Glue interactive sessions is available.

For more details, please refer to the Glue Auto Scaling blog post and visit our documentation.
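A session request with Auto Scaling enabled might be sketched as below. The "--enable-auto-scaling" argument is the flag Glue jobs use for Auto Scaling; its application to interactive sessions here, and the CreateSession parameter names, are assumptions to verify in the Glue documentation.

```python
# Hypothetical sketch of CreateSession parameters for a Glue interactive
# session with Auto Scaling. Field names and the auto-scaling argument
# are assumptions.

def session_params(role_arn: str, max_workers: int = 10) -> dict:
    """Parameters for an interactive session where NumberOfWorkers acts
    as an upper bound and Glue scales workers within it."""
    return {
        "Role": role_arn,
        "Command": {"Name": "glueetl", "PythonVersion": "3"},
        "GlueVersion": "4.0",            # Auto Scaling requires Glue 3.0+
        "WorkerType": "G.1X",
        "NumberOfWorkers": max_workers,  # ceiling; idle workers are released
        "DefaultArguments": {"--enable-auto-scaling": "true"},
    }

params = session_params("arn:aws:iam::123456789012:role/GlueSessionRole")
```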

Amazon Location Service is now available in AWS Europe (Spain) Region

Published Date: 2024-10-03 17:00:00

Today, we are announcing the availability of Amazon Location Service in the AWS Europe (Spain) Region. Amazon Location Service is a location-based service that helps developers easily and securely add maps, place search and geocoding, route planning, and device tracking and geofencing capabilities to their applications. With Amazon Location Service, developers can start a new location project or migrate existing mapping service workloads to benefit from cost reduction, privacy protection, and ease of integration with other AWS services. With this launch, Amazon Location Service is now available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Canada (Central), Europe (London), South America (São Paulo), AWS GovCloud (US-West), and AWS Europe (Spain). To learn more, please see the Amazon Location Service Getting Started page.

Amazon Aurora Serverless v2 now supports up to 256 ACUs

Published Date: 2024-10-03 17:00:00

Amazon Aurora Serverless v2 now supports database capacity of up to 256 Aurora Capacity Units (ACUs). Aurora Serverless v2 measures capacity in ACUs, where each ACU is a combination of approximately 2 gibibytes (GiB) of memory, corresponding CPU, and networking. You specify the capacity range, and the database scales within this range to support your application’s needs. With higher maximum capacity, customers can now use Aurora Serverless for even more demanding workloads. Instead of scaling up to 128 ACUs (256 GiB), the database can now scale up to 256 ACUs (512 GiB). You can get started with higher capacity on a new cluster or an existing cluster with just a few clicks in the AWS Management Console. For a new cluster, select the desired capacity for the maximum capacity setting. For existing clusters, select modify and update the maximum capacity setting. For existing incompatible instances that don’t allow capacity higher than 128 ACUs, add a new reader with the higher capacity to the existing cluster and fail over to it. 256 ACUs are supported for Aurora PostgreSQL 13.13+, 14.10+, 15.5+, 16.1+, and Aurora MySQL 3.06+. Aurora Serverless is an on-demand, automatic scaling configuration for Amazon Aurora. It adjusts capacity in fine-grained increments to provide just the right amount of database resources for an application’s needs. For pricing details and Region availability, visit Amazon Aurora Pricing. To learn more, read the documentation, and get started by creating an Aurora Serverless v2 database using only a few steps in the AWS Management Console.
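The console steps above correspond to a ModifyDBCluster call. A minimal sketch, with the ServerlessV2ScalingConfiguration names following the RDS API (still worth confirming against the current reference):

```python
# Sketch: raising an existing cluster's ceiling to 256 ACUs. Each ACU is
# roughly 2 GiB of memory, so 256 ACUs corresponds to about 512 GiB.

def scale_to_256_acus(cluster_id: str, min_acu: float = 0.5) -> dict:
    """Parameters for rds.modify_db_cluster updating the capacity range."""
    return {
        "DBClusterIdentifier": cluster_id,
        "ServerlessV2ScalingConfiguration": {
            "MinCapacity": min_acu,  # floor the cluster can scale down to
            "MaxCapacity": 256,      # new maximum (previously 128)
        },
        "ApplyImmediately": True,
    }

request = scale_to_256_acus("my-aurora-cluster")
# boto3.client("rds").modify_db_cluster(**request)  # needs credentials
```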

Amazon Q Business is now HIPAA eligible

Published Date: 2024-10-03 17:00:00

Amazon Q Business is now HIPAA (Health Insurance Portability and Accountability Act) eligible. Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. With Amazon Q Business now HIPAA eligible, healthcare and life sciences organizations, such as health insurance companies and healthcare providers, can use Amazon Q Business to run sensitive workloads regulated under the U.S. Health Insurance Portability and Accountability Act (HIPAA). AWS maintains a standards-based risk management program to ensure that the HIPAA-eligible services specifically support HIPAA administrative, technical, and physical safeguards. Amazon Q Business is HIPAA eligible in all of the AWS Regions where Amazon Q Business is supported. See the AWS Regional Services List for the most up-to-date availability information. To learn more about HIPAA eligible services, visit the webpage. To get started with Amazon Q Business, visit the product page to learn more.

Printer redirection and user selected regional settings now available on Amazon AppStream 2.0 multi-session fleets

Published Date: 2024-10-02 21:00:00

Amazon AppStream 2.0 is helping enhance the end-user experience by introducing support for local printer redirection and user-selected regional settings on multi-session fleets. While these features were already available on single-session fleets, this launch extends these functionalities to multi-session fleets, helping administrators leverage the cost benefits of the multi-session model while providing an enhanced end-user experience. By combining these enhancements with the existing advantages of multi-session fleets, AppStream 2.0 offers a comprehensive solution that helps balance cost-efficiency and user satisfaction. With local printer redirection, AppStream 2.0 users can redirect print jobs from their streaming application to a printer that is connected to their local computer. No printer driver needs to be installed on the AppStream 2.0 streaming instance to enable users to print documents during their streaming sessions. Additionally, your users can now configure their streaming sessions to use regional settings. They can set the locale and input method used by their applications in their streaming sessions. Each user's settings persist across all future sessions in the same AWS Region. These features are available at no additional cost in all the AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable these features for your users, you must use an AppStream 2.0 image that uses an AppStream 2.0 agent released on or after September 18, 2024, or an image using Managed AppStream 2.0 image updates released on or after September 20, 2024.

Amazon AppStream 2.0 enables automatic time zone redirection for enhanced user experience

Published Date: 2024-10-02 21:00:00

Amazon AppStream 2.0 now allows end users to enable automatic time zone redirection for application and desktop streaming sessions. With this new capability, AppStream 2.0 streaming sessions automatically adjust to match the time zone setting of the end user's client device, while end users can still manually configure regional preferences such as time zone, language, and input method based on their location. Automatic time zone redirection eliminates the need to manually configure the time zone. By automatically redirecting the time zone, AppStream 2.0 provides an improved localized experience for end users. Streaming applications and desktops now display the user's local time zone out of the box, without any manual configuration required. This helps create a more intuitive experience for users across different global locations. Time zone redirection works independently of the AWS Region where the AppStream 2.0 fleet is deployed. This feature is available to all customers using a web browser to connect to AppStream 2.0, at no additional cost, in all the AWS Regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable this feature for your users, you must use an AppStream 2.0 image that uses an AppStream 2.0 agent released on or after September 18, 2024, or an image using Managed AppStream 2.0 image updates released on or after September 20, 2024.

Amazon Timestream for InfluxDB now includes advanced configuration options

Published Date: 2024-10-02 17:00:00

Amazon Timestream for InfluxDB now supports additional configuration options, providing you with more control over how the engine behaves and communicates with its clients. With today’s launch, Timestream for InfluxDB also introduces a feature that allows you to monitor instance CPU, memory, and disk utilization metrics directly from the AWS Management Console. Timestream for InfluxDB offers the full feature set of the 2.7 open-source version of InfluxDB, the most popular open source time-series database engine, in a fully managed service with features like Multi-AZ high availability and enhanced durability. You can now configure the port used to access your InfluxDB instances, allowing for greater flexibility in your infrastructure setup. Additionally, over 20 new engine configuration parameters give you precise control over your instance's behavior. To get started, navigate to the Amazon Timestream Console and configure your instances according to your needs. Existing customers can also update their instances to take advantage of these new configuration options. Amazon Timestream for InfluxDB is available in the following AWS Regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Canada (Central), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Jakarta), Europe (Paris), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Stockholm), Europe (Spain), and Middle East (UAE). You can create an Amazon Timestream for InfluxDB instance from the Amazon Timestream console, the AWS Command Line Interface (CLI), SDKs, or AWS CloudFormation. To learn more about Amazon Timestream for InfluxDB, visit the product page, documentation, and pricing page.
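For illustration, instance parameters including the newly configurable access port might look like the following. The field names (including "port") are assumptions modeled on the Timestream for InfluxDB API; 8086 is InfluxDB's default port.

```python
# Hypothetical sketch of partial parameters for creating a Timestream for
# InfluxDB instance with a custom access port. Field names are assumptions.

def influxdb_instance_params(name: str, port: int = 8086) -> dict:
    """Partial create-instance parameters; port defaults to InfluxDB's
    standard 8086 but can now be customized."""
    return {
        "name": name,
        "dbInstanceType": "db.influx.medium",       # illustrative size
        "port": port,                               # new configurable port
        "deploymentType": "WITH_MULTIAZ_STANDBY",   # Multi-AZ high availability
    }

params = influxdb_instance_params("metrics-db", port=9097)
```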

Amazon Managed Service for Prometheus now supports Internet Protocol Version 6 (IPv6)

Published Date: 2024-10-02 17:00:00

Amazon Managed Service for Prometheus now offers customers the option to use Internet Protocol version 6 (IPv6) addresses for their new and existing workspaces. Customers moving to IPv6 can simplify their network stack by running and operating their Amazon Managed Service for Prometheus workspaces on a network that supports both IPv4 and IPv6. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open-source project for monitoring and alerting on metrics from compute environments such as Amazon Elastic Kubernetes Service. The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces in their VPCs. Customers can now connect to Amazon Managed Service for Prometheus APIs over IPv6, and can continue to connect via IPv4 if they do not utilize IPv6. To learn more about best practices for configuring IPv6 in your environment, visit the whitepaper on IPv6 in AWS. Support for IPv6 on Amazon Managed Service for Prometheus is available in all Regions where the service is generally available. To learn more about Amazon Managed Service for Prometheus, visit the user guide or product page.

Amazon Virtual Private Cloud (VPC) now supports BYOIP and BYOASN in all AWS Local Zones

Published Date: 2024-10-02 17:00:00

Starting today, Amazon VPC supports two key public IP address management features, Bring-Your-Own-IP (BYOIP) and Bring-Your-Own-ASN (BYOASN), in all AWS Local Zones. If your applications use trusted IP addresses and Autonomous System Numbers (ASNs) that your customers or partners have allowed in their networks, you can run these applications in AWS Local Zones without requiring your partners or customers to change their allow-lists. The reachability of many workloads, including host-managed VPNs, proxies, and telecommunication network functions, depends on an organization’s IP address and ASN. With BYOIP, you can now assign your public IPs to workloads in AWS Local Zones, and with BYOASN, you can advertise them using your own ASN. This ensures your workloads remain reachable by customers or partners that have allowlisted your IP addresses and ASN. The BYOIP and BYOASN features are available in all AWS Local Zones, and all AWS Regions except China (Beijing, operated by Sinnet) and China (Ningxia, operated by NWCD). For more information about this feature, review the EC2 BYOIP documentation and IPAM tutorials.
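In the EC2 API, advertising a BYOIP prefix from a specific Local Zone is scoped by a network border group. A minimal sketch, where the border-group name is illustrative and the parameter usage is an assumption to verify in the EC2 API Reference:

```python
# Sketch: parameters for ec2.advertise_byoip_cidr, targeting a Local Zone
# via its network border group. The border-group name is illustrative.

def advertise_params(cidr: str, border_group: str) -> dict:
    """Advertise a provisioned BYOIP range from a Local Zone border group."""
    return {
        "Cidr": cidr,                        # your provisioned BYOIP range
        "NetworkBorderGroup": border_group,  # selects the Local Zone group
    }

params = advertise_params("203.0.113.0/24", "us-east-1-mia-1")
# boto3.client("ec2").advertise_byoip_cidr(**params)  # needs credentials
```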

AWS Snowball Edge Storage Optimized 210TB device is available in three new regions

Published Date: 2024-10-02 17:00:00

AWS Snowball Edge Storage Optimized 210TB device is now available in three additional regions: Asia Pacific (Mumbai), South America (Sao Paulo), and Asia Pacific (Seoul). The AWS Snowball Edge Storage Optimized 210TB offers 210TB of storage capacity per device and high-performance NVMe storage, enabling customers to quickly complete large data migrations. For the majority of data migration workloads, customers should use AWS DataSync as a secure, online service that automates and accelerates moving data between on-premises storage and AWS Storage services. When bandwidth is limited, or a connection is intermittent, customers can use AWS Snowball Edge Storage Optimized 210TB for offline data migration. The AWS Snowball Edge Storage Optimized 210TB device supports two pricing options for data migration: less than 100TB, and from 100TB to 210TB pricing. To learn more, visit the AWS Snowball Pricing, Snow product page and Snow Family documentation.

Amazon Bedrock now available in the Asia Pacific (Seoul) and US East (Ohio) Regions

Published Date: 2024-10-02 17:00:00

Beginning today, customers can use Amazon Bedrock in the Asia Pacific (Seoul) and US East (Ohio) Regions to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools. Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. To get started, visit the Amazon Bedrock page and see the Amazon Bedrock documentation for more details.
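As a minimal sketch, the snippet below invokes a model through the Bedrock Converse API in the newly supported Seoul Region. The model ID is a placeholder assumption; use any model enabled for your account in that Region.

```python
# Sketch: calling Bedrock in ap-northeast-2 (Seoul). The model ID is illustrative.
def ask_bedrock(prompt, model_id="anthropic.claude-3-haiku-20240307-v1:0"):
    """Send a single-turn prompt and return the model's text reply."""
    import boto3  # lazy import keeps the sketch self-contained
    runtime = boto3.client("bedrock-runtime", region_name="ap-northeast-2")
    response = runtime.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]
```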

New VMware Strategic Partner Incentive (SPI) for Managed Services in AWS Partner Central

Published Date: 2024-10-01 21:40:00

Today, Amazon Web Services, Inc. (AWS) announces a new VMware SPI for Managed Services as part of the Migration Acceleration Program (MAP) in AWS Partner Central. Eligible AWS Partners who also provide managed services post-migration can now leverage the VMware SPI for Managed Services to accelerate VMware customer migration opportunities. This new VMware SPI for Managed Services is available through the enhanced MAP template in AWS Partner Central, which provides better speed to market with fewer AWS approval stages. With this enhancement, the AWS Partner Funding Portal (APFP) automatically calculates the eligible VMware SPI for Managed Services, improving overall partner productivity by eliminating manual steps. The VMware SPI for Managed Services is now available to all Partners on the Services path at the Validated or higher stage, including all AWS Migration and Modernization Competency Partners. To learn more, review the 2024 APFP user guide.

Amazon Redshift launches RA3.large instances

Published Date: 2024-10-01 21:30:00

Amazon Redshift launches RA3.large, a new smaller size in the RA3 node type with 2 vCPU and 16 GiB memory. RA3.large gives you more flexibility in compute options to choose from based on your workload requirements. Amazon Redshift RA3.large offers all the innovation of Redshift Managed Storage (RMS), including scaling and paying for compute and storage independently, data sharing, write operations support for concurrency scaling, Zero-ETL, and Multi-AZ. Along with the already available sizes in the RA3 node type, RA3.16xlarge, RA3.4xlarge, and RA3.xlplus, the introduction of RA3.large gives you even more compute sizing options to address diverse workload and price-performance requirements. To get started with RA3.large, you can create a cluster with the AWS Management Console or the create cluster API. To upgrade a cluster from your Redshift DC2 environment to an RA3 cluster, you can take a snapshot of your existing cluster and restore it to an RA3 cluster, or resize from your existing cluster to a new RA3 cluster. To learn more about the RA3 node type, see the cluster management guide and the ’Upgrading to RA3 node type’ documentation. You can find more information on pricing by visiting the Amazon Redshift pricing page. RA3.large is generally available in all commercial regions where the RA3 node type is available. For more details on regional availability, see the ’RA3 node type availability’ documentation.
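A minimal sketch of creating a cluster on the new node size via the API, assuming boto3 and placeholder identifiers and credentials:

```python
# Sketch: provisioning a 2-node RA3.large cluster. Names and credentials are placeholders.
def create_ra3_large_cluster(cluster_id, username, password):
    """Create a multi-node Redshift cluster on the new RA3.large size."""
    import boto3  # lazy import keeps the sketch self-contained
    redshift = boto3.client("redshift")
    return redshift.create_cluster(
        ClusterIdentifier=cluster_id,
        NodeType="ra3.large",      # new smaller RA3 size: 2 vCPU, 16 GiB memory
        ClusterType="multi-node",
        NumberOfNodes=2,
        MasterUsername=username,
        MasterUserPassword=password,
    )
```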

AWS announces Reserved Nodes flexibility for Amazon ElastiCache

Published Date: 2024-10-01 19:55:00

Today we’re announcing enhancements to Amazon ElastiCache Reserved Nodes that make them flexible and easier to use, helping you get the most out of your reserved nodes discount. Reserved nodes provide you with a significant discount compared to on-demand node prices, enabling you to optimize costs based on your expected usage. Previously, you needed to purchase a reservation for a specified node type (e.g., cache.r7g.xlarge) and would only be eligible for a discount on the given type, with no flexibility. With this feature, ElastiCache reserved nodes offer size flexibility within an instance family (or node family) and AWS region. This means that your existing discounted reserved node rate will be applied automatically to usage of all sizes in the same node family. For example, if you purchase a r7g.xlarge reserved node and need to scale to a larger node such as r7g.2xlarge, your reserved node discounted rate is automatically applied to 50% of the usage of the r7g.2xlarge node in the same AWS Region. The size flexibility capability will reduce the time that you need to spend managing your reserved nodes. With this feature, you can get the most out of your discount even if your capacity needs change. Amazon ElastiCache reserved node size flexibility is available in all AWS Regions, including the AWS GovCloud (US) Regions and China Regions. To learn more, visit Amazon ElastiCache, the ElastiCache user guides and our blog post.
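The 50% figure in the example above follows from normalized size units, the same doubling scheme AWS uses for instance size flexibility (large = 4 units, xlarge = 8, 2xlarge = 16, and so on). A small illustrative calculation, with the unit table treated as an assumption:

```python
# Normalized-size arithmetic behind reserved-node size flexibility (illustrative).
# Each size doubles the units of the previous one, so a reserved xlarge node
# covers 8/16 = 50% of a 2xlarge node's usage in the same family and Region.
SIZE_UNITS = {"large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32}

def reserved_coverage(reserved_size, running_size):
    """Fraction of a running node's usage covered by one reserved node."""
    return min(1.0, SIZE_UNITS[reserved_size] / SIZE_UNITS[running_size])

# reserved_coverage("xlarge", "2xlarge") -> 0.5, matching the announcement's example
```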

AWS Chatbot adds support to centrally manage access to AWS accounts from Slack and Microsoft Teams with AWS Organizations

Published Date: 2024-10-01 17:00:00

AWS announces general availability of AWS Organizations support in AWS Chatbot. AWS customers can now centrally govern access to their accounts from Slack and Microsoft Teams with AWS Organizations. This launch introduces a chatbot management policy type in AWS Organizations to control access to your organization's accounts from chat channels. Using Service Control Policies (SCPs), customers can also globally enforce permission limits on CLI commands originating from chat channels. With this launch, customers can use chatbot policies and multi-account management services in AWS Organizations to determine which permissions models, chat applications, and chat workspaces can be used to access their accounts. For example, you can restrict access to production accounts from chat channels in designated workspaces/teams. Customers can also use SCPs to specify guardrails on the CLI command tasks executed from chat channels. For example, you can deny all rds:delete-db-cluster CLI actions originating from chat channels. AWS Organizations support in AWS Chatbot is available at no additional cost in all AWS Regions where AWS Chatbot is offered. Visit the Securing your AWS organization in AWS Chatbot documentation and blog to learn more.

Amazon EMR Serverless introduces Job Run Concurrency and Queuing controls

Published Date: 2024-10-01 17:00:00

Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce job run admission control on Amazon EMR Serverless with support for job run concurrency and queuing controls. Job run concurrency and queuing enable you to configure the maximum number of concurrent job runs for an application and automatically queue all other submitted job runs. This prevents job run failures caused when API limits are exceeded due to a spike in job run submissions or when resources are exhausted, either due to an account or application's maximum concurrent vCPUs limit or an underlying subnet's IP address limit being exceeded. Job run queuing also simplifies job run management by eliminating the need to build complex queuing management systems to retry failed jobs due to limit errors (e.g., maximum concurrent vCPUs, subnet IP address limits). With this feature, jobs are automatically queued and processed as concurrency slots become available, ensuring efficient resource utilization and preventing job failures. Amazon EMR Serverless job run concurrency and queuing is available in all AWS Regions where Amazon EMR Serverless is available, including the AWS GovCloud (US) Regions and excluding China regions. To learn more, visit Job concurrency and queuing in the EMR Serverless documentation.
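A hedged sketch of turning the controls on for an existing application; the `schedulerConfiguration` field names are an assumption based on the EMR Serverless API, and the application ID is a placeholder:

```python
# Sketch: capping concurrent job runs and queuing the overflow on an
# EMR Serverless application. Field names are assumptions; verify in the docs.
def set_concurrency_and_queuing(application_id, max_concurrent_runs, queue_timeout_minutes=60):
    """Limit concurrent job runs; extra submissions queue until a slot frees up."""
    import boto3  # lazy import keeps the sketch self-contained
    emr = boto3.client("emr-serverless")
    return emr.update_application(
        applicationId=application_id,
        schedulerConfiguration={
            "maxConcurrentRuns": max_concurrent_runs,
            "queueTimeoutMinutes": queue_timeout_minutes,  # fail queued runs after this long
        },
    )
```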

Amazon S3 adds Service Quotas support for S3 general purpose buckets

Published Date: 2024-10-01 17:00:00

You can now manage your Amazon S3 general purpose bucket quotas in Service Quotas. Using Service Quotas, you can view the total number of buckets in an AWS account, compare that number to your bucket quota, and request a service quota increase. You can get started using the Amazon S3 page on the Service Quotas console, AWS SDK, or AWS CLI. Service Quotas support for S3 is available in the US East (N. Virginia) and China (Beijing) AWS Regions. To learn more about using Service Quotas with S3 buckets, visit the S3 User Guide.
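A small sketch of looking the bucket quota up via the SDK; matching the quota by name avoids hard-coding a quota code, and the name filter used here is an assumption:

```python
# Sketch: finding the S3 general purpose bucket quota in Service Quotas.
# The name-based match is an assumption; the console shows the exact quota name.
def bucket_quota(region="us-east-1"):
    """Return the first S3 quota whose name mentions buckets."""
    import boto3  # lazy import keeps the sketch self-contained
    sq = boto3.client("service-quotas", region_name=region)
    for page in sq.get_paginator("list_service_quotas").paginate(ServiceCode="s3"):
        for quota in page["Quotas"]:
            if "bucket" in quota["QuotaName"].lower():
                return quota  # includes QuotaCode, Value, and Adjustable
```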

NICE DCV renames to Amazon DCV and releases version 2024.0 with support for Ubuntu 24.04

Published Date: 2024-10-01 17:00:00

Amazon announces DCV version 2024.0. In this latest release, NICE DCV has been renamed to Amazon DCV. The new DCV version introduces several enhancements, including support for Ubuntu 24.04 and enabling the QUIC UDP protocol by default. Amazon DCV is a high-performance remote display protocol designed to help customers securely access remote desktop or application sessions, including 3D graphics applications hosted on servers with high-performance GPUs. Amazon DCV version 2024.0 introduces the following updates, features, and improvements:

  • Renames to Amazon DCV. NICE DCV is now renamed as Amazon DCV. Additionally, Amazon has consolidated the WorkSpaces Streaming Protocol (WSP), used in Amazon WorkSpaces, with Amazon DCV. The renaming does not affect customer workloads, and there is no change to folder paths and internal tooling names.
  • Supports Ubuntu 24.04, the latest LTS version of Ubuntu with the latest security patches and updates, providing improved stability and reliability. Additionally, the DCV client on Ubuntu 24.04 now natively supports Wayland, providing better performance through more efficient graphical rendering.
  • Enables the QUIC UDP protocol by default, allowing end users to receive an optimized streaming experience.
  • Adds the ability to blank the Linux host screen when a remote user is connected to the Linux server in a console session, preventing users physically present near the server from seeing the screen and interacting with the remote session using the input devices connected to the host.

For more information, please see the Amazon DCV 2024.0 release notes or visit the Amazon DCV webpage to get started with DCV.

Amazon Bedrock Knowledge Bases now provides option to stop ingestion jobs

Published Date: 2024-10-01 17:00:00

Today, Amazon Bedrock Knowledge Bases is announcing the general availability of the stop ingestion API. This new API offers you greater control over data ingestion workflows by allowing you to stop an ongoing ingestion job that you no longer want to continue. Previously, you had to wait for the full completion of an ingestion job, even in cases where you no longer desired to ingest from the data source or needed to make other adjustments. With the introduction of the new "StopIngestionJob" API, you can now stop an in-progress ingestion job with a single API call. For example, you can use this feature to quickly stop an ingestion job you accidentally initiated, or if you want to change the documents in your data source. This enhanced flexibility enables you to rapidly respond to changing requirements and optimize your costs. This new capability is available across all AWS Regions where Amazon Bedrock Knowledge Bases is available. To learn more about stopping ingestion jobs and the other capabilities of Amazon Bedrock Knowledge Bases, please refer to the documentation.
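The single API call looks roughly like this; the knowledge base, data source, and job identifiers are placeholders, and the response shape is an assumption based on the Bedrock Agent API:

```python
# Sketch: stopping an in-progress Knowledge Bases ingestion job.
# All identifiers are placeholders you would take from your own resources.
def stop_ingestion(kb_id, data_source_id, job_id):
    """Stop an ongoing ingestion job and return its new status."""
    import boto3  # lazy import keeps the sketch self-contained
    agent = boto3.client("bedrock-agent")
    response = agent.stop_ingestion_job(
        knowledgeBaseId=kb_id,
        dataSourceId=data_source_id,
        ingestionJobId=job_id,
    )
    return response["ingestionJob"]["status"]  # assumed shape, e.g. STOPPING
```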

Amazon Data Firehose delivers data streams into Apache Iceberg format tables in Amazon S3

Published Date: 2024-10-01 17:00:00

Amazon Data Firehose (Firehose) can now deliver data streams into Apache Iceberg tables in Amazon S3. Firehose enables customers to acquire, transform, and deliver data streams into Amazon S3, Amazon Redshift, OpenSearch, Splunk, Snowflake, and other destinations for analytics. With this new feature, Firehose integrates with Apache Iceberg, so customers can deliver data streams directly into Apache Iceberg tables in their Amazon S3 data lake. Firehose can acquire data streams from Kinesis Data Streams, Amazon MSK, or the Direct PUT API, and is also integrated to acquire streams from AWS services such as AWS WAF web ACL logs, Amazon CloudWatch Logs, Amazon VPC Flow Logs, AWS IoT, Amazon SNS, Amazon API Gateway access logs and many others listed here. Customers can stream data from any of these sources directly into Apache Iceberg tables in Amazon S3, and avoid multi-step processes. Firehose is serverless, so customers can simply set up a stream by configuring the source and destination properties, and pay based on bytes processed. The new feature also allows customers to route records in a data stream to different Apache Iceberg tables based on the content of the incoming record. To route records to different tables, customers can configure routing rules using JSON expressions. Additionally, customers can specify if the incoming record should apply a row-level update or delete operation in the destination Apache Iceberg table, and automate processing for data correction and right-to-forget scenarios. To get started, visit Amazon Data Firehose documentation, pricing, and console.
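Once a stream with an Iceberg destination exists, producing into it is the same Direct PUT call as for any other destination. A minimal sketch, with the stream name and record contents as placeholders:

```python
# Sketch: Direct PUT of one JSON record onto a Firehose stream whose
# destination is an Apache Iceberg table in S3. Stream name is a placeholder.
def send_event(stream_name, record):
    """Serialize a dict as newline-delimited JSON and put it on the stream."""
    import json
    import boto3  # lazy import keeps the sketch self-contained
    firehose = boto3.client("firehose")
    return firehose.put_record(
        DeliveryStreamName=stream_name,
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )
```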

Amazon MSK APIs now support AWS PrivateLink

Published Date: 2024-10-01 17:00:00

Amazon Managed Streaming for Apache Kafka (Amazon MSK) APIs now come with AWS PrivateLink support, allowing you to invoke Amazon MSK APIs from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. By default, all communication between your Apache Kafka clients and your Amazon MSK provisioned clusters is private, and your data never traverses the internet. With this launch, clients can also invoke MSK APIs via a private endpoint. This allows client applications with strict security requirements to perform MSK specific actions, such as fetching bootstrap connection strings or describing cluster details, without needing to communicate over a public connection. AWS PrivateLink support for Amazon MSK is available in all AWS Regions where Amazon MSK is available. To get started, follow the directions provided in the AWS PrivateLink documentation. To learn more about Amazon MSK, visit the Amazon MSK documentation.
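The setup amounts to creating an interface VPC endpoint for the MSK API service. A sketch follows; the endpoint service name format is an assumption (check the PrivateLink documentation for the exact name in your Region), and the VPC, subnet, and security group IDs are placeholders.

```python
# Sketch: interface VPC endpoint for the MSK control-plane APIs.
# Service name format is assumed; all resource IDs are placeholders.
def create_msk_api_endpoint(vpc_id, subnet_ids, security_group_ids, region="us-east-1"):
    """Create a PrivateLink endpoint so MSK API calls stay inside the VPC."""
    import boto3  # lazy import keeps the sketch self-contained
    ec2 = boto3.client("ec2", region_name=region)
    return ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        ServiceName=f"com.amazonaws.{region}.kafka",  # assumed service name
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
        SecurityGroupIds=security_group_ids,
        PrivateDnsEnabled=True,  # lets the default MSK endpoint resolve privately
    )
```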

Amazon Connect launches the ability to initiate outbound SMS contacts

Published Date: 2024-10-01 17:00:00

Amazon Connect now supports the ability to initiate outbound SMS contacts, enabling you to help increase customer satisfaction by engaging your customers on their preferred communication channel. You can now deliver proactive SMS experiences for scenarios such as post-contact surveys, appointment reminders, and service updates, allowing customers to respond at their convenience. Additionally, you can offer customers the option to switch to SMS while waiting in a call queue, eliminating their hold time. To get started, add the new Send message block to a contact flow or use the new StartOutboundChatContact API to initiate outbound SMS contacts. This feature is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), and Europe (London). To learn more and get started, please refer to the documentation for the Send message flow block and StartOutboundChatContact API.
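A hedged sketch of the API call; the parameter shapes (in particular the `SegmentAttributes` subtype value) are assumptions based on the StartOutboundChatContact API reference, and the instance, flow, and phone numbers are placeholders:

```python
# Sketch: initiating an outbound SMS contact. Parameter shapes are assumptions;
# verify them against the StartOutboundChatContact API reference.
def start_outbound_sms(instance_id, flow_id, source_number, destination_number):
    """Start an SMS conversation from a claimed number to a customer's number."""
    import boto3  # lazy import keeps the sketch self-contained
    connect = boto3.client("connect")
    return connect.start_outbound_chat_contact(
        InstanceId=instance_id,
        ContactFlowId=flow_id,
        SourceEndpoint={"Type": "TELEPHONE_NUMBER", "Address": source_number},
        DestinationEndpoint={"Type": "TELEPHONE_NUMBER", "Address": destination_number},
        SegmentAttributes={"connect:Subtype": {"ValueString": "connect:SMS"}},
    )
```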

AWS Incident Detection and Response now available in Japanese

Published Date: 2024-10-01 17:00:00

Starting today, AWS Incident Detection and Response supports incident engagement in Japanese. AWS Incident Detection and Response offers AWS Enterprise Support customers proactive engagement and incident management for critical workloads. With AWS Incident Detection and Response, AWS Incident Management Engineers (IMEs) are available 24/7 to detect incidents and engage with you within five minutes of an alarm from your workloads, providing guidance for mitigation and recovery. This feature allows AWS Enterprise Support customers to interact with Japanese-speaking IMEs who will provide proactive engagement and incident management for critical incidents. To use this service in Japanese, customers must select Japanese as their preferred language during workload onboarding. For more details, including information on supported regions and additional specifics about the AWS Incident Detection and Response service, please visit the product page.

AWS Announces AWS re:Post Agent, a generative AI-powered virtual assistant

Published Date: 2024-09-30 21:45:00

AWS re:Post launches re:Post Agent, a generative AI-powered assistant that's designed to enhance customer interactions by offering intelligent and near real-time responses on re:Post. re:Post Agent provides the first response to questions in the re:Post community. Cloud developers can now get general technical guidance faster to successfully build and operate their cloud workloads. With re:Post Agent, you have a generative AI companion, augmented by the community, that expands the available AWS knowledge. Community experts can earn points to build their reputation status by reviewing answers from re:Post Agent. Visit AWS re:Post to collaborate with re:Post Agent and experience the power of generative AI-driven technical guidance.

Amazon AppStream 2.0 increases application settings storage limit

Published Date: 2024-09-30 20:15:00

Amazon AppStream 2.0 has expanded the default size limit for application settings persistence from 1GB to 5GB. This increase allows end users to store more application data and settings with no manual intervention and without impacting the performance or session setup time. Application settings persistence allows users' customizations and configurations to persist across sessions. When enabled, AppStream 2.0 automatically saves changes to a Virtual Hard Disk (VHD) stored in an S3 bucket unique to your account and AWS Region. This helps in enhancing the user experience by enabling users to resume work where they left off. With expanded default storage size and performance improvements, AppStream 2.0 makes it easier than ever for end users to retain their application data, settings, and customizations across sessions. The VHD syncs efficiently even for multi-gigabyte files due to optimizations in data syncing and access times. This feature is available at no additional cost in all regions where Amazon AppStream 2.0 is available. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable this feature for your users, you must use an image with an AppStream 2.0 agent released on or after September 18, 2024, or an image using Managed AppStream 2.0 image updates released on or after September 20, 2024.

Amazon EventBridge announces new event delivery latency metric for Event Buses

Published Date: 2024-09-30 19:15:00

Amazon EventBridge Event Bus now provides an end-to-end event delivery latency metric in Amazon CloudWatch that tracks the duration between event ingestion and successful delivery to the targets on your Event Bus. This new IngestionToInvocationSuccessLatency metric allows you to detect and respond to event processing delays caused by under-performing, under-scaled, or unresponsive targets. Amazon EventBridge Event Bus is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. You can set up rules to determine where to send your events, allowing for applications to react to changes in your systems as they occur. With the new IngestionToInvocationSuccessLatency metric you can now better monitor and understand event delivery latency to your targets, increasing the observability of your event-driven architecture. Support for the new end-to-end latency metric for Event Buses is now available in all commercial AWS Regions. To learn more about the new IngestionToInvocationSuccessLatency metric for Amazon EventBridge Event Buses, please read our blog post and documentation.
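A sketch of pulling the new metric from CloudWatch; the `AWS/Events` namespace is standard for EventBridge, but the dimension name used here is an assumption to verify against the EventBridge metrics documentation:

```python
# Sketch: reading the p99 of IngestionToInvocationSuccessLatency for a bus.
# The EventBusName dimension is an assumption; confirm it in the docs.
def delivery_latency_p99(event_bus_name, hours=1):
    """Fetch recent p99 end-to-end delivery latency for an event bus."""
    from datetime import datetime, timedelta, timezone
    import boto3  # lazy import keeps the sketch self-contained
    cw = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    return cw.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName="IngestionToInvocationSuccessLatency",
        Dimensions=[{"Name": "EventBusName", "Value": event_bus_name}],
        StartTime=now - timedelta(hours=hours),
        EndTime=now,
        Period=300,
        ExtendedStatistics=["p99"],
    )
```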

Launch Amazon CloudWatch Internet Monitor from Amazon Network Load Balancer console

Published Date: 2024-09-30 18:42:00

By adding your Network Load Balancer (NLB) to a monitor, you can gain improved visibility into your application's internet performance and availability using Amazon CloudWatch Internet Monitor. You can now create or associate a monitor for an NLB directly when you create an NLB in the AWS Management Console. You can create a monitor for the load balancer, or add the load balancer to an existing monitor, directly from the Integrations tab on the console. With a monitor, you can get detailed metrics about your application's internet traffic that goes through a load balancer, with the ability to drill down into specific locations and internet service providers (ISPs). You also get health event alerts for internet issues that affect your application customers, and can review specific recommendations for improving the internet performance and availability for your application. After you create a monitor, you can customize it at any time by visiting the Internet Monitor console in Amazon CloudWatch. To learn more about how you can use and customize a monitor, see the Internet Monitor user guide documentation.
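Outside the console, the same association can be made with the Internet Monitor API. A brief sketch, with the monitor name and NLB ARN as placeholders:

```python
# Sketch: creating a monitor that tracks an NLB's internet traffic.
# Monitor name and load balancer ARN are placeholders.
def monitor_nlb(monitor_name, nlb_arn):
    """Create an Internet Monitor monitor covering a Network Load Balancer."""
    import boto3  # lazy import keeps the sketch self-contained
    im = boto3.client("internetmonitor")
    return im.create_monitor(
        MonitorName=monitor_name,
        Resources=[nlb_arn],            # the NLB's ARN
        TrafficPercentageToMonitor=100,  # monitor all traffic through the NLB
    )
```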

Amazon Inspector enhances engine for Lambda standard scanning

Published Date: 2024-09-30 17:00:00

Today, Amazon Inspector announced an upgrade to the engine powering its Lambda standard scanning. This upgrade will provide you with a more comprehensive view of the vulnerabilities in the third-party dependencies used in your Lambda functions and associated Lambda layers in your environment. With the launch of this enhanced scanning engine, you will benefit from these capabilities without any disruption to your existing workflows. Existing customers can expect to see some findings closed as the new engine re-evaluates your existing resources to better assess risks, while also surfacing new vulnerabilities. Amazon Inspector is a vulnerability management service that continually scans AWS workloads including Amazon EC2 instances, container images, and AWS Lambda functions for software vulnerabilities, code vulnerabilities, and unintended network exposure across your entire AWS organization. This improved version of Lambda standard scanning is available in all commercial and AWS GovCloud (US) Regions where Amazon Inspector is available. To learn more and get started with continual vulnerability scanning of your workloads, visit:

AWS CloudShell extends most recent capabilities to all commercial Regions

Published Date: 2024-09-30 17:00:00

AWS CloudShell now offers Amazon Virtual Private Cloud (VPC) support, improved environment start times, and support for Docker environments in all commercial Regions where CloudShell is live. Previously, these features were only available in a limited set of CloudShell’s live commercial Regions. These features increase the productivity of CloudShell customers and enable a consistent experience across all CloudShell commercial Regions. CloudShell VPC support allows you to create CloudShell environments in a VPC, which enables you to use CloudShell securely within the same subnet as other resources in your VPC without the need for additional network configuration. Start times have been improved, enabling customers to begin using CloudShell more quickly. With the Docker integration, CloudShell users can initialize Docker containers on demand and connect to them to prototype or deploy Docker-based resources with the AWS CDK Toolkit. These features are now supported in all AWS Commercial Regions where AWS CloudShell is available today. For more information about the AWS Regions where AWS CloudShell is available, see the AWS Region table. Learn more about these expanded capabilities in the CloudShell Documentation, including specific entries on VPC Support and the Docker integration.

Amazon Bedrock Model Evaluation now available in the AWS GovCloud (US-West) Region

Published Date: 2024-09-30 17:00:00

Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. Model evaluation provides built-in curated datasets or you can bring your own datasets. Amazon Bedrock’s interactive interface guides you through model evaluation. You simply choose automatic evaluation, select the task type and metrics, and upload your prompt dataset. Amazon Bedrock then runs evaluations and generates a report, so you can easily understand how the model performed against the metrics you selected, and choose the right one for your use case. Using this report in conjunction with the cost and latency metrics from Amazon Bedrock, you can select the model with the required quality, cost, and latency tradeoff. Model Evaluation on Amazon Bedrock is now Generally Available in AWS GovCloud (US-West) in addition to many commercial Regions. To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.

Amazon SES adds HTTPS open tracking for custom domains

Published Date: 2024-09-30 17:00:00

Amazon Simple Email Service (SES) now supports HTTPS for tracking open and click events when using custom domains. Using HTTPS helps meet security compliance requirements and reduces the chances of email delivery issues with mailbox providers that reject non-secure links. The new feature provides the flexibility to configure HTTPS as mandatory for both open and click tracking, or make it optional based on the protocol of the links in your email. Previously, HTTPS was only available for click event tracking with custom domains. If you required HTTPS for tracking both open and click events, you were limited to the default tracking approach where the links in your emails were wrapped with an Amazon-provided domain that immediately redirected recipients to the intended destination. Now, you can secure the tracking of both open and click events while providing a trustworthy and branded experience for your recipients by using your own custom domain. This can help increase deliverability metrics and protect your sender reputation by isolating it from the reputation of other senders. You can enable HTTPS for open and click tracking with custom domains in all AWS Regions where Amazon SES is offered. To learn more, see the Amazon SES documentation for configuring custom domains for open and click tracking.
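A sketch of making HTTPS mandatory on a configuration set's tracking options via the SES v2 API; the `HttpsPolicy` parameter and its values are assumptions to verify against the API reference, and the configuration set and domain are placeholders:

```python
# Sketch: requiring HTTPS for open and click tracking on a custom domain.
# HttpsPolicy values are assumptions; configuration set and domain are placeholders.
def require_https_tracking(config_set, redirect_domain):
    """Set a custom tracking domain with HTTPS required for all tracking links."""
    import boto3  # lazy import keeps the sketch self-contained
    ses = boto3.client("sesv2")
    return ses.put_configuration_set_tracking_options(
        ConfigurationSetName=config_set,
        CustomRedirectDomain=redirect_domain,  # e.g. tracking.example.com
        HttpsPolicy="REQUIRE",  # assumed alternatives: REQUIRE_OPEN_ONLY, OPTIONAL
    )
```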

Amazon Redshift announces mTLS support for Amazon MSK

Published Date: 2024-09-30 17:00:00

Amazon Redshift streaming ingestion already supports Amazon IAM authentication, and with this announcement we are now extending authentication methods with the addition of mutual transport layer security (mTLS) authentication between an Amazon Redshift provisioned cluster or serverless workgroup and an Amazon Managed Streaming for Apache Kafka (MSK) provisioned or serverless cluster. mTLS is an industry standard for authentication that provides the means for a server to authenticate a client it's sending information to, and for the client to authenticate the server. The benefit of using mTLS is to provide a trusted authentication method that relies on each party (client and server) exchanging a certificate issued by mutually trusted certificate authorities. This is a common requirement for compliance reasons in a variety of applications across several industries, such as the financial, retail, government, and healthcare industries. mTLS authentication is available starting with the Amazon Redshift patch 184 release in all AWS regions where Amazon Redshift and Amazon MSK are currently available. See AWS service availability by region for more information. To learn more about using mTLS authentication with Amazon Redshift streaming, please refer to the Amazon MSK and mTLS sub-sections of the Amazon Redshift streaming documentation.

Announcing sample-based partitioning for AWS HealthOmics variant stores

Published Date: 2024-09-30 17:00:00

We are excited to announce that AWS HealthOmics variant stores are now optimized to improve sample-based queries, saving time and query costs for customers. AWS HealthOmics helps customers accelerate scientific breakthroughs by providing a fully managed service designed to handle bioinformatics and drug discovery workflows and storage at any scale. With this release, any new variant store customers create will be automatically partitioned by sample. This feature automatically partitions data loaded into a variant store by the sample information. Because of this partitioning, any analysis that includes sample-level filtering no longer needs to scan the full set of data, leading to a lower query cost and faster results. Sample-based queries are common when using clinical outcome or phenotypic information to perform filtering. Sample partitioning is now supported for all new variant stores created in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). To get started using the variant store, see the AWS HealthOmics documentation.

Amazon Q in QuickSight now generates data stories that are personalized to users

Published Date: 2024-09-30 17:00:00

Amazon Q in QuickSight announces personalization in data stories. A capability of Amazon Q in QuickSight, data stories helps users generate visually compelling documents and presentations that provide insights, highlight key findings, and recommend actionable next steps. With the addition of personalization to data stories, the generated narratives are tailored to the user and leverage employee location and job role to provide commentary that is more specific to the user’s organization. Amazon Q in QuickSight brings the power of Generative Business Intelligence to customers, enabling them to leverage natural language capabilities of Amazon Q to quickly extract insights from data, make better business decisions, and accelerate the work of business users. Personalization is automatically enabled for data stories and uses your organization’s employee profile data, without any additional setup. Amazon Q in QuickSight sources employee profile information from AWS IAM Identity Center that is connected to your organization’s identity provider. Personalization in data stories is initially available in the US East (N. Virginia) and US West (Oregon) AWS Regions. For more information, see Amazon QuickSight User Guide.

Amazon Aurora supports PostgreSQL 16.4, 15.8, 14.13, 13.16, and 12.20

Published Date: 2024-09-30 17:00:00

Amazon Aurora PostgreSQL-Compatible Edition now supports PostgreSQL versions 16.4, 15.8, 14.13, 13.16, and 12.20. These releases contain product improvements and bug fixes made by the PostgreSQL community, along with Aurora-specific security and feature improvements. These releases also contain new Babelfish features and improvements. As a reminder, Amazon Aurora PostgreSQL 12 end of standard support is February 28, 2025. You can either upgrade to a newer major version or continue to run Amazon Aurora PostgreSQL 12 past the end of standard support date with RDS Extended Support. These releases are available in all commercial AWS Regions and AWS GovCloud (US) Regions, except China Regions. You can initiate a minor version upgrade by modifying your DB cluster. Please review the Aurora documentation to learn more. To learn which versions support each feature, head to our feature parity page. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
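Initiating the minor version upgrade by modifying the DB cluster can be sketched with boto3 as below. The cluster identifier and target version are placeholders; with `ApplyImmediately=False` the upgrade is deferred to the next maintenance window.

```python
# Sketch (assumed setup): triggering an Aurora minor version upgrade via the
# RDS ModifyDBCluster API. Build the parameters separately so they can be
# inspected before anything is sent to AWS.

def upgrade_params(cluster_id: str, engine_version: str,
                   apply_immediately: bool = False) -> dict:
    """Build ModifyDBCluster parameters for a minor engine version upgrade."""
    return {
        "DBClusterIdentifier": cluster_id,
        "EngineVersion": engine_version,
        # False = wait for the next maintenance window; True = upgrade now.
        "ApplyImmediately": apply_immediately,
    }


params = upgrade_params("my-aurora-pg-cluster", "16.4")
print(params)

# To apply (requires AWS credentials and an existing cluster):
# import boto3
# rds = boto3.client("rds")
# rds.modify_db_cluster(**params)
```

The same call works from the AWS CLI as `aws rds modify-db-cluster`; the parameter names map one-to-one.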

Amazon Aurora MySQL now supports RDS Data API

Published Date: 2024-09-30 17:00:00

Amazon Aurora MySQL-Compatible Edition now supports a redesigned RDS Data API for Aurora Serverless v2 and Aurora provisioned database instances. You can now access these Aurora clusters via a secure HTTP endpoint and run SQL statements without the use of database drivers and without managing connections. This follows the launch of Data API for Amazon Aurora PostgreSQL-Compatible Edition for Aurora Serverless v2 and Aurora provisioned database instances last year. Data API was originally only available for single-instance Aurora Serverless v1 clusters with a 1,000 requests per second (RPS) rate limit. Based on customer feedback, Data API has now been redesigned for increased scalability. Data API will not impose a rate limit on requests made to Aurora Serverless v2 and Aurora provisioned clusters. Data API eliminates the use of drivers and improves application scalability by automatically pooling and sharing database connections (connection pooling) rather than requiring customers to manage connections. Customers can call Data API via the AWS SDK and CLI. Data API also enables access to Aurora databases via AWS AppSync GraphQL APIs. API commands supported in the redesigned Data API are backward compatible with Data API for Aurora Serverless v1 for easy customer application migrations. Data API supports Aurora MySQL 3.07 and higher versions in 14 regions. Customers currently using Data API for Aurora Serverless v1 are encouraged to migrate to Aurora Serverless v2 to take advantage of the redesigned Data API. To learn more, read the documentation.
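A minimal sketch of running SQL over the Data API's HTTP endpoint is shown below: no database driver and no connection management, just a request with the cluster ARN and a Secrets Manager ARN for credentials. The ARNs, database name, and table are placeholders.

```python
# Sketch (assumed setup): calling the RDS Data API against an Aurora MySQL
# cluster. The request is built as a plain dict so it can be inspected
# without AWS credentials; the actual call is shown commented out.

def statement_params(cluster_arn: str, secret_arn: str,
                     database: str, sql: str) -> dict:
    """Build the ExecuteStatement request for the rds-data client."""
    return {
        "resourceArn": cluster_arn,   # the Aurora cluster to query
        "secretArn": secret_arn,      # Secrets Manager secret with DB credentials
        "database": database,
        "sql": sql,
    }


params = statement_params(
    "arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-mysql",
    "arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret",
    "mydb",
    "SELECT id, name FROM customers LIMIT 10",
)
print(params)

# To execute (requires credentials and Data API enabled on the cluster):
# import boto3
# rds_data = boto3.client("rds-data")
# response = rds_data.execute_statement(**params)
# rows = response["records"]
```

Because connection pooling happens inside the service, the same pattern works from short-lived compute such as AWS Lambda without exhausting database connections.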
