Week 26 (24 Jun - 30 Jun)

Amazon GuardDuty EC2 Runtime Monitoring now supports Ubuntu and Debian OS

Published Date: 2024-06-28 21:35:00

The Amazon GuardDuty EC2 Runtime Monitoring eBPF security agent now supports Amazon Elastic Compute Cloud (Amazon EC2) workloads that use the Ubuntu (Ubuntu 20.04, Ubuntu 22.04) and Debian (Debian 11 and Debian 12) operating systems. If you use GuardDuty EC2 Runtime Monitoring with automated agent management, GuardDuty will automatically upgrade the security agent for your Amazon EC2 workloads. If you are not using automated agent management, you are responsible for upgrading the agent manually. You can view the current agent version running in your Amazon EC2 instances on the EC2 runtime coverage page of the GuardDuty console. If you are not yet using GuardDuty EC2 Runtime Monitoring, you can enable the feature for a 30-day free trial with a few steps.

GuardDuty Runtime Monitoring helps you identify and respond to potential threats, including instances or self-managed containers in your AWS environment associated with suspicious network activity, such as querying IP addresses associated with cryptocurrency-related activity, or connections to a Tor network as a Tor relay. Threats to compute workloads often involve remote code execution that leads to the download and execution of malware. GuardDuty Runtime Monitoring provides visibility into suspicious commands that involve malicious file downloads and execution across each step, providing earlier discovery of threats during initial compromise, before they become business-impacting events.
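
Coverage can also be checked programmatically. A minimal boto3 sketch (the coverage filter key and value below are assumptions; confirm them against the GuardDuty ListCoverage API reference):

    import boto3

    guardduty = boto3.client("guardduty")

    # Look up the detector in this Region, then list runtime coverage for EC2.
    detector_id = guardduty.list_detectors()["DetectorIds"][0]
    coverage = guardduty.list_coverage(
        DetectorId=detector_id,
        FilterCriteria={
            "FilterCriterion": [
                {
                    "CriterionKey": "RESOURCE_TYPE",  # assumed filter key
                    "FilterCondition": {"Equals": ["EC2"]},
                }
            ]
        },
    )
    for resource in coverage["Resources"]:
        print(resource)  # includes agent version and coverage status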

EvolutionaryScale’s ESM3, a frontier language model family for biology, now available on AWS

Published Date: 2024-06-28 21:25:00

EvolutionaryScale’s ESM3 1.4B open source language model is now generally available on AWS through Amazon SageMaker JumpStart and AWS HealthOmics, with the full family coming soon. Amazon SageMaker JumpStart is an ML hub with foundation models, built-in algorithms, and prebuilt ML solutions that can be deployed with just a few clicks. AWS HealthOmics is a purpose-built service that helps healthcare and life science organizations analyze biological data.

EvolutionaryScale, a frontier AI research lab and Public Benefit Corporation dedicated to developing AI for biology’s most complex problems, has released the cutting-edge ESM3 family of models. ESM3 is a biological frontier model family capable of generating entirely new proteins that have never existed in nature. ESM3 can generate proteins based on sequence, structure, and/or functional constraints – a novel "programmable biology" approach. Trained on billions of protein sequences spanning 3.8 billion years of evolution, ESM3 is one of the largest and most advanced generative AI models ever applied to biology. EvolutionaryScale’s ESM3 1.4B open source model is available in Amazon SageMaker JumpStart initially in US East (Ohio) and in all available AWS HealthOmics regions, except Asia Pacific (Singapore). To learn more, read the blog and press release. To get started with ESM3, visit the SageMaker JumpStart website and AWS HealthOmics GitHub repository.
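
As a sketch, deploying a JumpStart model from the SageMaker Python SDK looks like the following (the model ID and instance type shown are placeholders; look up the actual ESM3 entry in SageMaker JumpStart):

    from sagemaker.jumpstart.model import JumpStartModel

    # Model ID is hypothetical -- find the real ESM3 identifier in JumpStart.
    model = JumpStartModel(model_id="evolutionaryscale-esm3-1-4b")
    predictor = model.deploy(
        initial_instance_count=1,
        instance_type="ml.g5.2xlarge",  # instance type is an assumption
    )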

Amazon EventBridge announces new console dashboard

Published Date: 2024-06-28 17:30:00

Amazon EventBridge announces a new console dashboard providing you with a centralized view of your EventBridge resources, metrics, and quotas. The dashboard leverages CloudWatch metrics, allowing you to monitor account-level metrics such as PutEvents, Matched Events, and Invocations for Buses, Concurrency and Throttles for Pipes, and Invocations and Errors for ScheduledGroups. Additionally, the dashboard allows you to view your default and applied quotas and navigate to the Service Quotas page to request increases, enabling you to respond quickly to changes in usage.

The Amazon EventBridge Event Bus is a serverless event router that enables you to create scalable event-driven applications by routing events between your own applications, SaaS applications, and AWS services. EventBridge Pipes provides a consistent and cost-effective way to create point-to-point integrations between event producers and consumers. The EventBridge Scheduler makes it simple for developers to create, execute, and manage scheduled tasks at scale. The new console dashboard surfaces account-level metrics, providing deeper insight into your event-driven applications and allowing you to quickly identify and resolve issues as they arise. You can use the dashboard to answer basic questions such as “How many Buses and Pipes have I configured in my account?”, “What was my PutEvent traffic pattern for the last 3 hours?” or “What is the concurrency of my Pipe?”. You can further analyze and customize these dashboards in CloudWatch.
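
Because the dashboard is built on CloudWatch metrics, the same data can be queried programmatically. A sketch with boto3 (metric names in the AWS/Events namespace should be verified against what the dashboard shows):

    import boto3
    from datetime import datetime, timedelta

    cloudwatch = boto3.client("cloudwatch")

    # Pull the last 3 hours of matched-event counts at 5-minute resolution.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/Events",
        MetricName="MatchedEvents",
        StartTime=datetime.utcnow() - timedelta(hours=3),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])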

Amazon EC2 High Memory instances now available in Asia Pacific (Hong Kong) Region

Published Date: 2024-06-28 17:30:00

Starting today, Amazon EC2 High Memory instances with 3 TiB of memory are available in the Asia Pacific (Hong Kong) region. Customers can start using these new High Memory instances with On-Demand and Savings Plan purchase options. Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory. For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, on what this launch means for our SAP customers, you can read his launch blog.

AWS ParallelCluster 3.10 with support for Amazon Linux 2023 and Terraform

Published Date: 2024-06-28 17:00:00

AWS ParallelCluster 3.10 is now generally available. Key features of this release include support for Amazon Linux 2023 and Terraform. With Terraform support, customers can automate deployment and management of clusters similar to how they use Terraform to automate other parts of their AWS infrastructure. Other important features in this release include:

  1. Support for connecting clusters to an external Slurm database daemon (Slurmdbd) to follow best practices of enabling Slurm accounting in a multi-cluster environment.
  2. A new allocation strategy configuration to allocate EC2 Spot instances from the lowest-priced, highest-capacity availability pools to minimize job interruptions and save costs.

For more details on the release, review the AWS ParallelCluster 3.10.0 release notes. AWS ParallelCluster is a fully-supported and maintained open-source cluster management tool that enables R&D customers and their IT administrators to operate high-performance computing (HPC) clusters on AWS. AWS ParallelCluster is designed to automatically and securely provision cloud resources into elastically-scaling HPC clusters capable of running scientific, engineering, and machine-learning (ML/AI) workloads at scale on AWS.

Amazon SageMaker Model Registry now supports cross-account machine learning (ML) model sharing

Published Date: 2024-06-28 17:00:00

Today, we're excited to announce that Amazon SageMaker Model Registry now integrates with AWS Resource Access Manager (AWS RAM), making it easier to securely share and discover machine learning (ML) models across your AWS accounts. Data scientists, ML engineers, and governance officers need access to ML models across different AWS accounts such as development, staging, and production to make the relevant decisions. With this launch, customers can now seamlessly share and access ML models registered in SageMaker Model Registry between different AWS accounts. Customers can simply go to the AWS RAM console or CLI, specify the Amazon SageMaker Model Registry model that needs to be shared, and grant access to specific AWS accounts or to everyone in the organization. Authorized users can then instantly discover and use those shared models in their own AWS accounts. This streamlines the ML workflows, enables better visibility and governance, and accelerates the adoption of ML models across the organization.
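
A minimal boto3 sketch of sharing a model package group via AWS RAM (ARNs and account IDs below are placeholders):

    import boto3

    ram = boto3.client("ram")

    # Share a SageMaker Model Registry model package group with another account.
    share = ram.create_resource_share(
        name="shared-ml-models",
        resourceArns=[
            "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/my-models"
        ],
        principals=["444455556666"],  # target AWS account ID
    )
    print(share["resourceShare"]["resourceShareArn"])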

Amazon EventBridge Pipes now supports AWS PrivateLink

Published Date: 2024-06-28 17:00:00

Amazon EventBridge Pipes now supports AWS PrivateLink, allowing you to access Pipes from within your Amazon Virtual Private Cloud (VPC) without traversing the public internet. With today’s launch, you can leverage EventBridge Pipes features from a private subnet without the need to deploy an internet gateway, configure firewall rules, or set up proxy servers. Amazon EventBridge lets you use events to connect application components, making it easier to build scalable event-driven applications. EventBridge Pipes provides a simple, consistent, and cost-effective way to create point-to-point integrations between event producers and consumers. Pipes enables you to send data from one of 7 different event sources to any of the 20+ targets supported by the EventBridge Event Bus, including HTTPS endpoints through EventBridge API Destinations and event buses themselves. Today’s release of AWS PrivateLink support further reduces the amount of integration code you need to write and infrastructure you need to maintain when building event-driven applications. AWS PrivateLink support for EventBridge Pipes is available in all AWS Regions where EventBridge Pipes is available. To get started, follow the directions provided in the AWS PrivateLink documentation. To learn more about Amazon EventBridge Pipes, visit the EventBridge documentation.
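
Using the feature amounts to creating an interface VPC endpoint for Pipes. A sketch (the endpoint service name follows the usual com.amazonaws.<region>.<service> pattern but is an assumption; confirm it in the AWS PrivateLink documentation):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # VPC and subnet IDs are placeholders.
    endpoint = ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId="vpc-0123456789abcdef0",
        ServiceName="com.amazonaws.us-east-1.pipes",  # assumed service name
        SubnetIds=["subnet-0123456789abcdef0"],
        PrivateDnsEnabled=True,
    )
    print(endpoint["VpcEndpoint"]["VpcEndpointId"])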

Amazon SageMaker now supports SageMaker Studio Personalization

Published Date: 2024-06-28 17:00:00

We are excited to announce that Amazon SageMaker now allows admins to personalize the SageMaker Studio experience for their end users. Admins can choose to hide applications and ML tools from SageMaker Studio based on end-user preferences. Starting today, admins can use the new personalization capability while setting up domains and user profiles in the SageMaker Console or using APIs, and tailor the SageMaker Studio interface. They can curate experiences by selectively showing or hiding specific ML tools, applications, and IDEs for specific personas to align closely with how users interact with the platform. This improves SageMaker Studio usability and provides a more intuitive and user-friendly experience. Data scientists and ML engineers can now easily discover and select ML features required to complete their workflows, leading to better developer productivity. You can get started by creating or editing a domain or a user profile in the SageMaker Console or by using SageMaker APIs. This feature is available in all AWS Regions where SageMaker Studio is currently available. To learn more, visit the documentation.
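
A sketch of hiding tools when creating a user profile with boto3 (the specific setting names and enum values below are assumptions; check the CreateUserProfile API reference):

    import boto3

    sagemaker = boto3.client("sagemaker")

    sagemaker.create_user_profile(
        DomainId="d-xxxxxxxxxxxx",  # placeholder domain ID
        UserProfileName="analyst-profile",
        UserSettings={
            "StudioWebPortalSettings": {
                # Hide apps and ML tools this persona does not use (values assumed).
                "HiddenAppTypes": ["CodeEditor"],
                "HiddenMlTools": ["DataWrangler"],
            }
        },
    )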

Amazon Q in Connect now recommends step-by-step guides

Published Date: 2024-06-28 17:00:00

Amazon Q in Connect, a generative-AI powered assistant for contact center agents, now recommends step-by-step guides in real-time, which agents use to quickly take action to resolve customers' issues. Amazon Q in Connect uses the real-time conversation with a customer to detect the customer's intent and provides a guided workflow that leads an agent through each step needed to solve the issue, reducing handle time and increasing first contact resolution rates and customer satisfaction. For example, when a customer contacts a financial services company, Amazon Q in Connect analyzes the conversation and detects the customer wants to open a retirement plan. Amazon Q in Connect then provides the agent with a guide that enables the agent to collect the necessary information, deliver the required disclosures, and automatically open the account. To learn more about Amazon Q in Connect, please visit the website or see the help documentation.

Amazon WorkSpaces introduces support for Red Hat Enterprise Linux

Published Date: 2024-06-28 17:00:00

AWS today announced support for Red Hat Enterprise Linux on Amazon WorkSpaces Personal. This operating system includes built-in security features that help organizations to run virtual desktops securely, while increasing agility and reducing cost. With this launch, WorkSpaces Personal customers have the flexibility to choose from a wider range of operating systems including Red Hat Enterprise Linux, Ubuntu Desktop, Amazon Linux 2, and Microsoft Windows. With Red Hat Enterprise Linux on WorkSpaces Personal, IT organizations can enable developers to work in an environment that is consistent with their production environment, and provide power users like engineers and data scientists with on-demand access to Red Hat Enterprise Linux environments whenever necessary—quickly spinning up and tearing down instances and managing the entire fleet through the AWS Console, without the burden of capacity planning or license management. WorkSpaces Personal offers a wide range of high-performance, license-included, fully-managed virtual desktop bundles—enabling organizations to only pay for the resources they use.

Red Hat Enterprise Linux on WorkSpaces Personal is available in all AWS Regions where WorkSpaces Personal is available, except for AWS China Regions. Depending on the WorkSpaces Personal running mode, you will be charged hourly or monthly for your virtual desktops. For more details on pricing, refer to Amazon WorkSpaces Pricing. To get started with Red Hat Enterprise Linux on WorkSpaces Personal, log on to the AWS Management Console, navigate to the WorkSpaces service, and follow the Amazon WorkSpaces administration guide.

Announcing Data Quality Definition Language (DQDL) enhancements for AWS Glue Data Quality

Published Date: 2024-06-28 17:00:00

Customers use AWS Glue Data Quality, a feature of AWS Glue, to measure and monitor the quality of their data. They author data quality rules using DQDL to ensure their data is accurate. Customers need the ability to author rules for complex business scenarios that include filter conditions, exclusion conditions, validations for empty values, and composite rules. Previously, customers authored SQL to perform these data quality validations in the CustomSQL rule type. Today, AWS Glue announces a set of new enhancements to DQDL that allow data engineers to easily author complex data quality rules using native rule types. DQDL now supports:

  • A NOT operator, allowing customers to exclude certain values in their rules.
  • New keywords such as NULL, EMPTY, and WHITESPACES_ONLY to author rules that capture missing values without complex regular expressions.
  • Composite rules for customers to author sophisticated business rules. They can now specify options to manage the evaluation order of these rules.
  • WHERE clause in DQDL to filter data before applying rules.

Refer to DQDL guide for more information. AWS Glue Data Quality is available in all commercial regions where AWS Glue is available. To learn more, visit the AWS Glue product page and our documentation.
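
As a sketch, the new constructs can be combined in a single ruleset and registered through the Glue API (column and table names are illustrative, and the exact rule syntax should be checked against the DQDL guide):

    import boto3

    glue = boto3.client("glue")

    # NOT, missing-value keywords, a composite rule, and a WHERE filter together.
    ruleset = """
    Rules = [
        ColumnValues "status" != NULL,
        not ColumnValues "country" in ["XX", "ZZ"],
        (IsComplete "order_id") and (IsUnique "order_id"),
        ColumnValues "comment" != WHITESPACES_ONLY,
        Completeness "ship_date" > 0.9 where "status = 'SHIPPED'"
    ]
    """

    glue.create_data_quality_ruleset(
        Name="orders-quality-checks",
        TargetTable={"TableName": "orders", "DatabaseName": "sales"},
        Ruleset=ruleset,
    )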

Amazon SageMaker Canvas announces new capabilities for time series forecasting models

Published Date: 2024-06-28 17:00:00

Amazon SageMaker Canvas announces new capabilities to build, evaluate, and deploy time series forecasting models, providing greater flexibility and ease of use to build forecasting applications. Amazon SageMaker Canvas is a no-code workspace that empowers analysts and citizen data scientists to build, customize, and deploy machine learning (ML) models to generate accurate predictions. To build time series forecasting models, SageMaker Canvas uses up to six built-in algorithms to create a custom ensemble of models for each item in your time series, resulting in highly accurate models. Starting today, SageMaker Canvas provides visibility into these algorithms and the flexibility to choose any combination of these algorithms to build your time series forecasting model. Once the model is built, SageMaker Canvas provides a leaderboard with a ranked list of model candidates including a recommendation for the best model based on your dataset and the problem to be solved. You can review key performance metrics for each model on the leaderboard and select a model of your choice. The selected model can then be deployed into production on an Amazon SageMaker real-time inference endpoint for use in applications outside SageMaker Canvas. To access the algorithm selection, model leaderboard, and direct deployment to real-time endpoint capabilities for time series forecasting, log out and log back in to SageMaker Canvas. The new capabilities are now available in all AWS regions where SageMaker Canvas is supported. To learn more, refer to the SageMaker Canvas product documentation.

AWS Elemental MediaTailor now supports CMAF for dynamic ad transcoding

Published Date: 2024-06-28 17:00:00

AWS Elemental MediaTailor now supports Common Media Application Format (CMAF) segments for personalized HLS streams and will automatically transcode ad creatives to match. Previously, if you wanted to serve CMAF ad segments, you had to create a custom transcode profile configuration. MediaTailor will now detect when the content source is CMAF or ISOBMFF in a DASH or HLS stream and dynamically transcode the ad creatives to match the program source with no additional user configuration required. There is no additional cost for CMAF ad transcoding. AWS Elemental MediaTailor is a channel assembly and personalized ad-insertion service for video providers to create linear over-the-top (OTT) channels using existing video content. The service then lets you monetize those channels—or other live streams—with personalized advertising across the broadest range of devices with a seamless viewer experience. MediaTailor functions independently or as part of AWS Media Services, a family of services that form the foundation of cloud-based workflows. Visit the AWS region table for a full list of AWS Regions where AWS Elemental MediaTailor is available. To learn more about MediaTailor, please visit the product page.

Amazon CodeCatalyst now allows conversion of source repositories to custom blueprints

Published Date: 2024-06-28 17:00:00

Today, AWS announces a new capability that enables customers to convert an existing source repository into a custom blueprint in Amazon CodeCatalyst. Custom blueprints give teams the ability to define and propagate best practices for application code, workflows, and infrastructure. However, many customers have already defined these best practices in one or more existing source repositories. Previously, they needed to create a custom blueprint, and manually copy relevant artifacts from their existing source repository into the blueprint project. Now customers have a one-click option to convert an existing repository to a custom blueprint. For more information, see Converting source repositories to custom blueprints. Teams can use these custom blueprints to create CodeCatalyst projects or add functionality to existing projects. As the blueprint gets updated with the latest best practices or new options, it can regenerate the relevant parts of your codebase in projects containing that blueprint. For more information, see the CodeCatalyst Blueprints webpage and blueprints documentation.

AWS CodeBuild build timeout limit increased to 36 hours

Published Date: 2024-06-28 17:00:00

AWS CodeBuild now enables customers to increase their build timeout up to 36 hours compared to the prior limit of 8 hours. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. This setting represents the maximum amount of time before CodeBuild stops a build request if it is not complete. With this launch, customers with workloads requiring longer timeouts, such as large automated test suites or embedded machine builds, can leverage CodeBuild. The increased timeout limit is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To learn more about CodeBuild configurations, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
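
A sketch of raising an existing project's timeout with boto3 (36 hours is 2,160 minutes; the project name is a placeholder):

    import boto3

    codebuild = boto3.client("codebuild")

    # Raise the build timeout to the new 36-hour maximum.
    codebuild.update_project(
        name="my-long-running-build",
        timeoutInMinutes=36 * 60,
    )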

AWS Backup support for Amazon S3 is now available in AWS Canada West (Calgary) Region

Published Date: 2024-06-27 21:50:00

Today, we are announcing the availability of AWS Backup support for Amazon S3 in Canada West (Calgary) Region. AWS Backup is a policy-based, fully managed and cost-effective solution that enables you to centralize and automate data protection of Amazon S3 along with other AWS services (spanning compute, storage, and databases) and third-party applications. Together with AWS Organizations, AWS Backup enables you to centrally deploy policies to configure, manage, and govern your data protection activity. With this launch, AWS Backup support for Amazon S3 is available in all AWS commercial, AWS China, and AWS GovCloud (US) Regions where AWS Backup is available. For more information on regional availability and pricing, see AWS Backup pricing page. To learn more about AWS Backup support for Amazon S3, visit the product page and technical documentation. To get started, visit the AWS Backup console.

Amazon QuickSight simplifies building pixel-perfect reports with Repeating Sections

Published Date: 2024-06-27 21:00:00

Today, Amazon QuickSight announces the addition of Repeating Sections capability within Pixel-perfect reports. The new feature gives QuickSight Authors the ability to configure report sections to automatically repeat based on the values of one or more dimensions in their data. When defining a repeating section, QuickSight users can select which dimension(s) the section should repeat for, such as state, country, or product category. The section will then dynamically generate a copy for each unique value in the selected dimension(s). For example, a section could repeat once for each state so that separate charts and text are generated specifically for California, Texas, New York, and other states. Repeating sections make it easy to automatically generate customized views of data across different groups or categories with minimal effort.

Amazon DataZone introduces API-driven, OpenLineage-compatible data lineage visualization in preview

Published Date: 2024-06-27 20:20:00

Amazon DataZone introduces data lineage in preview, helping customers visualize lineage events from OpenLineage-enabled systems or through API and trace data movement from source to consumption. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls. Amazon DataZone's data lineage feature captures and visualizes the transformations of data assets and columns, providing a view into the data movement from source to consumption. Using Amazon DataZone's OpenLineage-compatible API, domain administrators and data producers can capture and store lineage events beyond what is available in Amazon DataZone, including transformations in Amazon S3, AWS Glue, and other services. Data consumers in Amazon DataZone can gain confidence in an asset's origin from the comprehensive view of its lineage while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, Amazon DataZone versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, essential for troubleshooting, auditing, and validating the integrity of data assets.
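
Lineage events follow the OpenLineage run-event format and are posted through the Amazon DataZone API. A sketch with boto3 (domain ID and event fields are illustrative; real events carry input/output facets):

    import json
    from datetime import datetime, timezone

    import boto3

    datazone = boto3.client("datazone")

    # A minimal OpenLineage RunEvent; production events include dataset facets.
    event = {
        "eventType": "COMPLETE",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"},
        "job": {"namespace": "my-pipeline", "name": "orders_transform"},
        "inputs": [],
        "outputs": [],
    }

    datazone.post_lineage_event(
        domainIdentifier="dzd_xxxxxxxxxxxx",  # placeholder domain ID
        event=json.dumps(event),
    )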

Amazon Managed Service for Apache Flink now supports Apache Flink 1.19

Published Date: 2024-06-27 20:10:00

Amazon Managed Service for Apache Flink now supports Apache Flink 1.19. This version includes new capabilities in the SQL API such as state TTL configuration and session window support. Flink 1.19 also includes Python 3.11 support, trace reporters for job restarts and checkpointing, and more. You can use in-place version upgrades for Apache Flink to adopt the Apache Flink 1.19 runtime for a simple and faster upgrade to your existing application. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon S3, custom integrations, and more using built-in connectors. Create or update an Amazon Managed Service for Apache Flink application in the Amazon Managed Service for Apache Flink console.
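
In-place version upgrades go through UpdateApplication. A sketch with boto3 (application name is a placeholder; the runtime enum value follows the service's existing FLINK-x_y convention and should be confirmed in the API reference):

    import boto3

    kda = boto3.client("kinesisanalyticsv2")

    # Fetch the current version ID, then switch the runtime to Flink 1.19.
    app = kda.describe_application(ApplicationName="my-flink-app")
    kda.update_application(
        ApplicationName="my-flink-app",
        CurrentApplicationVersionId=app["ApplicationDetail"]["ApplicationVersionId"],
        RuntimeEnvironmentUpdate="FLINK-1_19",
    )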

Amazon IVS Real-Time Streaming now supports up to 25,000 viewers

Published Date: 2024-06-27 17:30:00

The subscriber limit for the Amazon IVS Real-Time Streaming capability can now be raised beyond the default of 10,000 per stage in an AWS Region. You can request an increase to up to 25,000 subscribers per stage. With this enhancement, you can now reach an audience that is more than double the previous size, all engaging in the same real-time stream. The increased limit for subscribers per stage is supported in all AWS Regions where Amazon IVS is available. You can request a quota increase by using the Service Quotas console. To learn more about Amazon IVS Real-Time Streaming quotas, please refer to the service documentation. Amazon IVS is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
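
Quota increases can also be requested programmatically. A sketch with boto3 that looks the quota up by name rather than hard-coding a quota code (the name match below is an assumption; verify the exact quota name in the Service Quotas listing for ivs):

    import boto3

    quotas = boto3.client("service-quotas")

    # Find the Real-Time Streaming subscribers-per-stage quota, then request 25,000.
    for quota in quotas.list_service_quotas(ServiceCode="ivs")["Quotas"]:
        if "subscriber" in quota["QuotaName"].lower():
            quotas.request_service_quota_increase(
                ServiceCode="ivs",
                QuotaCode=quota["QuotaCode"],
                DesiredValue=25000.0,
            )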

AWS Blu Insights accelerates migrations with new AI capabilities

Published Date: 2024-06-27 17:00:00

We are excited to announce new capabilities for accelerating AWS Mainframe Modernization with machine learning and generative AI assistance. Using the latest generative AI models in Amazon Bedrock and AWS Machine Learning services like Amazon Translate, AWS Blu Insights makes it simple to automatically generate code and file descriptions, transform code from mainframe languages, and query projects using natural language.

Customers can now automatically generate summaries of source code files and snippets, making it much easier to understand legacy mainframe applications. If a codebase has comments in languages other than English, with a click in the console, customers can view a translation of the comments into English. Blu Insights also makes it much faster to find information within files. Now, customers can filter data in projects using natural language that Blu Insights automatically converts to specific Blu Age queries. Using GenAI, Blu Insights also speeds up common tasks by classifying codebase files that don’t have an extension, converting source files written in languages like Rexx and C, and creating previews of mainframe BMS screens. Finally, new project management features driven by GenAI simplify project management by taking natural language text like “schedule a meeting” and automating the creation of scheduled events to save time and improve collaboration. Customers can now take advantage of automatically generated Activity Summaries and Activity Audits, which include the actions taken by AI in a Blu Age project for auditing and compliance purposes. To learn more, visit the AWS Mainframe Modernization service and documentation pages.

Amazon EKS introduces cluster creation flexibility for networking add-ons

Published Date: 2024-06-27 17:00:00

Starting today, Amazon Elastic Kubernetes Service (EKS) provides the flexibility to create Kubernetes clusters without the default networking add-ons, enabling you to easily install open source or third party alternative add-ons or self-manage default networking add-ons using any Kubernetes lifecycle management tool. Every EKS cluster automatically comes with default networking add-ons including Amazon VPC CNI, CoreDNS, and kube-proxy providing critical functionality that enables pod and service operations for EKS clusters. EKS also allows you to bring open source or third party add-ons and tools that manage their lifecycle. With today’s launch, you can skip the installation of default networking add-ons when creating the cluster, making it easier to install alternative add-ons. This also simplifies self-managing default networking add-ons using any lifecycle management tool like Helm or Kustomize, without needing to first remove the Kubernetes manifests of the add-ons from the cluster.
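
A sketch of creating a cluster without the default networking add-ons with boto3 (role ARN and subnet IDs are placeholders; the flag name reflects this launch and should be confirmed in the CreateCluster API reference):

    import boto3

    eks = boto3.client("eks")

    eks.create_cluster(
        name="no-default-cni",
        roleArn="arn:aws:iam::111122223333:role/eksClusterRole",
        resourcesVpcConfig={
            "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
        },
        # Skip installing VPC CNI, CoreDNS, and kube-proxy so you can bring your own.
        bootstrapSelfManagedAddons=False,
    )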

Amazon ECR supports Open Container Initiative Image and Distribution specification version 1.1

Published Date: 2024-06-27 17:00:00

Today, Amazon Elastic Container Registry (ECR) announced that it supports Open Container Initiative (OCI) Image and Distribution specification version 1.1, which includes support for Reference Types, simplifying the storage, discovery, and retrieval of artifacts related to a container image. AWS Container Services customers can now easily store, discover, and retrieve artifacts such as image signatures and software bills of materials (SBOMs) as defined by OCI 1.1 for a variety of supply chain security use cases such as image signing and vulnerability auditing. Through ECR’s support of Reference Types, customers now have a simple user experience for distributing and managing artifacts related to these use cases, consistent with how they manage container images today.

OCI Reference Types support in ECR allows customers to distribute artifacts in their repositories alongside their respective images. Artifacts for a specific image are discovered through their reference relationship, and can be pulled the same way images are pulled. In addition, ECR’s replication feature supports referrers, copying artifacts to destination regions and accounts so they are ready to use alongside replicated images. ECR Lifecycle Policies also support referring artifacts by deleting references when a subject image is deleted as a result of a lifecycle policy rule expire action, making management of referring artifacts simple with no additional configuration. OCI 1.1 is now supported in ECR in all AWS commercial regions and the AWS GovCloud (US) Regions. OCI 1.1 is also supported in the Amazon ECR Public registry. To learn more, please visit our documentation.
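
Artifacts are discovered through the OCI 1.1 referrers API on the registry endpoint. A sketch that calls it directly (repository name and digest are placeholders; the endpoint path comes from the OCI Distribution 1.1 specification):

    import boto3
    import requests

    ecr = boto3.client("ecr", region_name="us-east-1")

    # ECR auth tokens are base64("AWS:<password>") and work as HTTP Basic credentials.
    auth = ecr.get_authorization_token()["authorizationData"][0]
    registry = auth["proxyEndpoint"].removeprefix("https://")
    headers = {"Authorization": f"Basic {auth['authorizationToken']}"}

    digest = "sha256:" + "0" * 64  # digest of the subject image (placeholder)
    resp = requests.get(
        f"https://{registry}/v2/my-repo/referrers/{digest}", headers=headers
    )
    # The response is an OCI image index whose manifests reference the subject image.
    for manifest in resp.json().get("manifests", []):
        print(manifest.get("artifactType"), manifest["digest"])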

Announcing Amazon WorkSpaces Pools, a new feature of Amazon WorkSpaces

Published Date: 2024-06-27 17:00:00

Amazon Web Services (AWS) announces a new feature of Amazon WorkSpaces, called Amazon WorkSpaces Pools, that helps customers save costs by sharing a pool of virtual desktops across a group of users who get a fresh desktop every time they log in. This new feature provides customers the flexibility and choice to support a wide range of use cases, including training labs, contact centers, and other shared environments. Some user settings, like bookmarks and files stored in a central storage repository such as Amazon S3 or Amazon FSx, can be saved for improved personalization.

WorkSpaces Pools also simplifies management across a customer’s WorkSpaces environment by providing a single console and set of clients to manage the various desktop hardware configurations, storage, and applications for the user, including the ability to manage their existing Microsoft 365 Apps for enterprise. Customers use Application Auto Scaling to automatically scale a pool of virtual desktops based on real-time usage metrics or predefined schedules. WorkSpaces Pools offers pay-as-you-go hourly pricing, providing significant savings.

With the launch of WorkSpaces Pools, customers now have the option to choose between WorkSpaces Personal and WorkSpaces Pools. Customers can even opt for a blend of both, with the ease of managing from a single AWS Management Console. WorkSpaces Pools is now available with the usual WorkSpaces bundles, including Value, Standard, Performance, Power, and PowerPro. For Region availability details, see AWS Regions and Availability Zones for WorkSpaces Pools. Learn more here.
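
A sketch of creating a pool with boto3 (bundle and directory IDs are placeholders; check the CreateWorkspacesPool API reference for the full parameter set):

    import boto3

    workspaces = boto3.client("workspaces")

    workspaces.create_workspaces_pool(
        PoolName="training-lab",
        Description="Shared desktops for the training lab",
        BundleId="wsb-xxxxxxxxx",   # placeholder bundle ID
        DirectoryId="d-xxxxxxxxxx",  # placeholder directory ID
        # Number of sessions kept ready for users; scaling policies adjust this.
        Capacity={"DesiredUserSessions": 10},
    )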

Updates and Expansion to the AWS Well-Architected Framework and Lens Catalog

Published Date: 2024-06-27 17:00:00

AWS is excited to announce updates to the Well-Architected Framework and Lens Catalog. This latest update brings expanded guidance on architectural best practices, empowering customers to build and maintain optimized, secure, and resilient workloads in the cloud. The Framework updates provide more recommendations for AWS services, observability, generative AI, and operating models. We also refreshed the lists of resources and the overall Framework structure. This update reduces redundancies, enhances consistency, and empowers customers to more accurately identify and address risks. We also expanded the Lens Catalog in the Well-Architected Tool to include additional industry-specific best practices. The Lens Catalog now includes the new Financial Services Industry Lens and updates to the Mergers and Acquisitions Lens. Additionally, we made significant updates to the Change Enablement in the Cloud whitepaper. With these updates to lenses and guidance, customers can optimize, secure, and align their cloud architectures based on their unique requirements. By leveraging the updated Well-Architected Framework and Lens Catalog, customers can follow the most current and comprehensive architectural best practices to confidently design, deploy, and operate their workloads in the cloud. To learn more about the AWS Well-Architected Framework and Lens Catalog updates, visit the AWS Well-Architected Framework documentation and explore the updated lenses in the Well-Architected Tool.

PostgreSQL 17 Beta 2 is now available in Amazon RDS Database Preview Environment

Published Date: 2024-06-27 17:00:00

Amazon RDS for PostgreSQL 17 Beta 2 is now available in the Amazon RDS Database Preview Environment, allowing you to evaluate the pre-release of PostgreSQL 17 on Amazon RDS for PostgreSQL. You can deploy PostgreSQL 17 Beta 2 in the Amazon RDS Database Preview Environment that has the benefits of a fully managed database. PostgreSQL 17 includes updates to vacuuming that reduce memory usage, improve time to finish vacuuming, and show progress of vacuuming indexes. With PostgreSQL 17, you no longer need to drop logical replication slots when performing a major version upgrade. PostgreSQL 17 continues to build on the SQL/JSON standard, adding support for JSON_TABLE features that can convert JSON to a standard PostgreSQL table. The MERGE command now supports the RETURNING clause, letting you further work with modified rows. PostgreSQL 17 also includes general improvements to query performance and adds more flexibility to partition management with the ability to SPLIT/MERGE partitions. Please refer to the PostgreSQL community announcement for more details.

Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the Preview Environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment. Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.

Amazon RDS Multi-AZ deployment with two readable standbys now supports snapshot export to S3

Published Date: 2024-06-27 17:00:00

Amazon Relational Database Service (Amazon RDS) Multi-AZ deployments with two readable standbys now support export of snapshot data to an Amazon S3 bucket. Amazon RDS Multi-AZ deployments with two readable standbys are ideal when your workloads require lower write latency and more read capacity. In addition, this deployment option supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or open source tools such as AWS Advanced JDBC Driver, PgBouncer, or ProxySQL. You can now export snapshot data from Amazon RDS Multi-AZ deployments with two readable standbys to an Amazon S3 bucket. The export process runs in the background and doesn't affect the performance of your cluster. When you export a DB snapshot, Amazon RDS extracts data from the snapshot and stores it in an Amazon S3 bucket. The data is stored in an Apache Parquet format that is compressed and consistent. After the data is exported, you can analyze the exported data directly through tools like Amazon Athena or Amazon Redshift Spectrum. See the Amazon RDS User Guide for a full list of supported Regions and engine versions. Learn more about Amazon RDS Multi-AZ deployments in the AWS News Blog. Create or update fully managed Amazon RDS Multi-AZ databases with two readable standby instances in the Amazon RDS Management Console.
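
A sketch of exporting a cluster snapshot with boto3 (ARNs, bucket, and identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    rds.start_export_task(
        ExportTaskIdentifier="orders-snapshot-export",
        SourceArn="arn:aws:rds:us-east-1:111122223333:cluster-snapshot:orders-snap",
        S3BucketName="my-export-bucket",
        IamRoleArn="arn:aws:iam::111122223333:role/rds-s3-export-role",
        # A KMS key is required to encrypt the exported Parquet files.
        KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    )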

Amazon Managed Service for Apache Flink introduces two new APIs to query operations on Flink applications

Published Date: 2024-06-26 21:05:00

Amazon Managed Service for Apache Flink introduces the ListApplicationOperations and DescribeApplicationOperation APIs for visibility into operations that were performed on your application. These APIs provide details about when an operation was initiated, its current status, whether it succeeded or failed, whether it triggered a rollback, and more, so that you can take follow-up action. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.
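
A sketch of the new audit flow with boto3 (the application name is a placeholder, and the response field names are assumptions; verify them against the kinesisanalyticsv2 API reference):

    import boto3

    kda = boto3.client("kinesisanalyticsv2")

    # List recent operations, then drill into the most recent one.
    ops = kda.list_application_operations(ApplicationName="my-flink-app")
    for op in ops["ApplicationOperationInfoList"]:
        print(op)  # includes operation type, status, and timestamps

    detail = kda.describe_application_operation(
        ApplicationName="my-flink-app",
        OperationId=ops["ApplicationOperationInfoList"][0]["OperationId"],
    )
    print(detail)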

Amazon Managed Service for Apache Flink now supports system-rollback

Published Date: 2024-06-26 21:00:00

Amazon Managed Service for Apache Flink introduces the system-rollback feature to automatically revert your application to the previous running application version during Flink job submission if there are code or configuration errors. You can now opt in to this feature for improved application uptime. You may encounter errors such as insufficient permissions or incompatible savepoints when you perform application updates, Flink version upgrades, or scaling actions. System-rollback identifies these errors during job submission and prevents a bad update to your application. This gives you higher confidence in rolling out changes to your application faster. Amazon Managed Service for Apache Flink makes it easier to transform and analyze streaming data in real time with Apache Flink. Apache Flink is an open source framework and engine for processing data streams. Amazon Managed Service for Apache Flink reduces the complexity of building and managing Apache Flink applications and integrates with Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams, Amazon OpenSearch Service, Amazon DynamoDB streams, Amazon Simple Storage Service (Amazon S3), custom integrations, and more using built-in connectors.

Amazon Route 53 Application Recovery Controller zonal autoshift available in AWS GovCloud (US) Regions

Published Date: 2024-06-26 20:15:00

Amazon Route 53 Application Recovery Controller (Route 53 ARC) zonal autoshift is now generally available in the AWS GovCloud (US-East and US-West) Regions. AWS customers and AWS Partners who operate in the AWS GovCloud (US) Regions can now use zonal autoshift, a feature you can enable to safely and automatically shift an application’s traffic away from an Availability Zone (AZ) when AWS identifies a potential failure affecting that AZ. For failures such as power and networking outages, zonal autoshift improves the availability of your application by shifting your application traffic away from an affected AZ to healthy AZs. To get started, you can enable zonal autoshift for Application Load Balancer and Network Load Balancer, with cross-zone configuration disabled, using the console, SDK or CLI, or an Amazon CloudFormation template. Once enabled, Amazon will automatically shift application traffic away from an affected AZ, and shift it back after the failure is resolved. Zonal autoshift includes practice runs, a feature that proactively tests if your application has sufficient capacity in each AZ to operate normally even after shifting away from an affected AZ. You configure practice runs to automatically apply zonal shifts, which regularly check if your application can tolerate losing capacity in an AZ.
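
A sketch of enabling autoshift for a load balancer with boto3 (the resource ARN is a placeholder, and a practice-run configuration must already exist for the resource):

    import boto3

    arc = boto3.client("arc-zonal-shift")

    arc.update_zonal_autoshift_configuration(
        resourceIdentifier=(
            "arn:aws:elasticloadbalancing:us-gov-east-1:111122223333:"
            "loadbalancer/net/my-nlb/1234567890abcdef"
        ),
        zonalAutoshiftStatus="ENABLED",
    )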

Amazon OpenSearch Ingestion adds support for ingesting streaming data from Confluent Cloud

Published Date: 2024-06-26 19:30:00

Amazon OpenSearch Ingestion now allows you to seamlessly ingest streaming data from Confluent Cloud Kafka clusters into your Amazon OpenSearch Service managed clusters or Serverless collections without the need for any third-party data connectors. With this integration, you can now use Amazon OpenSearch Ingestion to perform near-real-time aggregations, sampling and anomaly detection on data ingested from Confluent Cloud, helping you to build efficient data pipelines to power your complex observability use cases.

Amazon OpenSearch Ingestion pipelines can consume data from one or more topics in a Confluent Kafka cluster and transform the data before writing it to Amazon OpenSearch Service or Amazon S3. While reading data from Confluent Kafka clusters via Amazon OpenSearch Ingestion, you can configure the number of consumers per topic and tune different fetch parameters for high and low priority data. You can also optionally use Confluent Schema Registry to specify your data schema to dynamically read data at ingest time. You can also check out this blog post by Confluent to learn more about this feature.
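
A sketch of a pipeline definition submitted through the OSIS API (the YAML keys for the Kafka source and sink are illustrative; consult the OpenSearch Ingestion blueprints for the exact schema):

    import boto3

    osis = boto3.client("osis")

    # The pipeline body is Data Prepper-style YAML; keys below are illustrative.
    pipeline_body = """
    version: "2"
    confluent-pipeline:
      source:
        kafka:
          bootstrap_servers: ["pkc-xxxxx.us-east-1.aws.confluent.cloud:9092"]
          topics:
            - name: "orders"
      sink:
        - opensearch:
            hosts: ["https://search-mydomain.us-east-1.es.amazonaws.com"]
            index: "orders"
    """

    osis.create_pipeline(
        PipelineName="confluent-to-opensearch",
        MinUnits=1,
        MaxUnits=4,
        PipelineConfigurationBody=pipeline_body,
    )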

Amazon CloudWatch Logs now supports account-level subscription filters in 4 additional Regions

Published Date: 2024-06-26 17:00:00

Amazon CloudWatch Logs is excited to announce support for creating account-level subscription filters using the put-account-policy API in 4 additional Regions. This new capability enables you to deliver real-time log events that are ingested into Amazon CloudWatch Logs to an Amazon Kinesis Data Stream, Amazon Kinesis Data Firehose, or AWS Lambda for custom processing, analysis, or delivery to other destinations using a single account-level subscription filter. Customers often need to forward all or a subset of logs to AWS services such as Amazon OpenSearch Service for various analytical use cases or Amazon Kinesis Data Firehose for further streaming to other systems. Previously, customers had to set up a subscription filter for each log group. With account-level subscription filters, customers can egress logs ingested into multiple or all log groups by setting up a single subscription filter policy for the entire account. This saves time and reduces management overhead. The account-level subscription filter applies to both existing log groups and any future log groups that match the configuration. Each account can create one account-level subscription filter. CloudWatch Logs account-level subscription filters are now available in the AWS GovCloud (US-East), AWS GovCloud (US-West), Israel (Tel Aviv), and Canada West (Calgary) Regions. To learn more, please refer to the documentation on CloudWatch Logs Account Level Subscription Filters.
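
A sketch of creating an account-level subscription filter with boto3 (destination and role ARNs are placeholders; the policy document fields should be checked against the PutAccountPolicy documentation):

    import json

    import boto3

    logs = boto3.client("logs")

    logs.put_account_policy(
        policyName="forward-all-logs",
        policyType="SUBSCRIPTION_FILTER_POLICY",
        scope="ALL",  # applies to existing and future log groups
        policyDocument=json.dumps({
            "DestinationArn": "arn:aws:kinesis:us-east-1:111122223333:stream/log-stream",
            "RoleArn": "arn:aws:iam::111122223333:role/cwl-to-kinesis-role",
            "FilterPattern": "",  # empty pattern matches every log event
        }),
    )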

AWS CloudShell now supports Amazon Virtual Private Cloud (VPC)

Published Date: 2024-06-26 17:00:00

Today, AWS announces the general availability of Amazon Virtual Private Cloud (VPC) support for AWS CloudShell. This allows you to create CloudShell environments in a VPC, which enables you to use CloudShell securely within the same subnet as other resources in your VPC without the need for additional network configuration. Prior to this release, there was no mechanism to use CloudShell for controlling the network flow to the internet. This release allows you to securely and conveniently launch CloudShell in your VPC and access the resources within it. AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources. CloudShell is pre-authenticated with your console credentials. Common development tools are pre-installed so no local installation or configuration is required. With CloudShell you can run scripts with the AWS Command Line Interface (AWS CLI), define infrastructure with the AWS Cloud Development Kit (AWS CDK), experiment with AWS service APIs using the AWS SDKs, or use a range of other tools to increase your productivity.

To learn more about VPC connectivity in AWS CloudShell see our documentation.

Amazon Athena Provisioned Capacity now available in South America (São Paulo) and Europe (Spain)

Published Date: 2024-06-26 17:00:00

Today, Amazon Athena made Provisioned Capacity available in the South America (São Paulo) and Europe (Spain) regions. Provisioned Capacity is a feature of Athena that allows you to run SQL queries on fully-managed, dedicated serverless resources for a fixed price and no long-term commitments. Using Provisioned Capacity, you can selectively assign processing capacity to queries and control workload performance characteristics such as query concurrency and cost. You can scale capacity at any time, and pay only for the amount of capacity you need and time it is active in your account. Athena is a serverless, interactive query service that makes it possible to analyze petabyte-scale data with ease and flexibility. Provisioned Capacity provides workload management capabilities that help you prioritize, isolate, and scale your interactive query workloads. For example, use Provisioned Capacity if you want to scale capacity to run many queries at the same time or to isolate important queries from others running in your account. To get started, use the Athena console, AWS SDK, or CLI to request capacity for your account and select the workgroups whose queries you want to use the capacity. Provisioned Capacity is also available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Singapore), Europe (Ireland), Europe (Stockholm). To learn more, see Managing query processing capacity in the Amazon Athena User Guide. To learn more about pricing, visit the Athena pricing page.
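
A sketch of requesting capacity and assigning workgroups with boto3 (reservation name, workgroup names, and the DPU count are illustrative):

    import boto3

    athena = boto3.client("athena")

    # Reserve 24 DPUs, then route two workgroups' queries onto that capacity.
    athena.create_capacity_reservation(Name="bi-capacity", TargetDpus=24)
    athena.put_capacity_assignment_configuration(
        CapacityReservationName="bi-capacity",
        CapacityAssignments=[
            {"WorkGroupNames": ["bi-dashboards", "adhoc-analytics"]},
        ],
    )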

Amazon Linux announces availability of AL2023.5 with new versions of PHP and Microsoft .NET

Published Date: 2024-06-26 17:00:00

Today we are announcing the availability of the latest quarterly update to AL2023, containing the latest versions of PHP and .NET, along with the IPA client and mod-php. Customers can take advantage of newer versions of PHP and .NET to ensure their applications are secure and efficient. Additionally, AL2023.5 includes packages like mod-php and the IPA client that can improve web server performance and simplify identity management integration, respectively, further streamlining development workflows and enhancing overall system efficiency. To learn more about other features and capabilities in AL2023.5, see the release notes. Amazon Linux 2023 is generally available in all AWS Regions, including the AWS GovCloud (US) Regions and the China Regions. To learn more about Amazon Linux 2023, see the AWS documentation.

EventBridge Scheduler adds more universal targets including Amazon Bedrock

Published Date: 2024-06-26 17:00:00

EventBridge Scheduler adds additional universal targets with 650+ more AWS API actions, bringing the total to 7,000+, including Amazon Bedrock.

EventBridge Scheduler allows you to create and run millions of scheduled events and tasks across AWS services without provisioning or managing the underlying infrastructure. EventBridge Scheduler supports one-time and recurring schedules that can be created using common scheduling expressions such as cron, rate, and specific time, with support for time zones and daylight saving time. Our support for additional targets allows you to automate more use cases, such as scheduling your Bedrock model to run inference for text models, image models, and embedding models at a specific point in time.
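
A sketch of a recurring schedule that calls a Bedrock API through the universal target (the aws-sdk ARN format is the documented pattern, but the exact service and action segment for Bedrock, and the shape of the Input parameters, are assumptions to confirm in the Scheduler docs):

    import json

    import boto3

    scheduler = boto3.client("scheduler")

    scheduler.create_schedule(
        Name="nightly-inference",
        ScheduleExpression="cron(0 2 * * ? *)",  # 02:00 UTC daily
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            # Universal target: arn:aws:scheduler:::aws-sdk:<service>:<apiAction>
            "Arn": "arn:aws:scheduler:::aws-sdk:bedrockruntime:invokeModel",
            "RoleArn": "arn:aws:iam::111122223333:role/scheduler-bedrock-role",
            # API parameters for the scheduled call; shape is an assumption.
            "Input": json.dumps({"ModelId": "amazon.titan-text-express-v1"}),
        },
    )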

Amazon Redshift Serverless with lower base capacity available in additional regions

Published Date: 2024-06-26 17:00:00

Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPUs) in the AWS Europe (Stockholm) and US West (Northern California) regions. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads you run in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 32 RPUs. With the new lower base capacity minimum of 8 RPUs, you now have even more flexibility to support a diverse set of workloads of small to large complexity based on your price performance requirements. You can increase or decrease the base capacity in increments of 8 RPUs. Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when a workload needs a small amount of compute. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
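
A sketch of creating a workgroup at the new minimum with boto3 (the namespace must already exist; names are placeholders):

    import boto3

    redshift_serverless = boto3.client("redshift-serverless")

    redshift_serverless.create_workgroup(
        workgroupName="dev-workgroup",
        namespaceName="dev-namespace",
        baseCapacity=8,  # new minimum; adjustable in increments of 8 RPUs
    )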

AWS Control Tower introduces an API to discover landing zone operations

Published Date: 2024-06-26 17:00:00

AWS Control Tower customers can now programmatically retrieve a list of all landing zone operations that have completed in the past 90 days, including create, update, reset, and delete. The output contains summary information like the operation identifier, operation type, and status to help identify initiated operations. Until today, customers could retrieve landing zone operations only by requesting them by operation identifier. API users on the same team could not view operations performed by others in the same landing zone, resulting in lost context and reduced visibility into all operations. Now customers can easily view, audit, and troubleshoot operations for their entire landing zone to avoid duplicate operations and improve overall operational efficiency. To learn more about these APIs, review configurations for landing zone APIs and the API References in the AWS Control Tower User Guide. The new APIs are available in AWS Regions where AWS Control Tower is available. For a list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
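
A sketch of the new listing flow with boto3 (the response field name follows the announcement's wording; confirm it in the API reference):

    import boto3

    controltower = boto3.client("controltower")

    # Summaries include the operation identifier, type, and status.
    ops = controltower.list_landing_zone_operations()
    for op in ops["landingZoneOperations"]:
        print(op)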

AI21 Labs' Jamba-Instruct model now available in Amazon Bedrock

Published Date: 2024-06-25 21:30:00

AI21 Labs’ Jamba-Instruct, a powerful instruction-following large language model, is now available in Amazon Bedrock. Fine-tuned for instruction following and built for reliable commercial use, Jamba-Instruct can engage in open-ended dialogue, understand context and subtext, and complete a wide variety of tasks based on natural language instructions. With its 256K context window, Jamba-Instruct has the capability to ingest the equivalent of an 800-page novel or an entire company's financial filings for a given fiscal year. This large context window allows Jamba-Instruct to answer questions and produce summaries that are grounded in the provided inputs, eliminating the need for manual segmentation of documents in order to fit smaller context windows. With its strong reasoning and analysis capabilities, Jamba-Instruct can break down complex problems, gather relevant information, and provide structured outputs. The model is ideal for common enterprise use cases such as enabling Q&A on call transcripts, summarizing key points from documents, building chatbots, and more. Whether you need assistance with coding, writing, research, analysis, creative tasks, or general task assistance, Jamba-Instruct is a powerful model that can streamline your workflow and accelerate time to production for your gen AI enterprise applications.
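
A sketch of invoking the model through the Bedrock Converse API (the model ID shown is the expected AI21 identifier and should be confirmed in the Bedrock console):

    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    response = bedrock.converse(
        modelId="ai21.jamba-instruct-v1:0",  # assumed model ID
        messages=[
            {
                "role": "user",
                "content": [{"text": "Summarize the key points of this transcript: ..."}],
            }
        ],
    )
    print(response["output"]["message"]["content"][0]["text"])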

Amazon CodeCatalyst now supports GitLab.com source code repositories

Published Date: 2024-06-25 21:00:00

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitLab.com in CodeCatalyst projects. This allows customers to use GitLab.com repositories with CodeCatalyst’s features such as its cloud IDE (Development Environments), Amazon Q feature development, and custom and public blueprints. Customers can also trigger CodeCatalyst workflows based on events in GitLab.com, view the status of CodeCatalyst workflows back in GitLab.com, and even block GitLab.com pull request merges based on the status of CodeCatalyst workflows. Customers want the flexibility to use source code repositories hosted in GitLab.com, without the need to migrate to CodeCatalyst to use its functionality. Migration is a long process, and customers want to evaluate CodeCatalyst and its capabilities using their own code repositories before they decide to migrate. Support for popular source code providers such as GitLab.com is the top customer ask for CodeCatalyst. Now customers can use the capabilities of CodeCatalyst without the need for migration of source code from GitLab.com.

Amazon MSK supports in-place upgrades from M5 and T3 instance types to Graviton3-based M7g

Published Date: 2024-06-25 17:00:00

You can now upgrade your Amazon Managed Streaming for Apache Kafka (Amazon MSK) provisioned clusters running on x86-based M5 or T3 instances and replace them with AWS Graviton3-based M7g instances with a single click. In-place upgrades allow you to seamlessly switch your existing provisioned clusters to the M7g instance type for better price performance, while continuing to serve reads and writes for your connecting client applications. Switching to AWS Graviton3 processor-based M7g instances on Amazon MSK provisioned clusters allows you to achieve up to 24% compute cost savings and up to 29% higher write and read throughput over comparable MSK clusters running on M5 instances. Additionally, these instances lower energy consumption by up to 60% compared to similar instances, making your Kafka clusters more environmentally sustainable. In-place upgrades to M7g instances are now available in all AWS regions where MSK supports M7g. Please refer to our blog for more information on the price/performance improvements of M7g instances and the Amazon MSK pricing page for information on pricing. To get started, you can update your existing clusters to M7g brokers using the AWS Management Console, and read our developer guide for more information.
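
A sketch of the in-place broker-type update with boto3 (the cluster ARN is a placeholder):

    import boto3

    kafka = boto3.client("kafka")

    cluster_arn = "arn:aws:kafka:us-east-1:111122223333:cluster/my-cluster/uuid"
    # The cluster's current version string is required for optimistic locking.
    current = kafka.describe_cluster(ClusterArn=cluster_arn)["ClusterInfo"]["CurrentVersion"]

    kafka.update_broker_type(
        ClusterArn=cluster_arn,
        CurrentVersion=current,
        TargetInstanceType="kafka.m7g.large",
    )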

Amazon DocumentDB announces IAM database authentication

Published Date: 2024-06-25 17:00:00

Amazon DocumentDB (with MongoDB compatibility) now supports cluster authentication with AWS Identity and Access Management (IAM) user and role ARNs. Users and applications connecting to an Amazon DocumentDB cluster to read, write, update, or delete data can now use an AWS IAM identity to authenticate connection requests. These users and applications can use the same AWS IAM user or role when connecting to different DocumentDB clusters and to other AWS services. Applications running on Amazon EC2, AWS Lambda, Amazon ECS, or Amazon EKS do not need to manage passwords in the application when authenticating to Amazon DocumentDB using an AWS IAM role. These applications get their connection credentials through environment variables of an AWS IAM role, thus making it a passwordless mechanism. New and existing DocumentDB clusters can use AWS IAM to authenticate cluster connections without modifying the cluster configuration. You can also choose both password-based authentication and authentication with AWS IAM ARNs to authenticate different users and applications to a DocumentDB cluster. Amazon DocumentDB cluster authentication with AWS IAM ARNs is supported by drivers which are compatible with MongoDB 5.0+. Authentication with AWS IAM ARNs is available in Amazon DocumentDB instance-based 5.0 clusters across all supported regions. To learn more, please refer to the Amazon DocumentDB documentation, and see the Region Support for complete regional availability. To learn more about IAM, refer to the product detail page.
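
A sketch of a passwordless connection with PyMongo (requires the pymongo[aws] extra; the cluster endpoint is a placeholder, and IAM credentials are resolved from the environment or an attached role):

    from pymongo import MongoClient

    # No username/password: the MONGODB-AWS mechanism signs the handshake with
    # IAM credentials picked up from the environment (e.g., an attached role).
    client = MongoClient(
        "mongodb://mycluster.cluster-xxxxxxxx.us-east-1.docdb.amazonaws.com:27017",
        authMechanism="MONGODB-AWS",
        authSource="$external",
        tls=True,
        retryWrites=False,  # DocumentDB does not support retryable writes
    )
    print(client.admin.command("ping"))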

Amazon Redshift Serverless with lower base capacity available in the Asia Pacific (Mumbai) Region

Published Date: 2024-06-25 17:00:00

Amazon Redshift now allows you to get started with Amazon Redshift Serverless with a lower data warehouse base capacity configuration of 8 Redshift Processing Units (RPUs) in the AWS Asia Pacific (Mumbai) region. Amazon Redshift Serverless measures data warehouse capacity in RPUs, and you pay only for the duration of workloads you run, in RPU-hours on a per-second basis. Previously, the minimum base capacity required to run Amazon Redshift Serverless was 32 RPUs. With the new lower minimum of 8 RPUs, you have even more flexibility to support a diverse set of workloads, from small to large, based on your price performance requirements; base capacity can be adjusted in increments of 8 RPUs. Amazon Redshift Serverless allows you to run and scale analytics without having to provision and manage data warehouse clusters. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. With the new lower capacity configuration, you can use Amazon Redshift Serverless for production, test, and development environments at an optimal price point when a workload needs a small amount of compute. To get started, see the Amazon Redshift Serverless feature page, user documentation, and API Reference.
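
Lowering an existing workgroup to the new minimum is a one-call change; a boto3 sketch with a placeholder workgroup name:

    import boto3

    serverless = boto3.client("redshift-serverless")

    # Base capacity moves in increments of 8 RPUs; 8 is the new minimum.
    serverless.update_workgroup(
        workgroupName="dev-workgroup",  # placeholder
        baseCapacity=8,
    )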

Amazon Aurora now provides additional monitoring information during upgrades

Published Date: 2024-06-25 17:00:00

Amazon Aurora now provides additional granular monitoring information during upgrades for enhanced observability. Customers can use the additional granularity shared in Amazon Aurora Events to stay informed and better manage their database upgrades. Customers upgrade their database version, operating system, and other components to pick up security, compliance, and functional enhancements. When applying upgrades, Aurora now emits additional messages in Aurora Events indicating when the database cluster is online and when it is not. For database minor version and patch upgrades, customers can use these messages to get granular insight into the exact downtime incurred for their database, including the number of connections preserved during the upgrade. To learn more about how to monitor your upgrade process, see the technical documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. You can launch a new Amazon Aurora DB instance directly from the AWS Console or the AWS CLI; to get started, take a look at our getting started page.
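
One way to read those upgrade messages programmatically is the standard DescribeEvents API; a small boto3 sketch with a placeholder cluster identifier:

    import boto3

    rds = boto3.client("rds")

    # Pull the last 24 hours of cluster events, which now include the more
    # granular upgrade messages (online/offline transitions, downtime details).
    response = rds.describe_events(
        SourceType="db-cluster",
        SourceIdentifier="my-aurora-cluster",  # placeholder
        Duration=24 * 60,                      # minutes
    )
    for event in response["Events"]:
        print(event["Date"], event["Message"])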

Amazon EC2 C6a instances now available in additional regions

Published Date: 2024-06-25 17:00:00

Starting today, compute-optimized Amazon EC2 C6a instances are available in the Asia Pacific (Hong Kong) region. C6a instances are powered by third-generation AMD EPYC processors with a maximum frequency of 3.6 GHz, deliver up to 15% better price performance than comparable C5a instances, and offer 10% lower cost than comparable x86-based EC2 instances. These instances are built on the AWS Nitro System, a combination of dedicated hardware and a lightweight hypervisor that delivers practically all of the compute and memory resources of the host hardware to your instances for better overall performance and security.

AWS CodeBuild supports Arm-based workloads using AWS Graviton3

Published Date: 2024-06-25 17:00:00

AWS CodeBuild’s support for Arm-based workloads now runs on AWS Graviton3 without any additional configuration. In February 2021, CodeBuild launched support for native Arm builds on the second generation of AWS Graviton processors. Support for this platform allows customers to build and test on Arm without the need to emulate or cross-compile. Now, CodeBuild customers targeting Arm benefit from the enhanced capabilities of AWS Graviton3 processors: up to 25% higher performance over Graviton2, and up to 60% less energy for the same performance as comparable EC2 instances, enabling customers to reduce their carbon footprint in the cloud. CodeBuild’s support for Arm using Graviton3 is now available in: US East (N. Virginia), US East (Ohio), US West (Oregon), US West (N. California), Europe (Ireland), Europe (Frankfurt), Europe (London), Europe (Stockholm), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), and Canada (Central). To learn more about CodeBuild’s support for Arm, please visit our documentation. To learn more about how to get started, visit the AWS CodeBuild product page.
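
No configuration change is needed to pick up Graviton3; for reference, Arm builds are simply projects that use the ARM_CONTAINER environment type. A hedged boto3 sketch follows, in which the repository URL, image tag, and role ARN are placeholders:

    import boto3

    codebuild = boto3.client("codebuild")

    codebuild.create_project(
        name="arm-build-demo",
        source={"type": "GITHUB", "location": "https://github.com/example/repo"},  # placeholder
        artifacts={"type": "NO_ARTIFACTS"},
        environment={
            "type": "ARM_CONTAINER",  # Arm fleet, now backed by Graviton3
            "image": "aws/codebuild/amazonlinux2-aarch64-standard:3.0",
            "computeType": "BUILD_GENERAL1_LARGE",
        },
        serviceRole="arn:aws:iam::111122223333:role/CodeBuildServiceRole",  # placeholder
    )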

Amazon ElastiCache supports M7g and R7g Graviton3-based nodes in additional AWS regions

Published Date: 2024-06-25 17:00:00

Amazon ElastiCache now supports Graviton3-based M7g and R7g node families in additional AWS regions. ElastiCache Graviton3 nodes deliver improved price-performance compared to Graviton2. As an example, when running ElastiCache for Redis on an R7g.4xlarge node, you can achieve up to 28% increased throughput (read and write operations per second) and up to 21% improved P99 latency compared to running on an R6g.4xlarge. In addition, these nodes deliver up to 25% higher networking bandwidth. The M7g and R7g nodes are now available for Amazon ElastiCache in the following AWS regions: US East (N. Virginia and Ohio), US West (Oregon and N. California), Canada (Central), South America (Sao Paulo), Europe (Ireland, Frankfurt, London, Stockholm, Spain, and Paris (M7g only)), and Asia Pacific (Tokyo, Sydney, Mumbai, Hyderabad, Seoul, and Singapore). For complete information on pricing and regional availability, please refer to the Amazon ElastiCache pricing page. To get started, create a new cluster or upgrade to Graviton3 using the AWS Management Console.
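
Moving an existing replication group to Graviton3 nodes is a single modify call; a boto3 sketch with placeholder identifiers:

    import boto3

    elasticache = boto3.client("elasticache")

    # ElastiCache performs a rolling node replacement from r6g to r7g.
    elasticache.modify_replication_group(
        ReplicationGroupId="my-redis-group",  # placeholder
        CacheNodeType="cache.r7g.xlarge",
        ApplyImmediately=True,
    )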

Amazon Time Sync Service expands microsecond-accurate time to 27 EC2 instance types

Published Date: 2024-06-25 17:00:00

The Amazon Time Sync Service now supports clock synchronization within microseconds of UTC on 27 additional Amazon Elastic Compute Cloud (Amazon EC2) instance types in supported regions, including all C7gd, M7gd, and R7gd instances. Built on Amazon's proven network infrastructure and the AWS Nitro System, the service gives customers access to local, GPS-disciplined reference clocks on additional EC2 instance types. These clocks can be used to more easily order application events, measure one-way network latency, increase distributed application transaction speed, and incorporate in-region and cross-region scalability features, all while simplifying technical designs. Additionally, you can audit your clock accuracy from your instance to monitor the expected microsecond-range accuracy. Customers already using the Amazon Time Sync Service on these newly supported instance types will see improved clock accuracy automatically, without needing to adjust their AMI or NTP client settings. Customers can also use standard PTP clients and configure a PTP Hardware Clock (PHC) to get the best accuracy possible. Both NTP and PTP can be used without any updates to VPC configurations. Amazon Time Sync with microsecond-accurate time is available in the US East (N. Virginia) and Asia Pacific (Tokyo) regions on all R7g as well as C7i, M7i, R7i, C7a, M7a, R7a, M7g, C7gd, R7gd, and M7gd instance types, and we will be expanding support to additional AWS Regions. There is no additional charge for using this service. Instructions to configure, and more information on the Amazon Time Sync Service, are available in the EC2 User Guide.
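
For the PHC path, the EC2 User Guide describes pointing chrony at the Nitro-provided PTP device; a sketch of the relevant configuration line, assuming the device is exposed as /dev/ptp0 on your instance:

    # /etc/chrony.d/phc.conf - prefer the local PTP Hardware Clock
    refclock PHC /dev/ptp0 poll 0 delay 0.000010 prefer

After restarting chronyd, running chronyc tracking reports the measured offset, which is one way to audit the microsecond-range accuracy mentioned above.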

Amazon RDS for MySQL announces Extended Support minor 5.7.44-RDS.20240529

Published Date: 2024-06-25 17:00:00

Amazon Relational Database Service (RDS) for MySQL announces Amazon RDS Extended Support minor version 5.7.44-RDS.20240529. We recommend that you upgrade to this version to fix known security vulnerabilities and bugs in prior versions of MySQL. Learn more about the bug fixes and patches in this version in the Amazon RDS User Guide. Amazon RDS Extended Support provides you more time, up to three years, to upgrade to a new major version to help you meet your business requirements. During Extended Support, Amazon RDS will provide critical security and bug fixes for your MySQL databases on Aurora and RDS after the community ends support for a major version. You can run your MySQL databases on Amazon RDS with Extended Support for up to three years beyond a major version's end of standard support date. Learn more about Extended Support in the Amazon RDS User Guide and the Pricing FAQs. Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. See Amazon RDS for MySQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
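
Applying the Extended Support minor is an ordinary engine version modification; a boto3 sketch with a placeholder instance identifier (in production you may prefer deferring to the maintenance window):

    import boto3

    rds = boto3.client("rds")

    rds.modify_db_instance(
        DBInstanceIdentifier="my-mysql57-instance",  # placeholder
        EngineVersion="5.7.44-rds.20240529",
        ApplyImmediately=True,  # omit to apply during the next maintenance window
    )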

Amazon Redshift Concurrency Scaling is now available in three additional regions

Published Date: 2024-06-24 19:00:00

Amazon Redshift Concurrency Scaling is now available in the AWS Europe (Zurich), Europe (Spain), and Middle East (UAE) regions. Amazon Redshift Concurrency Scaling elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries. Concurrency Scaling resources are added to your Redshift cluster transparently in seconds, as concurrency increases, to process queries without wait time. Amazon Redshift customers with an active Redshift cluster earn up to one hour of free Concurrency Scaling credits for every 24 hours the main cluster runs, which is sufficient for the concurrency needs of most customers. Concurrency Scaling also lets you set usage controls, giving you predictable month-to-month costs even during periods of fluctuating analytical demand. To enable Concurrency Scaling, set the Concurrency Scaling Mode to Auto in the AWS Management Console. You can allocate Concurrency Scaling usage to specific user groups and workloads, control the number of Concurrency Scaling clusters that can be used, and monitor CloudWatch performance and usage metrics.
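
Outside the console, the usage controls are ordinary cluster parameters; as one example, capping the number of transient Concurrency Scaling clusters might look like the following boto3 sketch (parameter group name and cap value are placeholders):

    import boto3

    redshift = boto3.client("redshift")

    # Limit how many Concurrency Scaling clusters Redshift may attach.
    redshift.modify_cluster_parameter_group(
        ParameterGroupName="my-cluster-params",  # placeholder
        Parameters=[
            {
                "ParameterName": "max_concurrency_scaling_clusters",
                "ParameterValue": "4",
            }
        ],
    )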

Knowledge Bases for Amazon Bedrock now offers observability logs

Published Date: 2024-06-24 17:00:00

Knowledge Bases for Amazon Bedrock is a fully managed Retrieval-Augmented Generation (RAG) capability that allows you to connect foundation models (FMs) to internal company data sources to deliver relevant and accurate responses. Knowledge Bases now supports observability, offering a choice of log delivery destinations: CloudWatch Logs, Amazon S3 buckets, and Amazon Data Firehose streams. This capability provides enhanced visibility and timely insights into the execution of knowledge ingestion steps. Previously, Knowledge Bases provided basic statistics about content ingestion; the new logs go further, indicating whether each document was successfully processed or encountered failures. Having comprehensive, timely insights ensures customers can promptly determine when their documents are ready for use with the Retrieve and RetrieveAndGenerate API calls.

This capability is supported in all AWS Regions where Knowledge Bases is available. To learn more about these features and how to get started, refer to the Knowledge Bases for Amazon Bedrock documentation and visit the Amazon Bedrock console.
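
Vended log delivery like this is typically wired up through the CloudWatch Logs delivery APIs; a hedged boto3 sketch follows, where the ARNs are placeholders and the logType string is an assumption to verify against the Bedrock documentation:

    import boto3

    logs = boto3.client("logs")

    kb_arn = "arn:aws:bedrock:us-east-1:111122223333:knowledge-base/KB123EXAMPLE"      # placeholder
    group_arn = "arn:aws:logs:us-east-1:111122223333:log-group:/bedrock/kb-ingestion"  # placeholder

    # 1. Register the knowledge base as a delivery source.
    logs.put_delivery_source(
        name="kb-ingestion-logs",
        resourceArn=kb_arn,
        logType="APPLICATION_LOGS",  # assumed log type name; check the docs
    )

    # 2. Register the destination (a log group here; S3 and Firehose also work).
    destination = logs.put_delivery_destination(
        name="kb-log-destination",
        deliveryDestinationConfiguration={"destinationResourceArn": group_arn},
    )

    # 3. Connect the source to the destination.
    logs.create_delivery(
        deliverySourceName="kb-ingestion-logs",
        deliveryDestinationArn=destination["deliveryDestination"]["arn"],
    )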

Amazon OpenSearch Serverless now available in Canada (Central) region

Published Date: 2024-06-24 17:00:00

We are excited to announce the availability of Amazon OpenSearch Serverless in the Canada (Central) region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand. With the addition of Canada (Central), OpenSearch Serverless is now available in 13 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), South America (Sao Paulo), and Canada (Central). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Amazon RDS for MySQL supports new minor version 8.0.37

Published Date: 2024-06-24 17:00:00

Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor version 8.0.37. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MySQL, and to benefit from the bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.37 in the Amazon RDS user guide. You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows, or use Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
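
As a sketch of the Blue/Green route with boto3 (the deployment name and source ARN are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Stage the 8.0.37 upgrade on a green copy; switch over when validation passes.
    rds.create_blue_green_deployment(
        BlueGreenDeploymentName="mysql-8037-upgrade",                      # placeholder
        Source="arn:aws:rds:us-east-1:111122223333:db:my-mysql-instance",  # placeholder
        TargetEngineVersion="8.0.37",
    )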

AWS B2B Data Interchange announces automated 999 acknowledgements for healthcare transactions

Published Date: 2024-06-24 17:00:00

AWS B2B Data Interchange now automatically generates 999 functional acknowledgements to confirm receipt of individual X12 electronic data interchange (EDI) healthcare transactions and to report errors. This launch helps you maintain HIPAA compliance while automating delivery of 999 acknowledgements to trading partners that require them, and it adds to AWS B2B Data Interchange’s existing support for automated TA1 acknowledgements. Each acknowledgement generated by AWS B2B Data Interchange is stored in Amazon S3 alongside your transformed EDI and emits an Amazon EventBridge event. You can use these events to automatically send the acknowledgements to your trading partners via SFTP using AWS Transfer Family or any other EDI connectivity solution. 999 X231 acknowledgements are generated for all X12 version 5010 HIPAA transactions, while 999 acknowledgements are generated for all other healthcare-related X12 transactions. Support for automated acknowledgements is available in all AWS Regions where AWS B2B Data Interchange is available and is provided at no additional cost. To learn more about automated acknowledgements, visit the documentation. To get started with AWS B2B Data Interchange for building and running your event-driven EDI workflows, take the self-paced workshop or deploy the CloudFormation template.
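
One way to route the acknowledgement events onward is an EventBridge rule; in the sketch below the event source string and the target queue ARN are assumptions to verify against the B2B Data Interchange documentation:

    import boto3
    import json

    events = boto3.client("events")

    # Match events emitted by B2B Data Interchange (source string assumed).
    events.put_rule(
        Name="b2bi-ack-created",
        EventPattern=json.dumps({"source": ["aws.b2bi"]}),
    )

    # Forward matches to a queue that feeds the SFTP delivery step.
    events.put_targets(
        Rule="b2bi-ack-created",
        Targets=[{
            "Id": "ack-queue",
            "Arn": "arn:aws:sqs:us-east-1:111122223333:ack-queue",  # placeholder
        }],
    )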

Amazon RDS announces integration with AWS Secrets Manager in the AWS GovCloud (US) Regions

Published Date: 2024-06-24 17:00:00

Amazon RDS now supports integration with AWS Secrets Manager in the AWS GovCloud (US) Regions to streamline how you manage the master user password for your RDS database instances. With this feature, RDS fully manages the master user password and stores it in AWS Secrets Manager whenever your RDS database instances are created, modified, or restored. The feature supports the entire lifecycle of your RDS master user password, including regular and automatic password rotations, removing the need to manage rotations with custom Lambda functions. RDS integration with AWS Secrets Manager improves your database security by ensuring the RDS master user password is never visible in plaintext to administrators or engineers during your database creation workflow. Furthermore, you have the flexibility to encrypt the secret using your own customer managed key or a KMS key provided by AWS Secrets Manager. Together, RDS and AWS Secrets Manager relieve you of complex credential management activities such as setting up custom Lambda functions to manage password rotations. For more information on supported RDS and Aurora engines, versions, and region availability, please refer to the RDS and Aurora user guides.
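
Opting in at creation time is a single flag; a minimal boto3 sketch with placeholder identifiers:

    import boto3

    rds = boto3.client("rds")

    rds.create_db_instance(
        DBInstanceIdentifier="my-database",  # placeholder
        DBInstanceClass="db.t3.medium",
        Engine="postgres",
        MasterUsername="dbadmin",
        AllocatedStorage=20,
        ManageMasterUserPassword=True,       # RDS creates and rotates the secret
        # MasterUserSecretKmsKeyId="alias/my-cmk",  # optional customer managed key
    )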
