2025 - Week 4 (20 Jan - 26 Jan)
Ankur Patel
3x AWS certified | AWS Community Builder | Cloud Enabler and Practitioner | Solutions Architect | FullStack | DevOps | DSML | 6x Sisense certified | Blogger | Photographer & Traveller
AWS announces new edge location in the Kingdom of Saudi Arabia
Published Date: 2025-01-24 22:40:00
Amazon Web Services (AWS) announces expansion in the Kingdom of Saudi Arabia by launching a new Amazon CloudFront edge location in Jeddah. The new AWS edge location brings the full suite of benefits provided by Amazon CloudFront, a secure, highly distributed, and scalable content delivery network (CDN) that delivers static and dynamic content, APIs, and live and on-demand video with low latency and high performance. All Amazon CloudFront edge locations are protected against infrastructure-level DDoS threats with AWS Shield Standard that uses always-on network flow monitoring and in-line mitigation to minimize application latency and downtime. You also have the ability to add additional layers of security for applications to protect them against common web exploits and bot attacks by enabling AWS Web Application Firewall (WAF). Traffic delivered from this edge location is included within the Middle East region pricing. To learn more about AWS edge locations, see CloudFront edge locations.
Amazon Bedrock now offers multimodal support for Cohere Embed 3 Multilingual and Embed 3 English
Published Date: 2025-01-24 18:46:00
Amazon Bedrock now offers multimodal support for Cohere Embed 3 Multilingual and Embed 3 English, foundation models that generate embeddings from both text and images. This powerful addition to Amazon Bedrock can enable enterprises to unlock significant value from their vast amounts of data, including visual content. With this new capability, businesses can build systems that accurately and quickly search important multimodal assets such as complex reports, product catalogs, and design files. According to Cohere, Embed 3 delivers exceptional performance on various retrieval tasks and is engineered to handle diverse data types. Supporting search functionality for both text and images, and in over 100 languages (Embed 3 Multilingual), it is well-suited for global applications. These models are designed to process and interpret varied datasets, effectively managing inconsistencies typical in real-world scenarios. This versatility makes Embed 3 particularly valuable for enterprises seeking to enhance their search and retrieval systems across different data formats. By leveraging this technology, businesses can develop more comprehensive search applications, leading to improved user experiences and increased efficiency across various use cases. Cohere Embed 3 with multimodal support is now available in Amazon Bedrock and is supported in 12 AWS Regions. For more information on supported Regions, visit the Amazon Bedrock Model Support by Regions guide. For more details about Cohere Embed 3 and its capabilities, visit the Cohere product page. To get started with Cohere Embed 3 in Amazon Bedrock, visit the Amazon Bedrock console.
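As an illustration, here is a minimal boto3 sketch of calling Cohere Embed 3 English through the Bedrock Runtime InvokeModel API. The text-embedding request follows Cohere's documented Embed schema; the image request fields (an input_type of "image" and a base64 data URI in an "images" list) are assumptions drawn from Cohere's own API docs and should be verified against the Bedrock model parameters page. File names are placeholders.

```python
import base64
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Text embeddings: the "texts" / "input_type" fields follow Cohere's Embed API.
text_body = {"texts": ["quarterly sales report"], "input_type": "search_query"}
text_resp = bedrock.invoke_model(
    modelId="cohere.embed-english-v3",
    body=json.dumps(text_body),
)
print(json.loads(text_resp["body"].read())["embeddings"][0][:5])

# Image embeddings (multimodal): field names are assumptions based on Cohere's docs.
with open("product-catalog-page.png", "rb") as f:
    data_uri = "data:image/png;base64," + base64.b64encode(f.read()).decode()
image_body = {"images": [data_uri], "input_type": "image"}
image_resp = bedrock.invoke_model(
    modelId="cohere.embed-english-v3",
    body=json.dumps(image_body),
)
print(len(json.loads(image_resp["body"].read())["embeddings"][0]))
```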
Amazon EKS and Amazon EKS Distro now support Kubernetes version 1.32
Published Date: 2025-01-24 18:45:00
Kubernetes version 1.32 introduced several new features and bug fixes, and AWS is excited to announce that you can now use Amazon Elastic Kubernetes Service (EKS) and Amazon EKS Distro to run Kubernetes version 1.32. Starting today, you can create new EKS clusters using version 1.32 and upgrade existing clusters to version 1.32 using the EKS console, the eksctl command line interface, or through an infrastructure-as-code tool. Kubernetes version 1.32 introduces several improvements including stable support for custom resource field selectors and auto removal of persistent volume claims created by stateful sets. This release removes the v1beta3 API version of FlowSchema and PriorityLevelConfiguration. To learn more about the changes in Kubernetes version 1.32, see our documentation and the Kubernetes project release notes. EKS now supports Kubernetes version 1.32 in all the AWS Regions where EKS is available, including the AWS GovCloud (US) Regions. You can learn more about the Kubernetes versions available on EKS and instructions to update your cluster to version 1.32 by visiting EKS documentation. You can use EKS cluster insights to check if there are any issues that can impact your Kubernetes cluster upgrades. EKS Distro builds of Kubernetes version 1.32 are available through ECR Public Gallery and GitHub. Learn more about the EKS version lifecycle policies in the documentation.
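For example, here is a minimal boto3 sketch of upgrading an existing cluster's control plane to 1.32; the cluster name is a placeholder, and managed node groups and add-ons still need to be upgraded separately.

```python
import boto3

eks = boto3.client("eks", region_name="us-west-2")

# Upgrade the control plane of an existing cluster to Kubernetes 1.32.
# "my-cluster" is a placeholder; node groups and add-ons are updated separately.
update = eks.update_cluster_version(name="my-cluster", version="1.32")
print(update["update"]["id"], update["update"]["status"])

# Check on the control plane update until it finishes.
status = eks.describe_update(name="my-cluster", updateId=update["update"]["id"])
print(status["update"]["status"])
```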
Amazon Q Business now supports insights from images uploaded in chat
Published Date: 2025-01-24 18:00:00
Amazon Q Business, the most capable generative AI-powered assistant for finding information, gaining insight, and taking action at work, now offers capabilities to answer questions and extract insights from images uploaded in the chat. This new feature allows users to upload images directly to the Amazon Q Business chat and ask questions related to the content of those images. Users can seamlessly interact with visual content, enabling them to use image files as a data source for a richer image analysis experience. For instance, a user can upload an invoice image and promptly ask Amazon Q Business to categorize the expenses. Similarly, a business user can share a technical architecture diagram to request an explanation of it or to ask other specific questions related to its components and design. The new visual analysis feature is available in all AWS Regions where Amazon Q Business is available. To learn more, visit the Amazon Q Business product page. To get started with this new feature, visit the Amazon Q Business console or refer to our documentation for integration guidelines.
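A rough boto3 sketch of passing an image alongside a chat message is shown below; the application ID and file name are placeholders, and the ChatSync attachment fields (and any identity requirements for your application) should be checked against the current Amazon Q Business API reference.

```python
import boto3

qbusiness = boto3.client("qbusiness", region_name="us-east-1")

# Upload an invoice image alongside the question; IDs and file names are placeholders.
with open("invoice.png", "rb") as f:
    image_bytes = f.read()

response = qbusiness.chat_sync(
    applicationId="<your-q-business-application-id>",
    userMessage="Categorize the expenses in this invoice.",
    attachments=[{"name": "invoice.png", "data": image_bytes}],
)
print(response["systemMessage"])
```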
Amazon Redshift Multi-AZ is generally available for RA3 clusters in 2 additional AWS regions
Published Date: 2025-01-24 18:00:00
Amazon Redshift is announcing the general availability of Multi-AZ deployments for RA3 clusters in the Asia Pacific (Thailand) and Mexico (Central) AWS regions. Redshift Multi-AZ deployments support running your data warehouse in multiple AWS Availability Zones (AZ) simultaneously and continue operating in unforeseen failure scenarios. A Multi-AZ deployment raises the Amazon Redshift Service Level Agreement (SLA) to 99.99% and delivers a highly available data warehouse for the most demanding mission-critical workloads. Enterprise customers with mission-critical workloads require a data warehouse with fast failover times and simplified operations that minimize impact to applications. Redshift Multi-AZ deployment helps meet these demands by reducing recovery time and automatically recovering in another AZ during an unlikely event such as an AZ failure. A Redshift Multi-AZ data warehouse also maximizes query processing throughput by operating in multiple AZs and using compute resources from both AZs to process read and write queries. Amazon Redshift Multi-AZ is now generally available for RA3 clusters through the Redshift Console, API and CLI. For all regions where Multi-AZ is available, see the supported AWS regions. To learn more about Amazon Redshift Multi-AZ, see the Amazon Redshift Reliability page and Amazon Redshift Multi-AZ documentation page.
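As a sketch, the boto3 call below provisions an RA3 cluster with Multi-AZ enabled and converts an existing one; identifiers, credentials, and sizing are placeholders, and the MultiAZ parameter, minimum node counts, and the Region code for Asia Pacific (Thailand) should be confirmed in the current Redshift API reference.

```python
import boto3

# Region code assumed for Asia Pacific (Thailand); verify before use.
redshift = boto3.client("redshift", region_name="ap-southeast-7")

# Create a Multi-AZ RA3 cluster; identifiers and credentials are placeholders.
redshift.create_cluster(
    ClusterIdentifier="analytics-multi-az",
    ClusterType="multi-node",
    NodeType="ra3.4xlarge",
    NumberOfNodes=2,
    MasterUsername="awsuser",
    MasterUserPassword="<strong-password>",
    MultiAZ=True,
)

# Existing RA3 clusters can be converted in place.
redshift.modify_cluster(ClusterIdentifier="analytics-existing", MultiAZ=True)
```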
Amazon Aurora PostgreSQL Limitless Database now supports PostgreSQL 16.6
Published Date: 2025-01-24 18:00:00
Amazon Aurora PostgreSQL Limitless Database is now available with PostgreSQL version 16.6 compatibility. This release contains product improvements and bug fixes made by the PostgreSQL community, along with Aurora Limitless-specific security and feature improvements such as support for GIN operator classes with B-tree behavior (btree_gin) and support for DISCARD. Aurora PostgreSQL Limitless Database makes it easy for you to scale your relational database workloads by providing a serverless endpoint that automatically distributes data and queries across multiple Amazon Aurora Serverless instances while maintaining the transactional consistency of a single database. Aurora PostgreSQL Limitless Database offers capabilities such as distributed query planning and transaction management, removing the need for you to create custom solutions or manage multiple databases to scale. As your workloads increase, Aurora PostgreSQL Limitless Database adds additional compute resources while staying within your specified budget, so there is no need to provision for peak, and compute automatically scales down when demand is low. Aurora PostgreSQL Limitless Database is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm). For pricing details, visit Amazon Aurora pricing. To learn more, read the Aurora PostgreSQL Limitless Database documentation and get started by creating an Aurora PostgreSQL Limitless Database in only a few steps in the Amazon RDS console.
Amazon EC2 I7ie instances now available in AWS Europe (Frankfurt, London), and Asia Pacific (Tokyo) regions
Published Date: 2025-01-24 18:00:00
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) I7ie instances are available in the AWS Europe (Frankfurt, London) and Asia Pacific (Tokyo) regions. Designed for large storage I/O intensive workloads, I7ie instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120TB local NVMe storage density (highest in the cloud) for storage optimized instances and offer up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 different virtual sizes and deliver up to 100Gbps of network bandwidth and 60Gbps of bandwidth for Amazon Elastic Block Store (EBS). To learn more, visit the I7ie instances page.
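For instance, a minimal boto3 sketch of launching an I7ie instance in one of the newly supported Regions; the AMI ID, key pair, and the specific size are placeholders to adjust for your account.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")  # Europe (Frankfurt)

# Launch a storage-optimized I7ie instance; AMI and key name are placeholders.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="i7ie.2xlarge",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
)
print(response["Instances"][0]["InstanceId"])
```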
AWS Transfer Family supports custom directory locations to store AS2 files
Published Date: 2025-01-24 18:00:00
AWS Transfer Family now allows you to customize the directories for your Applicability Statement 2 (AS2) files, including the inbound AS2 messages, message disposition notifications (MDN), and other metadata files. This enables you to separate your AS2 messages from the MDN files and status files generated by the service, and automate downstream processing of the messages received from your trading partners.
AS2 is a business-to-business messaging protocol used to transfer Electronic Data Interchange (EDI) documents across various industries, including healthcare, retail, supply chain and logistics. You can now specify separate directory locations to store your inbound AS2 messages, the associated MDN files, and the JSON status files generated by the service. This option overrides the service-default directory structure for storing these file types, enabling easier automation of downstream processing for your AS2 messages using other AWS services. For example, you can directly store inbound AS2 messages in the input directory used for AWS B2B Data Interchange, facilitating automatic conversion of X12 EDI contents into common data representations such as JSON or XML. Support for custom directory locations for AS2 files is available in all AWS Regions where AWS Transfer Family is available. To learn more about AWS Transfer Family’s AS2 offering, visit the documentation and take the self-paced workshop.
Amazon EFS is now available in the AWS Mexico (Central) Region
Published Date: 2025-01-23 21:20:00
Customers can now create file systems using Amazon Elastic File System (Amazon EFS) in the AWS Mexico (Central) Region. Amazon EFS is designed to provide serverless, fully elastic file storage that lets you share file data without provisioning or managing storage capacity and performance. It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files. Because Amazon EFS has a simple web services interface, you can create and configure file systems quickly and easily. The service is designed to manage file storage infrastructure for you, meaning that you can avoid the complexity of deploying, patching, and maintaining complex file system configurations. For more information, visit the Amazon EFS product page, and see the AWS Region Table for complete regional availability information.
AWS Elastic Beanstalk improves scaling and deployment speeds for Windows instances with EC2 Fast Launch
Published Date: 2025-01-23 19:22:00
With AWS Elastic Beanstalk you can easily deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Now, Elastic Beanstalk automatically launches Windows instances with EC2 Fast Launch enabled on currently supported Windows platform versions. This new functionality speeds up provisioning of Windows instances at scale, reducing downtime and operational costs during deployment and saving time for development and operations teams. Elastic Beanstalk support for EC2 Fast Launch for Windows instances is generally available in commercial regions where Elastic Beanstalk is available, including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions. For more information about Windows Fast Launch support, please read the documentation. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
AWS Marketplace launches automated version archiving for AMI and container products
Published Date: 2025-01-23 18:00:00
Today, AWS Marketplace announces the availability of automated archival of old, unused product versions that are no longer available publicly for subscription (restricted). This feature is available for Amazon Machine Image (AMI), AMI with CloudFormation templates, and container products. With this release, AWS Marketplace is streamlining the version management experience for AWS customers and sellers. With automated version archival, any product version that has already been restricted by a seller for longer than two years will be archived. Archived versions will no longer be available to launch from AWS Marketplace for new customers; however, existing users can continue to use them through launch templates and Amazon EC2 Auto Scaling groups by specifying the AMI ID. Any archived version that has not been used to launch any new instances in the previous 13 months will be deleted. Once an archived version is deleted, it is no longer available to launch for new or existing users. Now, AWS customers see only the latest versions of products in AWS Marketplace, reducing the risk of using outdated versions. For sellers, it simplifies product management by automatically removing unused older versions. This capability is enabled for all AMI and container products, and no additional action is needed from sellers. To learn more about this feature, see the AWS Marketplace Seller Guide.
AWS Elastic Beanstalk adds default support of EC2 Launch Template when creating new environments
Published Date: 2025-01-23 18:00:00
With AWS Elastic Beanstalk, you can easily deploy and manage applications in AWS without worrying about configuring the infrastructure that runs those applications. Now, AWS Elastic Beanstalk uses EC2 launch templates by default when creating new environments, following the deprecation of EC2 launch configurations. With this change, you no longer need to manually set the configuration options described in the documentation to direct AWS Elastic Beanstalk to use an EC2 launch template when creating new environments.
Elastic Beanstalk’s support for EC2 Launch Template is generally available in commercial regions where Elastic Beanstalk is available including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions.
For more information about launch template support in Elastic Beanstalk, please read our developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
AWS Resource Groups now supports 172 more resource types
Published Date: 2025-01-23 18:00:00
Today, AWS Resource Groups is adding support for an additional 172 resource types for tag-based Resource Groups. Customers can now use Resource Groups to group and manage resources from services such as AWS Entity Resolution, Amazon Personalize, and Amazon Q Apps. AWS Resource Groups enables you to model, manage and automate tasks on large numbers of AWS resources by logically grouping your resources. You can create collections of resources such as applications, projects, or workloads, and manage them on dimensions such as cost, performance, and compliance in AWS services such as myApplications, AWS Systems Manager and Amazon CloudWatch. AWS Resource Groups expanded resource type coverage is available in all AWS Regions, including the AWS GovCloud (US) Regions. You can access AWS Resource Groups through the AWS Management Console, the AWS SDK APIs, and the AWS CLI. For more information about grouping resources, see the AWS Resource Groups user guide and the list of supported resource types. To get started, visit AWS Resource Groups console.
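As a sketch, the following boto3 calls create a tag-based group that could now include resources from the newly supported services; the group name, tag key, and tag values are placeholders.

```python
import json
import boto3

rg = boto3.client("resource-groups", region_name="us-east-1")

# Create a tag-based group: resources tagged project=entity-resolution-poc from
# any supported service (now including AWS Entity Resolution, Amazon Personalize,
# and Amazon Q Apps) become members automatically.
rg.create_group(
    Name="entity-resolution-poc",
    ResourceQuery={
        "Type": "TAG_FILTERS_1_0",
        "Query": json.dumps({
            "ResourceTypeFilters": ["AWS::AllSupported"],
            "TagFilters": [{"Key": "project", "Values": ["entity-resolution-poc"]}],
        }),
    },
)

# List the resources currently in the group.
print(rg.list_group_resources(Group="entity-resolution-poc"))
```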
Luma AI's Ray2 visual AI model now available in Amazon Bedrock
Published Date: 2025-01-23 18:00:00
Luma AI's new video-generating AI foundation model (FM), Ray2, is now available in Amazon Bedrock. AWS is the first and only cloud provider to offer fully managed models from Luma AI, expanding creative possibilities for developers and businesses using AWS services.
Luma Ray2 is a large-scale video-generation model capable of creating realistic visuals with fluid, natural movement. With Luma Ray2 in Amazon Bedrock, you can generate production-ready video clips with seamless animations, ultrarealistic details, and logical event sequences with natural language prompts, removing the need for technical prompt engineering. Ray2 currently supports 5- and 9-second video generations with 540p and 720p resolution, making the model ideal for a variety of professional creative applications. Streamline the creative process from concept to execution by using Ray2 video generations for content creation, entertainment, advertising, and media use cases. Content creators can swiftly generate video clips for product promotion and storytelling. Product teams can rapidly visualize concepts and create video mockups for market testing. Production studios can create previsualizations, generate realistic backgrounds, and produce initial versions of special effects sequences.
Luma AI's Ray2 model is now available in the US West (Oregon) AWS Region. To learn more about Ray2 and how to use it in your projects, read the AWS News Blog, visit the Luma AI in Amazon Bedrock page, the Amazon Bedrock console, or check out the Amazon Bedrock documentation.
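A heavily hedged boto3 sketch of invoking the model asynchronously follows; the model ID and the modelInput fields are assumptions to confirm against the Amazon Bedrock model catalog and the Luma Ray2 parameter documentation, and the S3 output bucket is a placeholder.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-west-2")

# Video generation runs asynchronously. The model ID and input fields below are
# assumptions; check the Bedrock model catalog and Luma Ray2 parameters.
job = bedrock.start_async_invoke(
    modelId="luma.ray-v2:0",
    modelInput={"prompt": "A slow dolly shot through a rain-soaked neon city at night"},
    outputDataConfig={"s3OutputDataConfig": {"s3Uri": "s3://my-video-output-bucket/ray2/"}},
)
print(job["invocationArn"])

# Poll the async job; the finished clip is written to the S3 location above.
status = bedrock.get_async_invoke(invocationArn=job["invocationArn"])
print(status["status"])
```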
Amazon Bedrock Flows announces preview of multi-turn conversation support
Published Date: 2025-01-23 18:00:00
Amazon Bedrock Flows enables you to link foundation models (FMs), Amazon Bedrock Prompts, Amazon Bedrock Agents, Amazon Bedrock Knowledge Bases, Amazon Bedrock Guardrails and other AWS services together to build and scale pre-defined generative AI workflows. Today, we announce the preview of multi-turn conversation support for agent nodes in Flows. This feature enables dynamic, back-and-forth conversations between users and flows, similar to a natural dialogue. Customers who use agents to execute steps within flows have indicated that sometimes the agents don’t have all the context required to successfully complete the action. With this preview launch, an agent node can intelligently pause the flow’s execution and request user-specific information. After the user sends the requested information, the flow seamlessly resumes the execution with the additional inputs. This feature enables a more interactive and context-aware experience, because the flow can adapt its behavior based on user responses. The preview of multi-turn conversation support in Flows is now available in all regions where Flows is available. To get started, see this blog and the AWS user guide.
Amazon ElastiCache now supports 1-click connectivity setup between EC2 and your cache
Published Date: 2025-01-23 18:00:00
Starting today, you can easily connect Amazon ElastiCache clusters to an Amazon Elastic Compute Cloud (Amazon EC2) instance directly from the AWS Management Console. You can also connect to your cache using AWS CloudShell to execute commands without additional setup. With a single click, you can establish secure connectivity between your cache and an EC2 instance, following AWS recommended best practices. ElastiCache automatically configures VPCs, security groups, and network settings, eliminating the need for manual tasks like setting up subnets and ingress/egress rules. This streamlines the process for new users and developers, enabling them to launch a cache instance and connect it to an application within minutes. You can also choose to connect to your cache from the Console using AWS CloudShell. Just click the “Connect to cache” button in the new “Connectivity and Security” tab. This will open a new AWS CloudShell session and connect to your cache using the valkey-cli tool. Once connected, you can execute common Valkey commands, including reading data (e.g. GET <key>) and writing data (e.g. SET <key> <value>). This lets you test cache functionality directly in the console, without needing to connect to it from an EC2 instance. Learn more about setting up connectivity to a compute resource from your ElastiCache cluster in the Amazon ElastiCache user guide.
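Beyond the console and CloudShell experience, here is a minimal sketch of the same SET/GET check run from an EC2 instance using the redis-py client, which speaks the Valkey protocol; the endpoint is a placeholder, and it assumes the 1-click setup has already opened network access and that TLS is enabled on the cache.

```python
import redis  # redis-py is protocol-compatible with Valkey

# Placeholder endpoint: use the endpoint shown on your cache's
# "Connectivity and Security" tab. Assumes the 1-click setup already
# configured security groups and that in-transit encryption is on.
cache = redis.Redis(
    host="my-cache-xxxxxx.serverless.use1.cache.amazonaws.com",
    port=6379,
    ssl=True,
)

cache.set("greeting", "hello from EC2")   # SET <key> <value>
print(cache.get("greeting"))              # GET <key> -> b'hello from EC2'
```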
Amazon CloudWatch allows alarming on data up to 7 days old
Published Date: 2025-01-23 18:00:00
Amazon CloudWatch now lets customers evaluate metrics data for an extended duration of up to 7 days, a 7x increase from the previous limit of 24 hours. This enhancement empowers customers to monitor the health of longer-running or infrequent processes, such as daily data loading jobs or day-over-day performance trends, offering deeper insights into their resources and applications. Customers now have the flexibility to leverage CloudWatch alarms for use cases like near-real-time metric tracking as well as for monitoring patterns that span multiple days. Amazon CloudWatch alarming on multi-day data is now available in all AWS Regions, including the AWS GovCloud (US) Regions. To alarm on multiple days of data, create or update an alarm using the CloudWatch console or Command Line Interface (CLI), specify a period of at least 3,600 seconds (1 hour), and set the number of datapoints that you want to compare against the threshold. For more information, visit the CloudWatch alarms documentation section.
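For example, a hedged boto3 sketch of an alarm that evaluates three one-day datapoints (three days of data) for a daily load job's custom metric; the namespace, metric name, and dimensions are placeholders for your own metric.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm if the daily load job processed no rows for 3 consecutive days.
# Namespace, metric, and dimensions are placeholders for your own metric.
cloudwatch.put_metric_alarm(
    AlarmName="daily-load-job-stalled",
    Namespace="MyApp/Batch",
    MetricName="RowsLoaded",
    Dimensions=[{"Name": "JobName", "Value": "daily-load"}],
    Statistic="Sum",
    Period=86400,              # 1-day period (must be at least 3,600 seconds)
    EvaluationPeriods=3,       # 3 datapoints = 3 days of data (up to 7 days allowed)
    DatapointsToAlarm=3,
    Threshold=1,
    ComparisonOperator="LessThanThreshold",
    TreatMissingData="breaching",
)
```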
Amazon CloudWatch Observability add-on launches one step onboarding for EKS workloads
Published Date: 2025-01-22 20:50:00
You can now enable the Amazon CloudWatch Observability add-on on your EKS cluster with 1-click when provisioning. With 1-click enablement, the CloudWatch Observability add-on turns on CloudWatch Container Insights and Application Signals together, enabling you to understand the health and performance of your applications out of the box. The CloudWatch Observability add-on integrates with EKS Pod Identity so that you can simply create a recommended IAM role for the add-on and re-use it across your clusters when creating them, saving you time and effort. Previously, you needed to create your clusters first, wait for their status to become active, and then install the CloudWatch add-on while managing the required IAM permissions separately. With this launch, you can now install the Amazon CloudWatch Observability add-on when creating your clusters and launch them with observability enabled, making observability telemetry available in CloudWatch out of the box. You can then use curated dashboards from CloudWatch Application Signals and CloudWatch Container Insights to take proactive actions in reducing application disruptions by isolating anomalies and troubleshooting faster. The CloudWatch Observability add-on is now available on Amazon EKS in all commercial AWS Regions, including the AWS GovCloud (US) Regions. You can install, configure, and update the add-on with just a few clicks in the Amazon EKS console, APIs, AWS Command Line Interface (AWS CLI), and AWS CloudFormation. To get started, follow the user guide.
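A hedged boto3 sketch of installing the add-on on an existing cluster is shown below; the cluster name and role ARN are placeholders, and the podIdentityAssociations field should be verified against the current EKS CreateAddon API reference.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Install the CloudWatch Observability add-on on an existing cluster.
# Cluster name and IAM role ARN are placeholders; the role should carry the
# permissions recommended in the CloudWatch Observability user guide.
eks.create_addon(
    clusterName="my-cluster",
    addonName="amazon-cloudwatch-observability",
    podIdentityAssociations=[
        {
            "serviceAccount": "cloudwatch-agent",
            "roleArn": "arn:aws:iam::111122223333:role/eks-cloudwatch-observability",
        }
    ],
)

# Check the add-on's installation status.
addon = eks.describe_addon(
    clusterName="my-cluster",
    addonName="amazon-cloudwatch-observability",
)
print(addon["addon"]["status"])
```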
AWS Marketplace introduces 8 decimal place precision for usage pricing
Published Date: 2025-01-22 19:00:00
AWS Marketplace sellers can now price usage rates with up to 8 decimal places. This enhancement improves the precision of pay-as-you-go pricing where per-unit costs can be fractions of a cent ($0.00000001), enabling more accurate billing calculations for customers. Previously, AWS Marketplace sellers were limited to using only 3 decimal places for usage pricing, restricting flexibility in pricing pay-as-you-go products. This increased precision gives sellers more control over pricing strategies. Sellers can now set more granular per-unit costs (for example, per megabyte or gigabyte), allowing for more accurate billing. This also benefits sellers operating in different currencies, allowing them to set more accurate equivalent US dollar (USD) prices in AWS Marketplace. Additionally, sellers can maintain specific profit margins with greater precision. For example, resellers can set a retail price of $0.0033 to maintain an exact 10% margin on a $0.003 wholesale price. These improvements offer sellers greater control and precision in pricing, leading to more granular rates for customers and improved profitability for sellers, especially in markets where small price differences matter. This feature is available for software as a service (SaaS), server, and AWS Data Exchange products in all AWS Regions where AWS Marketplace is available. To learn more, access AWS Marketplace Product Pricing documentation and AWS Marketplace API documentation. Start using this feature through the AWS Marketplace Management Portal or the AWS Marketplace Catalog API.
CloudWatch provides execution plan capture for Aurora PostgreSQL
Published Date: 2025-01-22 18:00:00
Amazon CloudWatch Database Insights now collects the query execution plans of top SQL queries running on Aurora PostgreSQL instances, and stores them over time. This feature helps you identify if a change in the query execution plan is the cause of performance degradation or a stalled query. Execution plan capture for Aurora PostgreSQL is available exclusively in the Advanced mode of CloudWatch Database Insights. A query execution plan is a sequence of steps that database engines use to retrieve or modify data in a relational database management system (RDBMS). The RDBMS query optimizers may not always choose the most optimal execution plan from a set of alternative ways to execute a given query. Hence, database users sometimes need to manually examine and tune the plans to improve performance. This feature allows you to visualize multiple plans of a SQL query and compare them. It can help you determine if a change in performance of a SQL query is due to a different query execution plan within minutes. You can get started with this feature by enabling Database Insights Advanced mode on your Aurora PostgreSQL clusters using the RDS service console, AWS APIs, or the AWS SDK. CloudWatch Database Insights delivers database health monitoring aggregated at the fleet level, as well as instance-level dashboards for detailed database and SQL query analysis. CloudWatch Database Insights is available in all public AWS Regions and offers vCPU-based pricing – see the pricing page for details. For further information, visit the Database Insights documentation.
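A hedged boto3 sketch of switching an Aurora PostgreSQL cluster to Advanced mode follows; the DatabaseInsightsMode parameter and its companion Performance Insights settings are assumptions to confirm against the current RDS ModifyDBCluster reference, and the cluster identifier is a placeholder.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Enable Database Insights Advanced mode on an Aurora PostgreSQL cluster.
# Parameter names are assumptions to verify against the RDS API reference;
# Advanced mode requires Performance Insights with extended retention.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-postgres-cluster",
    DatabaseInsightsMode="advanced",
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=465,
    ApplyImmediately=True,
)
```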
AWS Client VPN announces support for concurrent VPN connections
Published Date: 2025-01-22 18:00:00
Today, AWS announces the general availability of concurrent VPN connections for AWS Client VPN. This feature allows you to securely connect to multiple Client VPN connections simultaneously, enabling access to your resources across different work environments. AWS Client VPN allows your users to securely connect to your network remotely from any location. Previously, you could only connect to one VPN profile at a time. This limited your access to only one network. To access another network, you were required to disconnect and reconnect to a different VPN profile. With this launch, you can connect to multiple VPN profiles simultaneously without switching. For example, software developers using the AWS-provided Client VPN desktop client can now connect to development, test, and production environments concurrently. This feature allows seamless parallel connections to all required environments, significantly improving productivity for end users. This feature is available only with AWS-supplied Client VPN client version 5.0+. You can download this version following the steps here. This feature and the required client version are available at no additional cost in all AWS regions where AWS Client VPN is generally available. To learn more, see the AWS Client VPN documentation.
AWS IoT SiteWise now supports null and NaN data types
Published Date: 2025-01-22 18:00:00
Today, Amazon Web Services, Inc. announces that AWS IoT SiteWise now supports NULL and NaN (Not a Number) data of bad or uncertain data quality from industrial data sources. AWS IoT SiteWise is a managed service that makes it easy to collect, store, organize, and analyze data from industrial equipment at scale. This new feature enhances the service's capability to handle a wider range of data, improving its utility for industrial applications. With this update, AWS IoT SiteWise now collects, stores, and retrieves real-time or historical NULL values for all supported data types. It also supports NaN values of the double data type. Capturing NULL and NaN data is critical for various industrial use cases, including compliance reporting, observability, and downstream analytics, while also simplifying data set conditioning and cleaning for advanced analytics and machine learning applications. This new feature is available in all AWS Regions where AWS IoT SiteWise is available. To learn more about data ingestion and processing data quality on AWS IoT SiteWise, see AWS IoT SiteWise Documentation.
Amazon Connect now provides daily headcount projections in capacity plan downloads
Published Date: 2025-01-22 18:00:00
Amazon Connect now provides daily headcount projections in capacity plan downloads, enhancing your ability to review staffing requirements with greater precision. While capacity plans already provided weekly and monthly projections, this launch allows you to access day-by-day headcount requirements for up to 64 weeks into the future. This granular view simplifies key staffing and hiring decisions, such as how many workers to hire while accounting for seasonality and applying different shrinkage assumptions at a day level. This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
Amazon DynamoDB introduces warm throughput for tables and indexes in the AWS GovCloud (US) Regions
Published Date: 2025-01-22 18:00:00
Amazon DynamoDB now supports a new warm throughput value and the ability to easily pre-warm DynamoDB tables and indexes in the AWS GovCloud (US) Regions. The warm throughput value provides visibility into the number of read and write operations your DynamoDB tables can readily handle, while pre-warming lets you proactively increase the value to meet future traffic demands. DynamoDB automatically scales to support workloads of virtually any size. However, when you have peak events like product launches or shopping events, request rates can surge 10x or even 100x in a short period of time. You can now check your tables’ warm throughput value to assess if your table can handle large traffic spikes for peak events. If you expect an upcoming peak event to exceed the current warm throughput value for a given table, you can pre-warm that table in advance of the peak event to ensure it scales instantly to meet demand. Warm throughput values are available for all provisioned and on-demand tables and indexes at no cost. Pre-warming your table's throughput incurs a charge. See the Amazon DynamoDB Pricing page for pricing details. See the Developer Guide to learn more.
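A hedged boto3 sketch of checking and raising a table's warm throughput ahead of a peak event; the table name and values are placeholders, and the WarmThroughput fields should be confirmed against the current DynamoDB API reference.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-gov-west-1")

# Check the current warm throughput value for a table (placeholder name).
table = dynamodb.describe_table(TableName="orders")["Table"]
print(table.get("WarmThroughput"))

# Pre-warm the table ahead of a peak event; values are placeholders and
# raising warm throughput above its current value incurs a charge.
dynamodb.update_table(
    TableName="orders",
    WarmThroughput={
        "ReadUnitsPerSecond": 150000,
        "WriteUnitsPerSecond": 50000,
    },
)
```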
Amazon EC2 introduces provisioning control for On-Demand Capacity Reservations in the AWS GovCloud (US) Regions
Published Date: 2025-01-22 18:00:00
Amazon EC2 introduces new capabilities that make it easy for customers to target instance launches on their On-Demand Capacity Reservations (ODCRs). On-Demand Capacity Reservations help you reserve compute capacity for your workloads in a specified Availability Zone for any duration. You can now ensure instance launches are fulfilled exclusively by ODCRs, or prefer unutilized ODCRs before falling back to On-Demand capacity. To get started, you can specify your capacity reservation preferences for your EC2 Auto Scaling groups via the AWS Console or the AWS CLI. These preferences can also be configured using EC2 RunInstances API calls. These features are available in both of the AWS GovCloud (US) Regions. To learn more, see the Capacity Reservations user guide and EC2 Auto Scaling user guide.
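As a sketch, the boto3 call below asks RunInstances to launch only into matching Capacity Reservations; the "capacity-reservations-only" preference value is an assumption to verify against the EC2 documentation, and the AMI and instance type are placeholders.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-gov-west-1")

# Launch only if an open On-Demand Capacity Reservation can fulfil the request.
# The preference value below is an assumption; AMI and type are placeholders.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.xlarge",
    MinCount=1,
    MaxCount=1,
    CapacityReservationSpecification={
        "CapacityReservationPreference": "capacity-reservations-only"
    },
)
```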
Amazon Redshift announces support for History Mode for zero-ETL integrations
Published Date: 2025-01-22 18:00:00
Today, Amazon Redshift announces the launch of history mode for zero-ETL integrations. This new feature enables you to build Type 2 Slowly Changing Dimension (SCD 2) tables on your historical data from databases, out-of-the-box in Amazon Redshift, without writing any code. History mode simplifies the process of tracking and analyzing historical data changes, allowing you to gain valuable insights from your data's evolution over time. With history mode, you can easily run advanced analytics on historical data, build lookback reports, and perform trend analysis across multiple zero-ETL data sources, including Amazon DynamoDB, Amazon RDS for MySQL, Amazon Aurora MySQL, and Amazon Aurora PostgreSQL. By preserving the complete history of data changes without maintaining duplicate copies across data sources, history mode helps organizations meet data storage requirements while significantly reducing storage needs and operational costs. History mode is available for both existing and new integrations. You can selectively enable historical tracking for specific tables within your integration for enhanced flexibility in your data analysis. To learn more and get started with zero-ETL integration, visit the getting started guides for Amazon Redshift. For more information on history mode and its benefits, visit the documentation.
Amazon Connect agent workspace now supports audio optimization for Citrix and Amazon WorkSpaces virtual desktops
Published Date: 2025-01-21 21:50:00
Amazon Connect agent workspace now supports the ability to redirect audio from Citrix and Amazon WorkSpaces Virtual Desktop Infrastructure (VDI) environments to a customer service agent’s local device. Audio redirection improves voice quality and reduces latency for voice calls handled on virtual desktops, providing a better experience for both end customers and agents. For region availability, please see the availability of Amazon Connect features by Region. To learn more and get started, visit the Amazon Connect agent workspace webpage or see the help documentation.
Amazon EventBridge announces direct delivery to cross-account targets
Published Date: 2025-01-21 20:00:00
Amazon EventBridge Event Bus now allows you to deliver events directly to AWS services in another account. This feature enables you to use multiple accounts to improve security and streamline business processes while reducing the overall cost and complexity of your architecture. Amazon EventBridge Event Bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. This launch allows you to directly target services in another account, without the need for additional infrastructure such as an intermediary EventBridge Event Bus or Lambda function, simplifying your architecture and reducing cost. For example, you can now route events from your EventBridge Event Bus directly to a different team's SQS queue in a different account. The team receiving events does not need to learn about or maintain EventBridge resources and simply needs to grant IAM permissions to provide access to the queue. Events can be delivered cross-account to EventBridge targets that support resource-based IAM policies such as Amazon SQS, AWS Lambda, Amazon Kinesis Data Streams, Amazon SNS, and Amazon API Gateway. Direct delivery to cross-account targets is now available in all commercial AWS Regions. To learn more, please read our blog post or visit our documentation. Pricing information is available on the EventBridge pricing page.
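A hedged boto3 sketch of routing a rule directly to an SQS queue in another account is shown below; the rule, bus, queue names, and account IDs are placeholders, and the receiving account still needs a queue policy that allows EventBridge to send messages for this rule.

```python
import boto3

events = boto3.client("events", region_name="us-east-1")

# Route matching events straight to a queue owned by another account.
# ARNs and names are placeholders; the queue's resource policy in account
# 222233334444 must allow events.amazonaws.com to send messages for this rule.
events.put_rule(
    Name="order-events-to-fulfillment-team",
    EventBusName="default",
    EventPattern='{"source": ["my.orders.service"]}',
)
events.put_targets(
    Rule="order-events-to-fulfillment-team",
    EventBusName="default",
    Targets=[
        {
            "Id": "fulfillment-queue",
            "Arn": "arn:aws:sqs:us-east-1:222233334444:fulfillment-orders",
        }
    ],
)
```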
Amazon Aurora now supports R7g and R7i database instances in Asia Pacific (Malaysia) Region
Published Date: 2025-01-21 20:00:00
AWS Graviton3-based R7g database instances as well as R7i database instances are now available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the Asia Pacific (Malaysia) Region. AWS Graviton3 instances provide up to 30% performance improvement and up to 20% price/performance improvement over Graviton2 instances for Amazon Aurora, depending on the database engine version and workload. R7i instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. You can spin up an R7g or R7i database instance in the Amazon RDS Management Console or using the AWS CLI. Upgrading a database instance to either option requires a simple instance type modification. For more details, refer to the Aurora documentation. Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
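For example, a minimal boto3 sketch of moving an existing Aurora instance in the Malaysia Region to a Graviton3-based class; the identifier is a placeholder, and applying immediately causes a brief interruption while the instance class changes.

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-5")  # Asia Pacific (Malaysia)

# Move an existing Aurora instance to a Graviton3-based class.
# The identifier is a placeholder; expect a short interruption when it applies.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-mysql-writer-1",
    DBInstanceClass="db.r7g.2xlarge",
    ApplyImmediately=True,
)
```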
Announcing high-throughput mode for Amazon SNS FIFO Topics
Published Date: 2025-01-21 18:10:00
Amazon SNS now supports high-throughput mode for SNS FIFO topics, with default throughput matching SNS standard topics across all regions. When you enable high-throughput mode, SNS FIFO topics will maintain order within a message group, while reducing the de-duplication scope to the message-group level. With this change, you can leverage up to 30K messages per second (MPS) per account by default in the US East (N. Virginia) Region, and 9K MPS per account in the US West (Oregon) and Europe (Ireland) Regions, and request quota increases for additional throughput in any region. Amazon SNS FIFO topics provide message ordering, message grouping, and de-duplication when delivering to Amazon SQS queues. By default, SNS FIFO topics provide 300 MPS per message group ID, 3K MPS per topic, and topic-level de-duplication. To get higher throughput, you can distribute your messages across message groups and enable high-throughput mode by setting the FifoThroughputScope topic attribute to MessageGroup. We have now increased default limits for SNS FIFO topics across all commercial and the AWS GovCloud (US) Regions. To get started, see the Amazon SNS FIFO topics documentation.
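A minimal boto3 sketch of enabling high-throughput mode on an existing FIFO topic and publishing across message groups; the topic ARN and message contents are placeholders.

```python
import boto3

sns = boto3.client("sns", region_name="us-east-1")

# Switch the de-duplication scope to the message-group level to unlock
# high-throughput mode; the topic ARN is a placeholder.
sns.set_topic_attributes(
    TopicArn="arn:aws:sns:us-east-1:111122223333:orders.fifo",
    AttributeName="FifoThroughputScope",
    AttributeValue="MessageGroup",
)

# Spread publishes across message group IDs to benefit from the higher limits.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:orders.fifo",
    Message='{"orderId": "1234"}',
    MessageGroupId="customer-42",
    MessageDeduplicationId="order-1234-created",
)
```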
Amazon Neptune now supports open-source GraphRAG toolkit
Published Date: 2025-01-21 18:00:00
Today, we are announcing support for the open-source GraphRAG Toolkit, a new capability that enhances generative AI applications by providing more comprehensive, relevant and explainable responses using RAG techniques combined with graph data. The toolkit provides an open-source framework for automating the construction of a graph from unstructured data, and composing question-answering strategies that query this graph when answering user questions. Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, financial analysts can ask a financial analysis chatbot for the sales forecast of a manufacturing company. Developers building generative AI applications can enable GraphRAG via this new open-source Python toolkit by specifying their data sources and choosing Amazon Neptune Database or Neptune Analytics as their graph store and Amazon OpenSearch Serverless as the vector store. This will automatically generate and store vector embeddings in the selected vector store, along with a graph representation of entities and their relationships in the selected graph store. The GraphRAG Toolkit is an open-source project. Its code base is open for inspection, modification, and extension, and is therefore highly adaptable for specific or niche requirements. With its initial release, the toolkit provides graph store implementations for both Neptune Analytics and Neptune Database, and vector store implementations for Neptune Analytics and OpenSearch Serverless, and it uses FMs hosted in Amazon Bedrock. To learn more, visit the User Guide.
Amazon RDS adds Oracle Database R6i SE2 License-Included option in additional regions
Published Date: 2025-01-21 18:00:00
Amazon Relational Database Service (Amazon RDS) for Oracle now offers Oracle Database Standard Edition 2 (SE2) with the License-Included (LI) purchase option in additional AWS Regions for the R6i instance class. RDS for Oracle R6i LI instances are now available in Asia Pacific (Malaysia) and Canada West (Calgary). In the LI service model, you don’t need to separately purchase Oracle licenses. Amazon RDS for Oracle LI pricing includes the software license, the underlying hardware resources, and all database management capabilities. Simply launch an Oracle SE2 instance in the AWS Management Console or using the AWS CLI and specify the License-Included option. Configuration details for available instance types can be found on the Amazon RDS for Oracle Instance Types page. Amazon RDS for Oracle allows you to set up, operate, and scale Oracle database deployments in the cloud. See Amazon RDS for Oracle Pricing for up-to-date pricing and regional availability.
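A hedged boto3 sketch of launching an R6i License-Included SE2 instance in one of the newly supported Regions; the identifier, credentials, and storage size are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="ap-southeast-5")  # Asia Pacific (Malaysia)

# Launch an Oracle SE2 License-Included instance on the R6i class.
# Identifiers, credentials, and storage size are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="oracle-se2-li",
    Engine="oracle-se2",
    LicenseModel="license-included",
    DBInstanceClass="db.r6i.large",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="<strong-password>",
)
```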
Amazon Corretto January 2025 quarterly updates
Published Date: 2025-01-21 18:00:00
On Jan 21, 2025, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature Release (FR) versions of OpenJDK. Corretto 23.0.2, 21.0.6, 17.0.14, 11.0.26, and 8u442 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Click on the Corretto home page to download Corretto 8, Corretto 11, Corretto 17, Corretto 21, or Corretto 23. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo. Feedback is welcome!
AWS Backup is now available in AWS Mexico (Central)
Published Date: 2025-01-21 18:00:00
Today, we are announcing the availability of AWS Backup in the Mexico (Central) Region. AWS Backup is a fully-managed, policy-driven service that allows you to centrally automate data protection across multiple AWS services spanning compute, storage, and databases. Using AWS Backup, you can centrally create and manage backups of your application data, protect your data from inadvertent or malicious actions with immutable recovery points and vaults, and restore your data in the event of a data loss incident. You can get started with AWS Backup using the AWS Backup console, SDKs, or CLI by creating a data protection policy and then assigning AWS resources to it using tags or Resource IDs. For more information on the features available in the Mexico (Central) Region, visit the AWS Backup product page and documentation. To learn about the Regional availability of AWS Backup, see the AWS Regional Services List.
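A hedged boto3 sketch of a daily backup plan with a tag-based resource assignment in the new Region; the plan name, vault, IAM role ARN, and tag values are placeholders, and the vault must already exist.

```python
import boto3

backup = boto3.client("backup", region_name="mx-central-1")  # Mexico (Central)

# Create a daily backup plan; the vault name is a placeholder and must exist.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-protection",
        "Rules": [
            {
                "RuleName": "daily-0300-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 3 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

# Assign resources by tag; the IAM role ARN and tag values are placeholders.
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::111122223333:role/aws-backup-default-role",
        "ListOfTags": [
            {
                "ConditionType": "STRINGEQUALS",
                "ConditionKey": "backup",
                "ConditionValue": "daily",
            }
        ],
    },
)
```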
Amazon Redshift introduces new SQL features for zero-ETL integrations
Published Date: 2025-01-21 18:00:00
Today, Amazon Redshift announced the launch of three new SQL features for zero-ETL integrations: QUERY_ALL_STATES, TRUNCATECOLUMNS, and ACCEPTINVCHARS. Zero-ETL integrations enable you to break down data silos in your organization and run timely analytics and machine learning (ML) on the data from your databases. With the launch of these new features, Amazon Redshift further enhances the functionality and reliability of zero-ETL integrations, allowing customers to work more efficiently with their data while maintaining data integrity. The new SQL features provide significant benefits and further enhance the experience of using zero-ETL integrations. QUERY_ALL_STATES allows you to query tables in all states, including during updates, ensuring continuous data availability. TRUNCATECOLUMNS automatically truncates VARCHAR data that exceeds Amazon Redshift's length limit, preventing replication errors and ensuring smoother data ingestion. ACCEPTINVCHARS enables you to replace invalid UTF-8 characters with a specified character of your choice, which is particularly useful when dealing with data from various sources that may contain non-standard characters. You can modify the existing integrations or create new ones using these features. To learn more and get started with zero-ETL integration, visit the getting started guides for Amazon Redshift. To learn more about these features, see the documentation.