Week 42 (14 Oct - 20 Oct)
Ankur Patel
3x AWS certified | AWS Community Builder | Cloud Enabler and Practitioner | Solutions Architect | FullStack | DevOps | DSML | 6x Sisense certified | Blogger | Photographer & Traveller
AWS Marketplace now supports notifications for private marketplace
Published Date: 2024-10-18 17:00:00
Today, AWS Marketplace announces the general availability of private marketplace notifications, a new feature that streamlines the product request approval process for private marketplace customers. Using private marketplace, administrators can create customized catalogs of approved products from AWS Marketplace so that their organization's AWS accounts can purchase preselected, vetted software. With this launch, administrators and users receive Amazon EventBridge events when a user requests a product and when a request is approved or declined, simplifying the product request review and approval workflow. Customers can configure email notifications for these events using the AWS User Notifications console, and can also use the product request events to initiate actions such as approval workflows within procurement tools. This improved notification experience ensures timely updates for all stakeholders, expediting approval and procurement in AWS Marketplace. To learn more about private marketplace notifications, visit the AWS Private Marketplace documentation. To get started setting up private marketplace user notifications, go to AWS User Notifications.
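As a sketch of the subscription side, an EventBridge rule for these events might look like the following; the source and detail-type strings are assumptions, since the announcement does not spell them out:

```python
import json

# A sketch of an EventBridge rule pattern for private marketplace product
# request events. The "source" and "detail-type" values below are
# assumptions; check the AWS Private Marketplace documentation for the
# exact names emitted by the service.
event_pattern = {
    "source": ["aws.marketplace"],
    "detail-type": [
        "Private Marketplace Product Request Created",
        "Private Marketplace Product Request Approved",
        "Private Marketplace Product Request Declined",
    ],
}

# With boto3 installed and credentials configured, the rule could be
# created like this:
#   import boto3
#   boto3.client("events").put_rule(
#       Name="pmp-request-notifications",
#       EventPattern=json.dumps(event_pattern),
#   )
print(json.dumps(event_pattern, indent=2))
```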
Amazon DataZone introduces new designations in projects for members to perform specific tasks
Published Date: 2024-10-18 17:00:00
Amazon DataZone launches new project designations that allow customers to configure project members to perform specific tasks while collaborating with other members in a project. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls. With the new designations, team members can collaborate in a project while performing the tasks defined by the designation assigned to them by the project owners. With this launch, a member of a project can be a consumer, a steward, or a viewer. Based on the assigned designation, a member can browse and subscribe to assets in Amazon DataZone’s catalog as a consumer, approve or reject subscription requests to assets in the owning project as a steward, or simply browse and view the assets subscribed in the project as a viewer. With these defined activities for project members, Amazon DataZone establishes governance controls while encouraging collaboration. Support for Amazon DataZone’s fine-grained project roles is available in all AWS Regions where Amazon DataZone is available. To learn more, visit Amazon DataZone and get started using the guide in the documentation.
MemoryDB is now available in the AWS GovCloud (US) Regions
Published Date: 2024-10-18 17:00:00
Today, AWS announces the availability of Amazon MemoryDB in the AWS GovCloud (US) Regions. Amazon MemoryDB is a fully managed, Valkey- and Redis OSS-compatible database for in-memory performance and Multi-AZ durability. Customers in the AWS GovCloud (US) Regions can now use MemoryDB as a primary database for use cases that require ultra-fast performance and durable storage, such as payment card analytics, message streaming between microservices, and IoT event processing. With Amazon MemoryDB, all of your data is stored in memory, which enables you to achieve microsecond read and single-digit millisecond write latency and high throughput. Amazon MemoryDB also stores data durably across multiple Availability Zones (AZs) using a Multi-AZ transactional log to enable fast failover, database recovery, and node restarts. Delivering both in-memory performance and Multi-AZ durability, Amazon MemoryDB can be used as a high-performance primary database for your microservices applications, eliminating the need to separately manage both a cache and a durable database. To get started, you can create an Amazon MemoryDB cluster in minutes through the AWS Management Console, AWS Command Line Interface (CLI), or AWS Software Development Kit (SDK). To learn more, visit the Amazon MemoryDB product page or documentation.
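A minimal cluster-creation sketch with boto3; the node type, ACL name, and shard counts below are illustrative placeholders, not recommendations:

```python
import json

# Minimal parameter set for creating a MemoryDB cluster; all values here
# are illustrative placeholders.
params = {
    "ClusterName": "demo-cluster",
    "NodeType": "db.r7g.large",
    "ACLName": "open-access",
    "NumShards": 1,
    "NumReplicasPerShard": 1,  # replicas in other AZs back Multi-AZ durability
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("memorydb", region_name="us-gov-west-1").create_cluster(**params)
print(json.dumps(params, indent=2))
```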
Amazon QuickSight expands integration with Amazon EventBridge
Published Date: 2024-10-18 17:00:00
Amazon QuickSight now supports more events through Amazon EventBridge, an update to the previously launched integration with Amazon EventBridge. By subscribing to QuickSight events in EventBridge, you can automate workflows such as continuous deployment and backups. These events are delivered to EventBridge in near real time. Developers can write simple rules to indicate which events are of interest to them and what actions to take when an event matches a rule. To see the full list of supported events, click here. The QuickSight integration with EventBridge is available with the Amazon QuickSight Enterprise Edition in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), China (Beijing), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
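A hedged sketch of wiring QuickSight events to an automation target such as a Lambda function; the function ARN is a placeholder, and the pattern simply matches all events from the aws.quicksight source:

```python
import json

# Rule pattern that matches every QuickSight event; narrow it with a
# "detail-type" list once you know which events you care about.
event_pattern = {"source": ["aws.quicksight"]}

# Placeholder Lambda target for the rule (e.g., a backup handler).
target = {
    "Id": "backup-handler",
    "Arn": "arn:aws:lambda:us-east-1:111122223333:function:quicksight-backup",
}

# With boto3 installed and credentials configured:
#   import boto3
#   events = boto3.client("events")
#   events.put_rule(Name="quicksight-events",
#                   EventPattern=json.dumps(event_pattern))
#   events.put_targets(Rule="quicksight-events", Targets=[target])
print(json.dumps(event_pattern))
```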
Amazon Bedrock Model Evaluation now supports evaluating custom model import models
Published Date: 2024-10-18 17:00:00
Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of automatic evaluation and human evaluation. You can use automatic evaluation with predefined algorithms for metrics such as accuracy, robustness, and toxicity. For subjective and custom metrics, such as friendliness, style, and alignment to brand voice, you can set up a human evaluation workflow with a few clicks. Human evaluation workflows can leverage your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets, or you can bring your own datasets. Now, customers can also evaluate models they have imported to Amazon Bedrock through the Custom Model Import feature. This allows customers to complete the cycle of selecting a base model, customizing it, evaluating it, and customizing it again if needed, or continuing to production if they are satisfied with the evaluation outcome. To evaluate an imported model, simply select the custom model from the list of models in the model selector tool when creating an evaluation job. Model Evaluation on Amazon Bedrock is now generally available in these commercial regions and the AWS GovCloud (US-West) Region. To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock developer experience web page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.
QuickSight Reporting now supports triggering scheduled reports via API
Published Date: 2024-10-18 17:00:00
QuickSight Reporting has expanded its event-driven reporting capabilities by adding the ability to distribute pixel-perfect reports and dashboard reports via email using APIs. Using the AWS SDK, developers can invoke the StartDashboardSnapshotJobSchedule API to run a report, which follows the configured scheduled report settings, including export type (PDF, CSV, Excel, etc.) and email setup (subject line, body text, and attachment settings). The developer needs to obtain the schedule ID of an existing report schedule in the QuickSight console and provide it as an input to the API. Within the QuickSight console, running an email report on demand was already possible through the Run now option, but that required customers to be logged into QuickSight. With this launch, developers can set up automated workflows that listen for QuickSight events, such as dataset refresh events or signals specific to their business (outside of QuickSight), and then start a report schedule programmatically to deliver to pre-selected readers. Triggering an email report schedule via API is now available in all supported Amazon QuickSight regions - see here for QuickSight regional endpoints. For more on how to set this up, go to our documentation.
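A minimal sketch of the API call, assuming an existing schedule whose ID was copied from the QuickSight console; all IDs below are placeholders:

```python
import json

# Inputs for triggering an existing report schedule on demand. The account,
# dashboard, and schedule IDs are placeholders; the schedule ID comes from
# an existing report schedule in the QuickSight console.
params = {
    "AwsAccountId": "111122223333",
    "DashboardId": "my-dashboard-id",
    "ScheduleId": "my-schedule-id",
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("quicksight").start_dashboard_snapshot_job_schedule(**params)
print(json.dumps(params, indent=2))
```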
Amazon QuickSight now supports programmatic export and import of shared folders
Published Date: 2024-10-18 17:00:00
Amazon QuickSight now supports programmatic export and import of shared folders, an update to the previously launched StartAssetBundleExportJob and StartAssetBundleImportJob APIs. This enables you to back up, restore, continuously replicate, and migrate QuickSight folders along with their member assets and subfolders. With the earlier version of these APIs, you had to manage folder deployment separately. To learn more, click here. The QuickSight export and import of shared folders is available with the Amazon QuickSight Enterprise Edition in all supported Amazon QuickSight regions - US East (Ohio and N. Virginia), US West (Oregon), Africa (Cape Town), Asia Pacific (Jakarta, Mumbai, Seoul, Singapore, Sydney and Tokyo), Canada (Central), China (Beijing), Europe (Frankfurt, Ireland, London, Milan, Paris, Stockholm, Zurich), South America (São Paulo) and AWS GovCloud (US-West).
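A sketch of exporting a shared folder with the updated API; the account ID, folder ARN, and job ID are placeholders:

```python
import json

# Parameters for exporting a shared folder as an asset bundle; all IDs and
# ARNs here are illustrative placeholders.
params = {
    "AwsAccountId": "111122223333",
    "AssetBundleExportJobId": "folder-backup-001",
    "ResourceArns": [
        "arn:aws:quicksight:us-east-1:111122223333:folder/my-folder-id"
    ],
    "ExportFormat": "QUICKSIGHT_JSON",
    "IncludeAllDependencies": True,  # pull in member assets the folder references
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("quicksight").start_asset_bundle_export_job(**params)
print(json.dumps(params, indent=2))
```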
AWS Data Exchange now provides APIs for data grants, enabling programmatic data sharing
Published Date: 2024-10-18 17:00:00
Today, AWS announces the availability of APIs for data grants in AWS Data Exchange, a set of common application programming interfaces (APIs) that allow you to programmatically grant time-bound, read-only data access to any other AWS account. Now, you can automate the secure exchange of data across AWS accounts using APIs. As a data host on AWS, you can programmatically create data grants to share with any other AWS account, with either a predefined duration for the data grant to remain live, or set to run in perpetuity. The recipient can then accept the data grant through APIs and access the read-only data. Data grants work with all five AWS Data Exchange supported delivery types: Data Files, Amazon S3, Amazon Redshift, AWS Lake Formation (Preview), and Amazon API Gateway. APIs for data grants in AWS Data Exchange are available in all AWS Regions where AWS Data Exchange is available. To learn more about APIs for data grants in AWS Data Exchange, visit the AWS Data Exchange API Reference. For an overview of AWS Data Exchange, visit the AWS Data Exchange User Guide.
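A sketch of creating a time-bound data grant; the parameter names are assumptions based on the AWS Data Exchange API reference, and the IDs are placeholders:

```python
import json
from datetime import datetime, timedelta, timezone

# Parameters for a one-year, account-scoped data grant. The grantee account
# and data set IDs are placeholders; parameter names are assumptions to be
# checked against the AWS Data Exchange API reference.
params = {
    "Name": "demo-grant",
    "GranteePrincipal": "111122223333",        # recipient AWS account ID
    "SourceDataSetId": "example-data-set-id",  # an owned data set
    "GrantDistributionScope": "NONE",
    # Omit EndsAt for a grant that runs in perpetuity; serialized to a
    # string here only so the dict can be printed as JSON.
    "EndsAt": (datetime.now(timezone.utc) + timedelta(days=365)).isoformat(),
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("dataexchange").create_data_grant(**params)
print(json.dumps(params, indent=2))
```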
Amazon EC2 High Memory instances now available in Africa (Cape Town) region
Published Date: 2024-10-18 17:00:00
Starting today, Amazon EC2 High Memory instances with 6TB of memory (u-6tb1.56xlarge and u-6tb1.112xlarge) are available in the Africa (Cape Town) region. Customers can start using these new High Memory instances with On-Demand and Savings Plan purchase options. Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory. For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS, on what this launch means for our SAP customers, you can read his launch blog.
Amazon DataZone launches support for AWS IAM Identity Center account instance
Published Date: 2024-10-17 21:11:00
Today, Amazon DataZone announced support for account instances of AWS IAM Identity Center. Amazon DataZone administrators can now set up single sign-on (SSO) users through AWS IAM Identity Center without needing an organization configured through AWS Organizations. As an Amazon DataZone administrator, you can now enable AWS IAM Identity Center for a single AWS account instead of the entire AWS organization. When creating an Amazon DataZone domain, choose to enable AWS IAM Identity Center for a single AWS account. With the account instance option, decide whether to allow all authorized AWS IAM Identity Center users and groups access to the domain or to assign them explicitly. For example, an AWS account administrator who doesn't have access to the management account for their organization and needs to set up SSO access can provide access to the Amazon DataZone portal for individual users or groups in that AWS account. Amazon DataZone support for AWS IAM Identity Center account instances is available in all AWS Regions where Amazon DataZone is available. To learn more, visit Amazon DataZone, and get started with the AWS IAM Identity Center account instance documentation.
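A sketch of creating a domain with Identity Center single sign-on enabled; the field names follow the Amazon DataZone API reference, and the role ARN is a placeholder:

```python
import json

# Parameters for a DataZone domain with IAM Identity Center SSO. The role
# ARN is a placeholder; the singleSignOn field names follow the DataZone
# API reference and should be verified against it.
params = {
    "name": "demo-domain",
    "domainExecutionRole": "arn:aws:iam::111122223333:role/DataZoneExecutionRole",
    "singleSignOn": {
        "type": "IAM_IDC",
        "userAssignment": "AUTOMATIC",  # or "MANUAL" to assign users explicitly
    },
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("datazone").create_domain(**params)
print(json.dumps(params, indent=2))
```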
Amazon DynamoDB announces user experience enhancements to organize your tables
Published Date: 2024-10-17 18:40:00
Amazon DynamoDB is excited to announce enhancements to the DynamoDB console that enable customers to easily find frequently used tables. Now, customers can favorite their tables in the console’s tables page for quicker table access. Customers can click the favorites icon to view their favorited tables in the console’s tables page. With this update, customers have a faster and more efficient way to find and work with tables that they often monitor, manage, and explore. The favorite tables console experience is now available in all AWS Regions at no additional cost. Customers can start using favorite tables immediately. Get started with creating a DynamoDB table from the AWS Management Console .
Amazon Aurora PostgreSQL now supports local write forwarding
Published Date: 2024-10-17 18:00:00
Amazon Aurora PostgreSQL-Compatible Edition now lets you forward write requests from Aurora read replicas to the writer instance, making it simpler to scale read workloads that require read-after-write consistency. With this launch, local write forwarding is now available for both Aurora MySQL and Aurora PostgreSQL. With write forwarding, your applications can simply send both read and write requests to a read replica, and Aurora takes care of forwarding the write requests to the writer instance in your cluster. This way your applications can scale read workloads on Aurora Replicas without maintaining complex application logic to separate reads from writes. You can also select from different consistency levels to meet your application's read-after-write consistency needs. Local write forwarding is supported on Aurora PostgreSQL versions 14.13, 15.8, 16.4, and higher. You can enable the feature using the AWS Management Console, Command Line Interface (CLI), or API by turning on the "local write forwarding" option. See our documentation to learn more. Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started with Amazon Aurora, take a look at our getting started page.
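Enabling the feature programmatically can be sketched as a single cluster modification; the cluster identifier is a placeholder:

```python
import json

# Turn on local write forwarding for an existing Aurora PostgreSQL cluster;
# the cluster identifier is a placeholder.
params = {
    "DBClusterIdentifier": "my-aurora-pg-cluster",
    "EnableLocalWriteForwarding": True,
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("rds").modify_db_cluster(**params)
print(json.dumps(params, indent=2))
```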
Amazon RDS Multi-AZ deployment with two readable standbys now supports AWS IAM database authentication
Published Date: 2024-10-17 18:00:00
Amazon Relational Database Service (Amazon RDS) Multi-AZ deployments with two readable standbys now support using AWS Identity and Access Management (IAM) for database authentication. With IAM support, you can now centrally manage access to your RDS Multi-AZ deployments with two readable standbys along with other RDS deployments, instead of managing access individually. In addition, AWS IAM eliminates the need to store password-based login credentials in the database. The Multi-AZ deployment option with two readable standbys is ideal when your workloads require lower write latency and more read capacity. This deployment option also supports minor version upgrades and system maintenance updates with typically less than one second of downtime when using Amazon RDS Proxy or open-source tools such as the AWS Advanced JDBC Driver, PgBouncer, or ProxySQL. To learn more about IAM authentication support, see settings for creating Multi-AZ DB clusters in the Amazon RDS User Guide. For a full list of the Amazon RDS Multi-AZ with two readable standbys regional availability and supported engine versions, see supported Regions and DB engines for Multi-AZ DB clusters in Amazon RDS in the Amazon RDS User Guide. You can create or update fully managed Amazon RDS Multi-AZ databases with two readable standby instances in the Amazon RDS Management Console.
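The IAM authentication flow can be sketched as follows: the client requests a short-lived token and presents it in place of a password when connecting. Hostname, port, and user name below are placeholders:

```python
# Inputs for generating an IAM database authentication token; all values
# here are illustrative placeholders.
params = {
    "DBHostname": "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
    "Port": 3306,
    "DBUsername": "iam_db_user",
}

# With boto3 installed and credentials configured:
#   import boto3
#   token = boto3.client("rds").generate_db_auth_token(**params)
# The token is then passed as the password in your database driver's
# connect call (over SSL/TLS).
print(params["DBUsername"])
```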
AWS Lambda console now supports real-time log analytics via Amazon CloudWatch Logs Live Tail
Published Date: 2024-10-17 17:00:00
The AWS Lambda console now supports Amazon CloudWatch Logs Live Tail, an interactive log streaming and analytics capability that provides real-time visibility into logs, making it easier to develop and troubleshoot Lambda functions. Customers building serverless applications using Lambda want visibility into the behavior of their Lambda functions in real time. For example, developers want to instantly see the result of their code or configuration changes, and operators want to quickly troubleshoot any critical issues that would prevent the function from operating smoothly. Previously, you had to visit the CloudWatch console to access detailed Lambda function logs or real-time log streams. Now, with Live Tail in the Lambda console, you can view and analyze Lambda logs in real time as they become available. This makes it easier for developers to quickly test and validate code or configuration changes, accelerating the author-test-deploy cycle (also known as the “inner dev loop”) when building applications using Lambda. The Live Tail experience also makes it easier and faster for operators and DevOps teams to detect and debug failures and critical errors in Lambda function code, reducing the mean time to recovery (MTTR) when troubleshooting Lambda function errors. To get started, visit the Lambda console and click the “Open CloudWatch Live Tail” button in the code editor. To learn more, visit the launch blog post and the Lambda developer guide. The Live Tail experience in the Lambda console is available in all commercial AWS Regions where Lambda and CloudWatch Logs are available. For more information, see the AWS Region table.
Ubuntu Pro for EC2 Spot Instances
Published Date: 2024-10-17 17:00:00
Starting today, you can launch Amazon EC2 Spot Instances using Ubuntu Pro-based Amazon Machine Images (AMIs). You can now easily deploy Ubuntu Pro Spot instances and get five additional years of security updates from Canonical. You will be charged on a per-second basis for Ubuntu Pro EC2 AMI instances. For any new Ubuntu Pro EC2 AMI deployments, you will now see Ubuntu Pro charges in the Elastic Compute Cloud section of your AWS bill. Amazon EC2 Spot Instances let you take advantage of unused EC2 capacity available in the AWS cloud. Spot Instances are available at up to a 90% discount compared to On-Demand prices. You can use Spot Instances for various stateless, fault-tolerant, or flexible applications such as big data, containerized workloads, CI/CD, web servers, high-performance computing (HPC), and other test & development workloads. Spot Instances are easy to launch, scale, and manage through AWS services like Amazon ECS and Amazon EMR, or integrated third parties like Terraform and Jenkins. Spot Instances can be launched via the RunInstances API with a single additional parameter. You can also provision compute capacity across Spot Instances, RIs, and On-Demand instances to optimize performance and cost using the EC2 Fleet and Auto Scaling Groups APIs. To learn more about Amazon EC2 Spot Instances, visit the Amazon EC2 Spot page or technical documentation.
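The single additional parameter mentioned above can be sketched like this; the AMI ID is a placeholder for an Ubuntu Pro AMI:

```python
import json

# A normal RunInstances request plus InstanceMarketOptions, which is the
# one extra parameter needed to request Spot capacity. The AMI ID is a
# placeholder for an Ubuntu Pro AMI.
params = {
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "t3.micro",
    "MinCount": 1,
    "MaxCount": 1,
    "InstanceMarketOptions": {"MarketType": "spot"},
}

# With boto3 installed and credentials configured:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
print(json.dumps(params, indent=2))
```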
AWS Lambda console now surfaces key function insights via built-in Amazon CloudWatch Metrics Insights dashboard
Published Date: 2024-10-17 17:00:00
The AWS Lambda console now surfaces key metrics about Lambda functions in your AWS account via a built-in Amazon CloudWatch Metrics Insights dashboard, enabling you to easily identify and troubleshoot the source of errors or performance issues. To efficiently operate distributed serverless applications built using Lambda, it is crucial to easily identify the source of errors or performance anomalies, such as a spike in critical metrics like errors or invocation duration for Lambda functions in your AWS account. Previously, you had to navigate to the CloudWatch console and query metrics or create custom dashboards, which caused context switching and added friction for operators and DevOps teams to effectively monitor and optimize Lambda-based applications. Now, the Lambda console features a new built-in dashboard, which leverages the CloudWatch Metrics Insights capability and provides you with instant visibility into the following critical insights: the most-invoked Lambda functions, the functions with the highest number of errors, and the functions taking the longest to run. This reduces friction due to context switching and enables your operator teams to easily identify and fix the source of errors or performance anomalies without leaving the Lambda console. To get started, simply navigate to the "Dashboard" page in the Lambda console to access the insights surfaced by the Metrics Insights dashboard. To learn more, visit the launch blog post. The Metrics Insights dashboard in the Lambda console is available in all commercial AWS Regions where Lambda and CloudWatch metrics are available, including the AWS GovCloud (US) Regions, at no additional cost. For more information, see the AWS Region table.
QuickSight now supports subfolders in restricted folders to enable governed data sharing
Published Date: 2024-10-17 17:00:00
Amazon QuickSight now supports subfolders in restricted folders for asset organization and permissions management. QuickSight assets created in restricted folders and subfolders cannot be removed from the folder tree, creating a data sharing boundary. Enterprise administrators can deploy restricted folders and subfolders to govern sharing of data in business intelligence assets across their organization. With this launch, users with the folder Contributor permission can create content in restricted folders and subfolders but cannot manage permissions on folders and assets contained in the restricted folder. Additionally, administrators can now use the QuickSight RestoreAnalysis API to restore deleted analyses into a restricted folder. Administrators can set Viewer and Contributor permissions for users and groups on folders and subfolders, which enables a subset of content to be shared with specific users. For example, data sources can be created in a restricted subfolder with Viewer permissions for analysts. Analysts can use these data sources to create datasets, topics, and analyses in another subfolder where they have Contributor permissions. Dashboards can then be published in another subfolder where a broader audience of business users has the Viewer permission. Restricted folder subfolders and the RestoreAnalysis to folder API are available in all AWS Regions where Amazon QuickSight is available. To learn more, see Organizing assets into folders for Amazon QuickSight.
Amazon S3 adds new Region and bucket name filtering for the ListBuckets API
Published Date: 2024-10-16 21:55:00
Amazon S3 now supports AWS Region and bucket name filters for the ListBuckets API. In addition, paginated ListBuckets requests now return your S3 general purpose buckets and their corresponding AWS Regions in the response, helping you simplify applications that need to determine bucket locations across multiple Regions. To get started, specify an AWS Region like “us-east-1” as a query parameter on a ListBuckets request to list your buckets in a particular Region. When using the bucket name filtering query parameter, you can also specify bucket name prefixes like "amzn-s3-demo-bucket" to return all of your bucket names that start with "amzn-s3-demo-bucket". These new parameters can help you limit your ListBuckets API response to your desired buckets. The ListBuckets API support for Region and bucket name prefix query parameters is now available in all AWS Regions. You can use the AWS SDK, API, or CLI to list your buckets for a specific AWS Region or prefix. To learn more about the ListBuckets API, visit the documentation.
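A sketch of the new filters with boto3; the parameter names follow the updated API, and the prefix is illustrative:

```python
import json

# New ListBuckets filters: restrict results to one Region and a name
# prefix, with pagination controlled by MaxBuckets. The prefix below is
# illustrative.
params = {
    "BucketRegion": "us-east-1",
    "Prefix": "amzn-s3-demo-bucket",
    "MaxBuckets": 100,
}

# With boto3 installed and credentials configured:
#   import boto3
#   response = boto3.client("s3").list_buckets(**params)
#   for bucket in response["Buckets"]:
#       print(bucket["Name"], bucket.get("BucketRegion"))
print(json.dumps(params, indent=2))
```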
AWS Marketplace enables self-service creation of single AMI product listings for AWS GovCloud (US) Regions
Published Date: 2024-10-16 21:50:00
AWS Marketplace now allows sellers to manage their single Amazon Machine Image (AMI) product availability in the AWS GovCloud (US) Regions through a self-service experience. This makes the listing process easier and faster for AWS Marketplace sellers selling software into the AWS GovCloud (US) Regions. Starting today, eligible AWS Marketplace sellers can create or modify products themselves to make their single AMI products available to customers in the AWS GovCloud (US) Regions. Sellers can choose the us-gov-east-1 and us-gov-west-1 regions when choosing region availability in the AWS Marketplace Management Portal. To get started, sellers must have an AWS GovCloud (US) account and work with the AWS Marketplace Seller Operations team to enable their selling account to list in the AWS GovCloud (US) Regions. Then, sellers can go to the AWS Marketplace Management Portal to create or modify their single AMI products to make them available in the AWS GovCloud (US) Regions. To learn more, review the blog post on how to list in GovCloud here.
AWS Marketplace now supports offers in four new currencies and non-US bank accounts for disbursement
Published Date: 2024-10-16 20:00:00
AWS Marketplace announces support for sellers and channel partners to create contract pricing private offers in four new currencies, and to choose non-US bank accounts for disbursement. These features make it easier for sellers and buyers to do business globally by simplifying funds flow. Sellers can now create private offers with contract pricing in EUR, GBP, JPY, and AUD and receive their disbursements in the offer currency. Additionally, sellers are no longer required to have a US-domiciled bank account. Instead, they can choose to receive payments into one or more bank accounts located in any seller-eligible jurisdiction. For channel partner private offers (CPPO), the seller, channel partner, and buyer must all transact in the same currency. Sellers need to issue a resale authorization in the negotiated currency, and the channel partner then creates the CPPO in that currency. These capabilities help AWS Marketplace sellers achieve an expanded global reach and simplified cash flow management using local bank accounts in local currency. For AWS Marketplace buyers, these features provide the ability to procure software and services in their preferred currency and eliminate foreign exchange risk in invoice amounts. This new functionality is available worldwide for all AWS Marketplace sellers for contract-based private offers. Public offers and private offers with consumption pricing remain in USD only. To get started, sellers need to provide bank accounts with SWIFT codes and associate currency preferences. To learn more, please visit the documentation on local currency offers and disbursements.
Amazon Transcribe now supports streaming transcription in 30 additional languages
Published Date: 2024-10-16 19:30:00
Today, we are excited to announce support for 30 additional languages for streaming audio transcriptions, bringing the total number of supported languages to 54. Amazon Transcribe is an automatic speech recognition (ASR) service that makes it easy for you to add speech-to-text capabilities to your applications. New languages supported with this release include Afrikaans, Amharic, Arabic (Gulf), Arabic (Standard), Basque, Catalan, Croatian, Czech, Danish, Dutch, Farsi, Finnish, Galician, Greek, Hebrew, Indonesian, Latvian, Malay, Norwegian, Polish, Romanian, Russian, Serbian, Slovak, Somali, Swedish, Tagalog, Ukrainian, Vietnamese, and Zulu. These new languages expand the coverage of Amazon Transcribe streaming and enable customers to reach a broader global audience. Live streaming transcription is used across industries in contact center applications, broadcast events, meeting captions, and e-learning. For example, contact centers use transcription to remove the need for note taking and improve agent productivity by providing recommendations for the next best action. Companies also make their live sports events or real-time meetings more accessible with automatic subtitles. In addition, customers who have a large social media presence use Amazon Transcribe to help moderate content and detect inappropriate speech in user-generated content. Amazon Transcribe real-time streaming is available in the following AWS regions: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Seoul), Asia Pacific (Sydney), Asia Pacific (Tokyo), Africa (Cape Town), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), South America (São Paulo), AWS GovCloud (US-East) and AWS GovCloud (US-West). To learn more, visit the Amazon Transcribe documentation or visit the AWS console.
AWS Elastic Beanstalk adds support for Python 3.12
Published Date: 2024-10-16 17:25:00
AWS Elastic Beanstalk adds support for Python 3.12 on AL2023 Beanstalk environments. AWS Elastic Beanstalk is a service that provides the ability to deploy and manage applications in AWS without worrying about the infrastructure that runs those applications. Python 3.12 on AL2023 brings improved error messages, Linux perf profiler support, a faster interpreter, and more usable f-strings. This platform is generally available in commercial regions where Elastic Beanstalk is available, including the AWS GovCloud (US) Regions. For a complete list of regions and service offerings, see AWS Regions. For more information about Python and Linux platforms, see the Elastic Beanstalk developer guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
AWS CloudShell now supports Amazon Q CLI
Published Date: 2024-10-16 17:00:00
Today, we are announcing the integration of Amazon Q CLI into CloudShell, the embedded terminal experience in the AWS Management Console. The command line is used by over thirty million engineers to write, build, run, debug, and deploy software. However, despite how critical it is to the software development process, the command line is challenging to use. Amazon Q CLI allows you to use natural language to generate AWS commands and provides personalized command suggestions, reducing the need to search documentation and boosting productivity. Many of you prefer using a shell interface to interact with cloud resources but often encounter a learning curve with the command syntax. With tens of thousands of command line applications (known as command-line interfaces, or CLIs), it's almost impossible to remember the correct input syntax. The command line's lack of input validation also means that typos can cause unnecessary errors, security risks, and even production outages. It's no wonder that most software engineers find the command line an error-prone and often frustrating experience. Integrating Amazon Q CLI helps bridge this gap by modernizing the command line with features such as personalized command suggestions, inline documentation, and AI natural-language-to-code translation. CloudShell supports Amazon Q CLI in the 24 commercial regions where CloudShell is available. For more information about the AWS Regions where CloudShell is available, see the AWS Region table. To get started, you can open CloudShell from the Console Toolbar on any page of the AWS Management Console and use a trigger such as “q chat” to begin a conversation.
Amazon Bedrock Agents now provides Conversational Builder
Published Date: 2024-10-16 17:00:00
Today, AWS announces the general availability of Conversational Builder for Amazon Bedrock Agents, which provides a chat interface for building your Bedrock Agents. With the Conversational Builder, you can chat with an assistant that guides you through building an agent and creates your agent based on natural language instructions. The Conversational Builder is available through the Amazon Bedrock Agents management console. Conversational Builder is an alternative to the traditional manual configuration methods for building an agent, and it reduces the time your agent creation and prototyping process takes. You can describe what you want your agent to do, e.g. “build a customer service agent that answers questions on shopping”, and Conversational Builder will automatically generate the necessary configurations for you to test out your agent. This new feature is available in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Paris), and Europe (Frankfurt) where Amazon Bedrock Agents is available.
Amazon Managed Service for Prometheus now supports configuring a minimum firing period for alerts
Published Date: 2024-10-16 17:00:00
Amazon Managed Service for Prometheus now supports the ability to configure the minimum duration for which an alert remains active after the condition that triggered the alert is no longer valid. Amazon Managed Service for Prometheus is a fully managed Prometheus-compatible monitoring service that makes it easy to monitor and alarm on operational metrics at scale. Prometheus is a popular Cloud Native Computing Foundation open-source project for monitoring and alerting on metrics from compute environments such as Amazon Elastic Kubernetes Service. Using a minimum firing period enables you to keep alerts in an active state until the problem is fully resolved, regardless of short-term data changes. It also reduces alert noise and prevents alerts from constantly switching between “firing” and “resolved” states. This feature is now available in all AWS regions where Amazon Managed Service for Prometheus is generally available. Check out the Amazon Managed Service for Prometheus user guide for detailed documentation. To learn more about Amazon Managed Service for Prometheus, visit the product page and pricing page.
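Concretely, the minimum firing period maps to the `keep_firing_for` field in a Prometheus alerting rule. A minimal sketch, with the alert name, expression, and durations purely illustrative:

```python
# A minimal Prometheus alerting rule using keep_firing_for, the field behind the
# minimum firing period. Alert name, expression, and durations are illustrative.
rule_file = """
groups:
  - name: example
    rules:
      - alert: HighErrorRate
        expr: rate(http_errors_total[5m]) > 0.05
        for: 2m               # condition must hold this long before the alert fires
        keep_firing_for: 10m  # alert stays "firing" at least this long after resolving
        labels:
          severity: page
"""
print(rule_file)
```

With `keep_firing_for: 10m`, a flapping metric that briefly dips below the threshold no longer toggles the alert between "firing" and "resolved".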
Announcing AWS DMS Serverless support for MongoDB and Amazon DocumentDB as a source
Published Date: 2024-10-16 17:00:00
AWS Database Migration Service Serverless (AWS DMS Serverless) now supports MongoDB and Amazon DocumentDB as data sources. Using AWS DMS Serverless, you can now migrate data from MongoDB and Amazon DocumentDB to a variety of data targets. AWS DMS Serverless now shows MongoDB and Amazon DocumentDB as options when defining endpoints, which can then be used as sources for data migrations. Additional information about AWS DMS Serverless sources can be found in our documentation. To learn more about DMS Serverless, see Working with AWS DMS Serverless. For AWS DMS regional availability, please refer to the AWS Region Table.
Amazon Corretto October, 2024 Quarterly Updates
Published Date: 2024-10-16 17:00:00
On October 15, 2024, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature Release (FR) versions of OpenJDK. Corretto 23.0.1, 21.0.5, 17.0.13, 11.0.25, and 8u432 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. Click on the Corretto home page to download Corretto 8, Corretto 11, Corretto 17, Corretto 21, or Corretto 23. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo. Feedback is welcome!
Amazon EC2 C7i-flex instances are now available in additional AWS Regions
Published Date: 2024-10-15 20:00:00
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances, are available in the Asia Pacific (Seoul) and Europe (Frankfurt) regions. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute intensive workloads. The new instances are powered by the 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i. C7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances. C7i-flex instances are available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Mumbai, Seoul, Singapore, Sydney, Tokyo), and South America (São Paulo). To learn more, visit Amazon EC2 C7i-flex instances.
AWS CodePipeline supports automatic retry on stage failure
Published Date: 2024-10-15 18:00:00
AWS CodePipeline V2 type pipelines introduce the ability to automatically retry a stage if there is a failure in the stage. A stage fails if any action in the stage fails. To use automatic retry, set “Retry” as the result for the on-failure lifecycle event of a stage, and optionally configure the flag to retry the stage from the first action or from the failed actions. When any action in a stage fails, the pipeline execution will be retried in that stage once. Automatic retry can be useful for a stage with actions that can experience transient errors. Instead of failing the pipeline execution, you can automatically retry the pipeline execution in the failed stage. To learn more about automatically retrying a stage on failure in your pipeline, visit our documentation. For more information about AWS CodePipeline, visit our product page. The retry stage feature is available in all regions where AWS CodePipeline is supported.
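As a sketch, the stage-level on-failure condition could look like the following in a V2 pipeline declaration. The field names are an approximation of the schema, not a verified copy of it; check the CodePipeline pipeline structure reference before using them:

```python
# Hypothetical V2 stage declaration enabling automatic retry of failed actions.
stage = {
    "name": "Deploy",
    "actions": [],  # action declarations elided for brevity
    "onFailure": {
        "result": "RETRY",  # retry the stage instead of failing the execution
        "retryConfiguration": {
            "retryMode": "FAILED_ACTIONS",  # or "ALL_ACTIONS" to restart from the first action
        },
    },
}
print(stage["onFailure"]["result"])
```

The two retry modes mirror the announcement: restart the whole stage, or re-run only the actions that failed.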
Amazon RDS now supports M7g and R7g database instances in additional AWS Regions
Published Date: 2024-10-15 17:00:00
Amazon Relational Database Service (Amazon RDS) for PostgreSQL, MySQL, and MariaDB now supports AWS Graviton3-based M7g database instances in Europe (Paris), and R7g database instances in Asia Pacific (Hong Kong) and Europe (Milan). With this regional expansion, Graviton3 database instances are now available for Amazon RDS in 21 regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Asia Pacific (Hong Kong, Hyderabad, Malaysia, Mumbai, Seoul, Singapore, Sydney, Tokyo), Canada (Central), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm), and Middle East (Bahrain). For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page. M7g and R7g database instances are available on Amazon RDS for PostgreSQL version 16.1 and higher, 15.2 and higher, 14.5 and higher, and 13.8 and higher. M7g and R7g database instances are available on Amazon RDS for MySQL version 8.0.28 and higher, and Amazon RDS for MariaDB version 10.11.4 and higher, 10.6.10 and higher, 10.5.18 and higher, and 10.4.27 and higher. For more details on these instances and supported versions for each region, refer to the Amazon RDS User Guide. Get started by creating a fully managed M7g or R7g database instance using the Amazon RDS Management Console.
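In practice, using Graviton3 comes down to choosing a db.m7g.* (or db.r7g.*) instance class on a supported engine version. A boto3 sketch, with all identifiers and sizes illustrative and the actual call left commented out:

```python
# Illustrative parameters for a Graviton3-based RDS PostgreSQL instance.
params = {
    "DBInstanceIdentifier": "mydb-m7g",   # hypothetical instance name
    "DBInstanceClass": "db.m7g.large",    # Graviton3-based general purpose class
    "Engine": "postgres",
    "EngineVersion": "16.1",              # meets the minimum version for M7g/R7g
    "MasterUsername": "dbadmin",
    "ManageMasterUserPassword": True,     # let RDS manage the password in Secrets Manager
    "AllocatedStorage": 100,              # GiB
}
# import boto3
# boto3.client("rds").create_db_instance(**params)
```

Switching an existing instance is the same idea: modify the instance to a db.m7g.* or db.r7g.* class, provided the engine version meets the minimums listed above.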
Finch expands support to Linux, streamlining container development across platforms
Published Date: 2024-10-15 17:00:00
Today, AWS announced the general availability of Linux support for Finch, an open source command line tool that allows developers to build, run, and publish Linux containers. Finch simplifies container development by bundling a minimal native client with a curated selection of open-source components, allowing developers to build and manage containers without the hassle of managing intricate details. With the addition of Linux support, Finch now provides a consistent and streamlined container development experience across all major operating systems. Developers can leverage the same familiar Finch commands to build, run, and publish their containers, whether they are working on Linux, macOS, or Windows. This allows teams to standardize their container workflows and tooling, improving productivity and collaboration. In addition to the expanded platform support, Finch also integrates with the Finch Daemon, which provides a subset of the Docker API specification. The Finch Daemon allows customers who rely on the Docker REST API to continue using it programmatically across all Finch-supported environments. While the Finch Daemon currently covers a core set of Docker APIs, we are actively working with the community to expand its functionality over time. Finch's Linux support is available as RPM packages for Amazon Linux 2 and Amazon Linux 2023, which can be easily installed from the YUM repositories. Users of other Linux distributions can also try out Finch by following the instructions available on the project's website and GitHub repository . To learn more about using Finch on Linux, read the AWS News Blog .
Amazon AppStream 2.0 now supports custom shared network storage
Published Date: 2024-10-15 17:00:00
Amazon AppStream 2.0 now supports custom shared network storage as a new storage option for your Windows AppStream 2.0 users. With the launch of this feature, users can easily access and collaborate on shared files without transferring files manually. The shared network storage is implemented as an SMB (Server Message Block) network drive. When administrators enable and map these SMB network drives, multiple users can access the same data during their AppStream 2.0 sessions. Changes made to the shared files are automatically backed up and synchronized. Additionally, the feature provides the advantage of scalable, shared storage resources, optimizing customer storage usage and efficiency. Centralized management of access controls and permissions can enhance data security in your organization. This feature is available at no additional cost in all the AWS Regions where Amazon AppStream 2.0 is available. Users can connect to AppStream 2.0 through a web browser or a Windows client application to access their shared storage. AppStream 2.0 offers pay-as-you-go pricing. To get started with AppStream 2.0, see Getting Started with Amazon AppStream 2.0. To enable this feature for your users, you must use an AppStream 2.0 image with an AppStream 2.0 agent released on or after September 18, 2024, or an image that uses Managed AppStream 2.0 image updates released on or after September 20, 2024. For more information about the shared storage function, see our documentation.
Amazon RDS now supports 1-click connectivity to EC2 instances in the AWS GovCloud (US) Regions
Published Date: 2024-10-15 17:00:00
Amazon Relational Database Service (Amazon RDS) and Amazon Aurora databases now support 1-click connection to an Amazon Elastic Compute Cloud (Amazon EC2) compute instance during database creation in the AWS GovCloud (US) Regions. When provisioning a database using the Amazon RDS console, you now have the option to select an EC2 instance and, with a single click, establish connectivity between the database and the EC2 instance, following AWS recommended best practices. Amazon RDS automatically sets up your VPC and related network settings during database creation to enable a secure connection between the EC2 instance and the RDS database. This eliminates additional networking tasks such as setting up a VPC, security groups, subnets, and ingress/egress rules manually to establish a connection between your application and database. It improves productivity for new users and application developers, who can now quickly launch a database instance and seamlessly connect to an application on a compute instance within minutes. 1-click connectivity between an Amazon RDS database and an EC2 instance is now available in all commercial AWS Regions and the AWS GovCloud (US) Regions. Learn more about setting up connectivity to a compute resource from your RDS or Aurora database in the Amazon RDS User Guide and the Amazon Aurora User Guide.
AWS announces general availability of Amazon DynamoDB zero-ETL integration with Amazon Redshift
Published Date: 2024-10-15 17:00:00
Amazon DynamoDB zero-ETL integration with Amazon Redshift is now generally available, enabling customers to run high-performance analytics on their DynamoDB data in Amazon Redshift with no impact on production workloads running on DynamoDB. As data is written into a DynamoDB table, it is seamlessly made available in Amazon Redshift, eliminating the need for customers to build and maintain complex data pipelines for performing extract, transform, and load (ETL) operations. You can create a zero-ETL integration on an Amazon Redshift Serverless workgroup or an Amazon Redshift provisioned cluster using RA3 instance types. Zero-ETL integrations help you derive holistic insights across many applications, break down data silos in your organization, and gain significant cost savings and operational efficiencies. Now you can run enhanced analysis on your DynamoDB data with the rich capabilities of Amazon Redshift, such as high performance SQL, built-in ML and Spark integrations, materialized views with automatic and incremental refresh, data sharing, and the ability to join data across multiple data stores and data lakes. Amazon DynamoDB zero-ETL integration with Amazon Redshift is available in commercial regions and the AWS GovCloud (US) Regions. You can create and manage integrations using either the AWS Management Console, the AWS Command Line Interface (CLI), or the Amazon Redshift APIs. To learn more, visit the getting started guides for DynamoDB and Amazon Redshift.
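Creating the integration amounts to pointing a source DynamoDB table at a target Redshift namespace. A boto3 sketch, where the ARNs and names are hypothetical and the API call itself is commented out:

```python
# Illustrative request for a DynamoDB -> Redshift zero-ETL integration.
request = {
    "IntegrationName": "orders-zero-etl",  # hypothetical integration name
    "SourceArn": "arn:aws:dynamodb:us-east-1:111122223333:table/Orders",
    "TargetArn": "arn:aws:redshift-serverless:us-east-1:111122223333:namespace/analytics-ns",
}
# import boto3
# boto3.client("redshift").create_integration(**request)
```

The target can equally be a provisioned RA3 cluster; only the TargetArn changes.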
Amazon RDS for MariaDB now supports MariaDB 11.4 with new password validation options
Published Date: 2024-10-15 17:00:00
Amazon RDS for MariaDB now supports MariaDB major version 11.4, the latest long-term maintenance release from the MariaDB community. Amazon RDS for MariaDB 11.4 now supports the Simple Password Check and Cracklib Password Check plugins for password validation. You can use these plugins together or individually to enforce the security policies appropriate for your organization. The MariaDB 11.4 major version also includes improvements to database-level privileges, replication, and the InnoDB storage engine made by the MariaDB community. Learn more about these community enhancements in the MariaDB 11.4 release notes. You can leverage Amazon RDS Managed Blue/Green deployments to upgrade your databases to RDS for MariaDB 11.4. Learn more about upgrading your database instances, including Managed Blue/Green deployments, in the Amazon RDS User Guide. Amazon RDS for MariaDB 11.4 is now available in all AWS Commercial and the AWS GovCloud (US) Regions. Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MariaDB. Create or update a fully managed Amazon RDS for MariaDB 11.4 database in the Amazon RDS Management Console.
Amazon SageMaker Studio notebooks now support G6e instance types
Published Date: 2024-10-15 17:00:00
We are pleased to announce general availability of Amazon EC2 G6e instances on SageMaker Studio notebooks. Amazon EC2 G6e instances are powered by up to 8 NVIDIA L40S Tensor Core GPUs with 48 GB of memory per GPU and third-generation AMD EPYC processors. G6e instances deliver up to 2.5x better performance compared to EC2 G5 instances. Customers can use G6e instances to interactively test model deployment and for interactive model training use cases such as generative AI fine-tuning. You can use G6e instances to deploy large language models (LLMs) with up to 13B parameters and diffusion models for generating images, video, and audio. Amazon EC2 G6e instances are available for SageMaker Studio notebooks in the AWS US East (N. Virginia and Ohio) and US West (Oregon) regions. Visit the developer guides for instructions on setting up and using JupyterLab and Code Editor applications on SageMaker Studio.
Amazon EFS now supports up to 60 GiB/s (a 2x increase) of read throughput
Published Date: 2024-10-15 17:00:00
Amazon Elastic File System (Amazon EFS) has increased the maximum filesystem read throughput to 60 GiB/s (a 2x increase). Amazon EFS provides serverless, fully elastic file storage that makes it simple to set up and run file workloads in the AWS cloud. In August 2024, we increased the maximum Elastic Throughput limits to 30 GiB/s read to support the growing throughput demand for AI and machine learning workloads. Now, we are further increasing the read throughput limit to 60 GiB/s, extending EFS's simple, fully elastic, and provisioning-free experience to support throughput-intensive AI and machine learning workloads for model training, inference, financial analytics, and genomic data analysis. The increased throughput limits are immediately available for all EFS file systems using the Elastic Throughput mode. EFS file systems in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Dublin), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore) Regions now support up to 60 GiB/s read throughput. All other AWS Regions now support up to 10 GiB/s read throughput (previously 3 GiB/s). To learn more, see the Amazon EFS Documentation or create a file system using the Amazon EFS Console, API, or AWS CLI.
Amazon Aurora PostgreSQL zero-ETL integration with Amazon Redshift now generally available
Published Date: 2024-10-15 17:00:00
Now generally available, Amazon Aurora PostgreSQL zero-ETL integration with Amazon Redshift enables near real-time analytics and machine learning (ML) using Amazon Redshift to analyze petabytes of transactional data from Aurora. Within seconds of transactional data being written into Amazon Aurora PostgreSQL-Compatible Edition, zero-ETL seamlessly makes the data available in Amazon Redshift, removing the need to build and manage complex data pipelines that perform extract, transform, and load (ETL) operations. Aurora PostgreSQL zero-ETL integration with Amazon Redshift is available for Aurora provisioned clusters and Amazon Aurora Serverless v2 clusters. Zero-ETL integration can be used to send data to Amazon Redshift Serverless workgroups and Amazon Redshift provisioned clusters using RA3 instance types. Zero-ETL integration provides expanded logical replication support for Data Definition Language (DDL) events and data types, including The Oversized-Attribute Storage Technique (TOAST) support. You can replicate data from multiple logical Aurora PostgreSQL databases using a single zero-ETL integration. Data filtering capabilities allow you to specify resources to replicate at the database, schema, or table level. Enhance your data analysis with the rich analytical capabilities of Amazon Redshift, including high-performance SQL, built-in ML and Spark integrations, materialized views, data sharing, data masking, and direct access to multiple data stores and data lakes. Automatically deploy and manage zero-ETL integrations using AWS CloudFormation. Aurora PostgreSQL zero-ETL integration with Amazon Redshift is available for Aurora PostgreSQL version 16.4 and higher. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (CLI), or the Amazon Relational Database Service (Amazon RDS) API in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions. To learn more, visit Aurora zero-ETL integration with Amazon Redshift and the getting started guides for Aurora and Amazon Redshift.
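The database/schema/table-level filtering mentioned above is expressed as a data filter on the integration. A boto3 sketch, with hypothetical ARNs and an illustrative filter pattern, the call itself commented out:

```python
# Illustrative request for an Aurora PostgreSQL -> Redshift zero-ETL integration
# that replicates only one schema of one database.
request = {
    "IntegrationName": "aurora-zero-etl",  # hypothetical integration name
    "SourceArn": "arn:aws:rds:us-east-1:111122223333:cluster:aurora-pg-prod",
    "TargetArn": "arn:aws:redshift-serverless:us-east-1:111122223333:namespace/analytics-ns",
    # Filter patterns take the form database.schema.table; '*' is a wildcard.
    "DataFilter": "include: salesdb.sales.*",
}
# import boto3
# boto3.client("rds").create_integration(**request)
```

Exclude patterns work the same way, so noisy or sensitive tables can be kept out of the warehouse.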
Amazon Q Business launches ability to use connector metadata to improve search relevance
Published Date: 2024-10-15 17:00:00
Amazon Q Business is a fully managed, generative AI–powered assistant that enhances workforce productivity by answering questions, providing summaries, generating content, and completing tasks based on customers’ enterprise data. Q Business supports over 40 connectors that customers can use to automatically sync data from selected data sources, so that they can securely search through the most up-to-date content. When you connect Amazon Q Business to your data, your data source connector crawls relevant metadata or attributes associated with a document. Starting today, you can enable Q Business to use the connector metadata to get more relevant responses for user queries. For example, to answer a user query such as “List the documents authored by John Doe in September 2024”, when metadata search is enabled for the application, Q Business will use two connector metadata fields, authors and created_at, to provide relevant responses. The metadata search feature is available for all supported connectors in all AWS Regions where Amazon Q Business is available. To learn more, visit the documentation. To explore Amazon Q Business, visit the website.
Amazon EC2 G6 instances now available in additional regions
Published Date: 2024-10-15 17:00:00
Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are now available in the Europe (Zurich, Stockholm), Asia Pacific (Mumbai, Sydney), and South America (Sao Paulo) regions. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases. Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage. Amazon EC2 G6 instances are available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, London, Spain, Stockholm, and Zurich), Asia Pacific (Mumbai, Tokyo, Malaysia, and Sydney), South America (Sao Paulo), and Canada (Central) regions. Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans. To get started, visit the AWS Management Console, AWS Command Line Interface (CLI), and AWS SDKs. To learn more, visit the G6 instance page.
Embed Amazon Q Business into your application’s user interface
Published Date: 2024-10-15 17:00:00
Thousands of enterprises use Amazon Q Business today to empower their employees to be more creative, data-driven, and productive. Now, application developers can extend the power of Amazon Q Business to their end users by embedding an AI-powered assistant into their user interface. This new feature of Amazon Q Business offers a no-code setup process, where application developers quickly index their application data, technical documentation, and public website content. Once data is indexed, authenticated end users who are logged into an application can use the assistant to summarize projects, ask UI navigation questions, or get answers to technical support questions. Customer data is isolated across the data ingestion, indexing, and querying workflows to prevent data exposure to unauthorized parties. This enables software vendors to create an assistant that recognizes the end user, their application instance, and designated permissions. This new feature inherits the same security, privacy, and guardrails as Amazon Q Business, saving developers costly resources spent on building an assistant on their own. The new feature is available in all AWS Regions where Amazon Q Business is available. To learn more about Amazon Q Business and how to embed this generative AI-powered assistant into your application, visit the service webpage.
AWS CodeBuild now supports managed network access control lists
Published Date: 2024-10-15 17:00:00
AWS CodeBuild now supports managed network access control lists (NACLs) for reserved capacity fleets. Customers can define rules to control network traffic in and out of their build environment. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages ready for deployment. NACLs are an optional layer of security for your fleet that acts as a firewall for controlling traffic in and out of your build environment. Customers using reserved capacity can configure rules to allow or deny traffic for external sites. Builds running on the fleet will route their network traffic through a CodeBuild managed proxy server. This feature is available in US East (N. Virginia), US East (Ohio), US West (Oregon), South America (Sao Paulo), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Sydney), Asia Pacific (Mumbai), Europe (Ireland), and Europe (Frankfurt) where reserved capacity fleets are supported. To learn more about CodeBuild’s support for managed NACLs, please visit our documentation. To learn more about how to get started, visit the AWS CodeBuild product page.
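The allow/deny rules are configured on the reserved capacity fleet itself. A sketch of what a deny-by-default egress configuration might look like; the field names here are an assumption modeled on CodeBuild's fleet proxy configuration, and the domains and fleet ARN are hypothetical:

```python
# Illustrative deny-by-default egress rules for a reserved capacity fleet.
proxy_configuration = {
    "defaultBehavior": "DENY_ALL",  # block traffic unless a rule explicitly allows it
    "orderedProxyRules": [
        {"type": "DOMAIN", "effect": "ALLOW", "entities": ["github.com"]},
        {"type": "DOMAIN", "effect": "ALLOW", "entities": ["registry.npmjs.org"]},
    ],
}
# import boto3
# boto3.client("codebuild").update_fleet(
#     arn="arn:aws:codebuild:us-east-1:111122223333:fleet/my-fleet",  # hypothetical
#     proxyConfiguration=proxy_configuration,
# )
```

With a deny-by-default posture, builds can fetch dependencies only from the registries you allowlist, which limits exfiltration paths from the build environment.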
Amazon SES enhances configurability with maximum delivery time for emails
Published Date: 2024-10-15 17:00:00
Amazon Simple Email Service (SES) now offers a new delivery option that allows you to set a custom maximum delivery time for your emails. With this feature, you can define how long SES should attempt to deliver emails that encounter temporary issues such as soft bounces, with options ranging from 5 minutes to 14 hours. The maximum delivery time setting enables you to effectively control your email delivery strategy, ensuring that emails are sent within a timeframe that suits your business needs. This is particularly relevant for time-sensitive emails such as one-time passwords, or when sending email to regions where overnight deliveries are likely to result in complaints. By setting a shorter retry delivery window, you can protect customer satisfaction by ensuring that recipients only receive emails that are timely and relevant. You can now set the maximum delivery time in all AWS Regions where Amazon SES is offered. To learn more, see the Amazon SES documentation for creating configuration sets .
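The maximum delivery time is set per configuration set. A boto3 sketch for SESv2, where the configuration-set name is hypothetical and the parameter name is an assumption based on the feature description; the call itself is commented out:

```python
# Illustrative delivery options: retry soft-bounced mail for at most 5 minutes,
# which suits time-sensitive messages like one-time passwords.
MIN_SECONDS, MAX_SECONDS = 5 * 60, 14 * 3600  # allowed range: 5 minutes to 14 hours
request = {
    "ConfigurationSetName": "otp-emails",  # hypothetical configuration set
    "MaxDeliverySeconds": 300,             # assumed parameter name for the new setting
}
assert MIN_SECONDS <= request["MaxDeliverySeconds"] <= MAX_SECONDS
# import boto3
# boto3.client("sesv2").put_configuration_set_delivery_options(**request)
```

Messages sent through this configuration set that cannot be delivered within the window are dropped rather than delivered stale.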
Amazon EC2 Dedicated Hosts now supports live migration-based host maintenance
Published Date: 2024-10-15 17:00:00
Amazon EC2 Dedicated Hosts now supports live migration for host maintenance to improve application uptime and reduce your operational effort. In the event a host requires maintenance, AWS will allocate a replacement Dedicated Host and move your instances to the new host. You are not required to take any action prior to, during, or after live migration. Amazon EC2 Dedicated Hosts are physical servers fully dedicated for your use. Bring-your-own-license (BYOL) customers can use Dedicated Hosts to reduce costs for commercial software workloads like Microsoft SQL Server. AWS regularly monitors the health of your hosts. In the rare event of a degradation or for planned host maintenance, AWS will move your instances to the replacement host without requiring your instances to be stopped or rebooted. This feature is available in all commercial AWS Regions. To learn more, see Host maintenance in the Amazon EC2 User Guide.
Amazon OpenSearch Serverless now available in Asia Pacific (Seoul) and Europe (Zurich) regions
Published Date: 2024-10-15 17:00:00
We are excited to announce that Amazon OpenSearch Serverless is expanding availability to the Asia Pacific (Seoul) and Europe (Zurich) regions. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless’ compute capacity used for data ingestion, search, and query is measured in OpenSearch Compute Units (OCUs). You can configure the maximum number of OCUs per account to control costs. OpenSearch Serverless is now available in 16 regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), Europe (Zurich), South America (Sao Paulo), and AWS GovCloud (US-West). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.
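Capping OCUs is done at the account level. A boto3 sketch, with the limit values purely illustrative and the call commented out:

```python
# Illustrative account-level OCU caps to bound OpenSearch Serverless spend.
capacity_limits = {
    "maxIndexingCapacityInOCU": 8,  # cap on ingest/indexing compute
    "maxSearchCapacityInOCU": 8,    # cap on search/query compute
}
# import boto3
# boto3.client("opensearchserverless").update_account_settings(
#     capacityLimits=capacity_limits
# )
```

Because OCUs are the billing unit, these caps act as a hard ceiling on the serverless compute the account can consume.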
AWS access portal now offers streamlined sign in for AWS Console Mobile App
Published Date: 2024-10-15 17:00:00
AWS IAM Identity Center now provides customers with streamlined first-time access to the AWS Console Mobile Application, reducing the required user actions by more than half. In the past, AWS IAM Identity Center customers who wanted to access the AWS Console Mobile Application were required to find and manually enter their AWS access portal sign-in URL. With this release, users can scan a QR code on their AWS access portal page using their mobile device. An application link takes them directly to the AWS Console Mobile Application and pre-populates the sign-in URL for their AWS access portal. The Console Mobile Application lets users view and manage a select set of resources to stay informed and connected with their AWS resources while on-the-go. The sign-in process supports device password managers and biometric authentication, making access to AWS resources simple, secure, and quick. IAM Identity Center is the recommended service for managing workforce access to AWS applications and multiple AWS accounts. Both the Console Mobile Application and IAM Identity Center are available to you at no additional cost. Visit the product page for more information about the Console Mobile Application. To learn more about the AWS access portal and its capabilities, see the IAM Identity Center user guide.
Assign billing of your shared Amazon EC2 On-Demand Capacity Reservations
Published Date: 2024-10-14 18:55:00
Starting today, you can assign the billing of unused Amazon EC2 On-Demand Capacity Reservations (ODCR) to any one of your organization accounts with which the reservation is shared. Capacity Reservations help you reserve compute capacity for any duration and share it across multiple accounts, enabling you to centrally pool and manage your reserved capacity. When a Capacity Reservation is shared, each account is billed for their respective usage of the reservation, while any unused capacity is by default billed to the account that owns the reservation. Now, depending on your business needs, you have the flexibility to configure which account gets billed for the unused capacity. To get started, you can select any shared reservation and initiate a request to assign its billing to a specific AWS account. Once the request is accepted by the new account, charges for any unused capacity (i.e. the available capacity in your reservation) from that point onward will be billed to the assigned account. As is the case today, billing for any instances running inside the Capacity Reservation is assigned to the respective accounts that launched the instances. This feature is now available to all Capacity Reservations customers in all commercial AWS Regions and AWS China regions at no additional cost. You can access this feature via AWS Management Console, AWS SDKs, or AWS Command Line Interface (CLI). Click here to learn more about this feature .
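As a sketch, initiating the billing assignment might look like the following with boto3. The IDs are hypothetical and the API name is an assumption based on the feature description; the call is commented out:

```python
# Illustrative request to bill a shared Capacity Reservation's unused capacity
# to another account in the organization (which must accept the request).
request = {
    "CapacityReservationId": "cr-0123456789abcdef0",    # hypothetical shared ODCR
    "UnusedReservationBillingOwnerId": "222233334444",  # account that accepts the charges
}
# import boto3
# boto3.client("ec2").associate_capacity_reservation_billing_owner(**request)
```

Instances launched into the reservation are still billed to the accounts that launched them; only the unused-capacity charges move to the accepting account.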
Amazon Verified Permissions is now HIPAA eligible
Published Date: 2024-10-14 18:00:00
Amazon Verified Permissions is now a Health Insurance Portability and Accountability Act (HIPAA) eligible service, enabling healthcare and life sciences organizations subject to HIPAA to use the service for permissions management. Amazon Verified Permissions is a permissions management and fine-grained authorization service for the applications that you build. Amazon Verified Permissions uses the Cedar policy language to enable developers and admins to define policy-based access controls using roles and attributes. For example, a patient management application might call Amazon Verified Permissions (AVP) to determine if Alice is permitted access to Bob’s patient records, given that she is in the doctors group and is Bob’s doctor. If you have a HIPAA Business Associate Addendum (BAA) in place with AWS, you can now use Amazon Verified Permissions for workloads that are subject to HIPAA compliance. If you are building applications on API Gateway, you can get started with Amazon Verified Permissions with just a few clicks. Connect to your identity provider and configure permissions that protect APIs based on user groups and attributes. If you don't have a BAA in place with AWS, or if you have any other questions about running HIPAA-regulated workloads on AWS, please contact us. You can find AWS HIPAA eligible services on the HIPAA Eligible Services Reference page. For more information on the service, visit Fine-Grained Authorization - Amazon Verified Permissions - AWS.
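To make the Alice/Bob example concrete, here is a hedged Python sketch that assembles an `is_authorized`-style request for the Verified Permissions API. The policy store ID and the `Clinic::*` entity and action types are hypothetical placeholders for illustration, not values from the announcement:

```python
# Sketch: building an authorization request for Amazon Verified Permissions.
# The policy store ID, entity types, and action name are hypothetical.

def build_is_authorized_request(policy_store_id, doctor_id, patient_record_id):
    """Assemble a request asking whether a doctor may view a patient's
    record, mirroring the Alice/Bob example in the announcement."""
    return {
        "policyStoreId": policy_store_id,
        "principal": {"entityType": "Clinic::User", "entityId": doctor_id},
        "action": {"actionType": "Clinic::Action", "actionId": "ViewRecord"},
        "resource": {"entityType": "Clinic::Record", "entityId": patient_record_id},
    }

request = build_is_authorized_request("ps-EXAMPLE", "alice", "bob-record-1")
# With boto3 this request could then be evaluated against Cedar policies:
# client = boto3.client("verifiedpermissions")
# decision = client.is_authorized(**request)["decision"]
```

Cedar policies in the policy store would then allow the request only if the principal is in the doctors group and is the record owner's doctor.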
AWS Transfer Family SFTP connectors now provide real-time status of file transfer operations
Published Date: 2024-10-14 17:00:00
AWS Transfer Family now provides real-time status of file transfers initiated using SFTP connectors. With this capability, you can easily monitor the current state of your file transfer operations and orchestrate post-transfer actions to automate your Managed File Transfer (MFT) workflows in AWS. SFTP connectors provide a fully managed, low-code capability to transfer files between remote SFTP servers and Amazon S3. Now you can query the status of your file transfer operations on demand, such as which file transfers are completed, in progress, queued, or failed. You can use this capability to orchestrate post-transfer actions based on file status, such as sending status notifications, triggering downstream processing of the files that have transferred successfully, or initiating retries for any failures. For example, when using AWS Step Functions to orchestrate file transfer workflows, you can now recursively poll the status of a requested file transfer operation using SFTP connectors and automatically initiate post-processing steps once a file transfer completes. Support for querying file transfer status for SFTP connectors is available in all AWS Regions where Transfer Family is available. For pricing information, visit the Transfer Family pricing page. To get started with querying the status of your file transfers with SFTP connectors, use the ListFileTransferResults API command and visit the Transfer Family User Guide.
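The post-transfer orchestration described above could be sketched like this in Python: bucket the results returned by a `ListFileTransferResults` call by status, then retry failures and post-process completions. The `FilePath`/`StatusCode` field names are assumptions based on my reading of the API's output shape:

```python
# Sketch: grouping SFTP connector transfer results by status so a workflow
# can retry failures and post-process completed files. Field names are
# assumptions about the ListFileTransferResults response shape.
from collections import defaultdict

def group_by_status(file_transfer_results):
    """Bucket each file path under its status code
    (e.g. QUEUED, IN_PROGRESS, COMPLETED, FAILED)."""
    buckets = defaultdict(list)
    for result in file_transfer_results:
        buckets[result["StatusCode"]].append(result["FilePath"])
    return dict(buckets)

# A sample response fragment, for illustration only:
sample = [
    {"FilePath": "/inbox/a.csv", "StatusCode": "COMPLETED"},
    {"FilePath": "/inbox/b.csv", "StatusCode": "FAILED"},
    {"FilePath": "/inbox/c.csv", "StatusCode": "COMPLETED"},
]
grouped = group_by_status(sample)
# grouped["FAILED"] would feed a retry step; grouped["COMPLETED"]
# would feed downstream processing, e.g. in a Step Functions branch.
```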
Announcing Amazon Q in AWS Supply Chain
Published Date: 2024-10-14 17:00:00
Announcing Amazon Q in AWS Supply Chain, an interactive generative artificial intelligence (GenAI) assistant that helps you operate your supply chain more efficiently by analyzing the data in your AWS Supply Chain Data Lake, providing important operational and financial insights, and answering urgent supply chain questions. It reduces the time users spend searching for relevant information, simplifies the process of finding answers, and minimizes the time spent learning, deploying, configuring, or troubleshooting AWS Supply Chain. Amazon Q in AWS Supply Chain will enable supply chain users to deep dive into their supply chain by asking questions and getting answers based on their data, without having to wait on business intelligence engineers (BIEs). For example, a user could ask, “What is my demand forecast over the next 2 months for apples in Austin?” and Amazon Q will analyze the underlying data to provide the forecast numbers along with an explanation of the analysis. Amazon Q ensures that users access content securely with their existing AWS account credentials, according to their permissions and enterprise-level access controls. This feature is available to customers in the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Europe (Frankfurt), and Europe (Ireland) Regions. Please visit AWS Supply Chain to learn more and get started.
Amazon Redshift now supports refresh interval in a zero-ETL integration
Published Date: 2024-10-14 17:00:00
Amazon Redshift now supports the 'refresh interval' feature for zero-ETL integration, allowing you to control the frequency of data replication into Amazon Redshift. When you specify a non-zero refresh interval for your integration, the ongoing replication process will only start after the specified interval has elapsed. Amazon Redshift's zero-ETL integrations enable you to break down data silos in your organization and run near real-time analytics and machine learning (ML) on the data from your operational databases. Now, with the launch of refresh interval, you have the flexibility to control the frequency of data replication to Amazon Redshift. Existing integrations will continue to have a zero refresh interval, but you can modify them depending on your data latency requirements. To learn more and get started with zero-ETL integration, visit the getting started guides for Amazon Redshift. To learn more about how to use refresh interval, see the documentation.
Amazon EKS now supports using NVIDIA and AWS Neuron accelerated instance types with AL2023
Published Date: 2024-10-14 17:00:00
Today, AWS announces the general availability of Amazon Elastic Kubernetes Service (EKS) optimized accelerated AMIs for Amazon Linux 2023 (AL2023). EKS customers can now enjoy the improved security features, optimized boot times, and newer kernel versions of AL2023 for their workloads using NVIDIA GPU, AWS Inferentia, and AWS Trainium instances. These new AMIs are based on the Amazon EKS optimized AMI for AL2023 and include support for NVIDIA and AWS Neuron workloads. The EKS optimized NVIDIA AMI includes NVIDIA drivers, NVIDIA Fabric Manager, and the NVIDIA container toolkit, and the EKS optimized Neuron AMI includes the Neuron driver. Both AMIs also come with the software needed to use AWS Elastic Fabric Adapter (EFA) network interfaces. Customers can choose either the NVIDIA or Neuron AMI with EKS Managed Node Groups, self-managed nodes, and Karpenter. EKS optimized accelerated AMIs for Amazon Linux 2023 are generally available in all AWS Regions including the AWS GovCloud (US) Regions, across all standard supported EKS versions as well as extended support versions 1.23 and higher. To learn more about using Amazon Linux 2023 for your accelerated workloads on Amazon EKS, see Amazon EKS optimized accelerated Amazon Linux AMIs and building your own custom Amazon Linux AMIs in the Amazon EKS documentation.
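As a sketch of how you might look up the recommended accelerated AMI ID, assuming the SSM parameter path follows the documented pattern for EKS optimized AMIs (the Kubernetes version and Region below are placeholders):

```shell
# Sketch: fetch the recommended EKS-optimized NVIDIA AL2023 AMI ID via SSM.
# Kubernetes version and Region are placeholders; adjust for your cluster.
aws ssm get-parameter \
    --name /aws/service/eks/optimized-ami/1.30/amazon-linux-2023/x86_64/nvidia/recommended/image_id \
    --region us-west-2 \
    --query "Parameter.Value" --output text

# The Neuron variant would use "neuron" in place of "nvidia" in the path.
```

The returned AMI ID can then be used with self-managed node groups or a launch template for EKS Managed Node Groups.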
Amazon EC2 D3en instances are now available in Asia Pacific (Sydney) region
Published Date: 2024-10-14 17:00:00
Starting today, Amazon EC2 D3en instances, the latest generation of the dense HDD-storage instances, are available in the Asia Pacific (Sydney) region. D3en instances are ideal for workloads including distributed/clustered file systems, big data and analytics, and high capacity data lakes. With D3en instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads. Amazon EC2 D3en instances are built on the AWS Nitro System, a collection of AWS-designed hardware and software innovations that enable the delivery of private networking, and efficient, flexible, and secure cloud services with isolated multi-tenancy. D3en instances offer up to 336 TB of local HDD storage. These instances also offer up to 75 Gbps of network bandwidth, and up to 7 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). To get started with D3en instances, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3en instances page.
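A minimal CLI sketch for launching a D3en instance in the Sydney Region; the AMI ID, key pair, and subnet are placeholders, not real values:

```shell
# Sketch: launch a D3en instance in Asia Pacific (Sydney).
# AMI ID, key pair, and subnet are placeholders.
aws ec2 run-instances \
    --region ap-southeast-2 \
    --instance-type d3en.xlarge \
    --image-id ami-0123456789abcdef0 \
    --key-name my-key-pair \
    --subnet-id subnet-0123456789abcdef0
```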