Week 25 (17 Jun - 23 Jun)

Amazon S3 Replication Time Control is now available in the AWS GovCloud (US) Regions

Published Date: 2024-06-21 20:15:00

Amazon S3 Replication Time Control (S3 RTC), a feature of S3 Replication that provides a predictable replication time backed by a Service Level Agreement (SLA), is now available in the AWS GovCloud (US) Regions. Customers use S3 Replication to replicate billions of objects across buckets to the same or different AWS Regions and to one or more destination buckets. S3 RTC is designed to replicate 99.99% of objects within 15 minutes after upload, with the majority of those new objects replicated in seconds. S3 RTC is backed by an SLA with a commitment to replicate 99.9% of objects within 15 minutes during any billing month. With S3 RTC enabled, customers can also view S3 Replication metrics (via CloudWatch) by default to monitor the time taken to complete replication, the total number and size of objects that are pending replication, as well as the number of objects that failed to replicate per minute due to misconfiguration or permission errors.
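As a rough sketch, the boto3 call below turns on S3 RTC for a replication rule; the bucket names and IAM role are placeholders, and the GovCloud (US-West) Region and ARN partition are only examples.

```python
import boto3

s3 = boto3.client("s3", region_name="us-gov-west-1")

# Hypothetical bucket names and IAM role; versioning must already be
# enabled on both buckets for S3 Replication to work.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws-us-gov:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "rtc-rule",
                "Priority": 1,
                "Status": "Enabled",
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws-us-gov:s3:::example-destination-bucket",
                    # S3 RTC: 15-minute replication target backed by the SLA.
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    # Replication metrics are published to CloudWatch.
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)
```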

AWS Billing and Cost Management now provides Data Exports for Cost Optimization Hub

Published Date: 2024-06-21 17:00:00

Data Exports for Cost Optimization Hub now enables customers to export their cost optimization recommendations to Amazon S3. Cost Optimization Hub recommendations are consolidated from over 15 types of AWS cost optimization recommendations, such as EC2 instance rightsizing, Graviton migration, and Savings Plans purchases, across their AWS accounts and AWS Regions. Exports are delivered on a daily basis to Amazon S3 in Parquet or CSV format. With Data Exports for Cost Optimization Hub, customers receive their recommendations in easy-to-ingest data files, which simplifies creating reports or dashboards. Customers can apply the same filters and preferences to their exports that they use in Cost Optimization Hub to deduplicate savings. Customers can also control the data included in their export using basic SQL column selections and row filters. Data Exports for Cost Optimization Hub makes it easy for customers to bring their recommendation data into analytics and BI tools for tracking, prioritizing, and sharing savings opportunities with key stakeholders. Data Exports for Cost Optimization Hub is available in the US East (N. Virginia) Region, but includes recommendations for all AWS Regions, except the AWS GovCloud (US) Regions and the AWS China (Beijing and Ningxia) Regions.

Learn more about Data Exports for Cost Optimization Hub exports in the Data Exports User Guide and on the Data Exports product details page. You can also learn more about Cost Optimization Hub in the Cost Optimization Hub User Guide. Get started by visiting the “Data Exports” or “Cost Optimization Hub” features in the AWS Billing and Cost Management console and creating an export of the “Cost Optimization Recommendations” table.
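For teams that prefer to automate the export instead of using the console, a hedged sketch with the boto3 bcm-data-exports client is shown below; the export name, bucket, and selected columns are placeholders, and the exact column names should be verified against the Data Exports table dictionary.

```python
import boto3

# Sketch only: names, bucket, and columns are placeholders.
bcm = boto3.client("bcm-data-exports", region_name="us-east-1")

bcm.create_export(
    Export={
        "Name": "cost-optimization-recommendations-export",
        "DataQuery": {
            # Illustrative column list; see the table dictionary for the
            # full COST_OPTIMIZATION_RECOMMENDATIONS schema.
            "QueryStatement": (
                "SELECT account_id, action_type, resource_arn, "
                "estimated_monthly_savings_after_discount "
                "FROM COST_OPTIMIZATION_RECOMMENDATIONS"
            ),
        },
        "DestinationConfigurations": {
            "S3Destination": {
                "S3Bucket": "example-exports-bucket",
                "S3Prefix": "cost-optimization-hub",
                "S3Region": "us-east-1",
                "S3OutputConfigurations": {
                    "OutputType": "CUSTOM",
                    "Format": "PARQUET",
                    "Compression": "PARQUET",
                    "Overwrite": "OVERWRITE_REPORT",
                },
            }
        },
        # Recommendations are refreshed and delivered on the service's daily cadence.
        "RefreshCadence": {"Frequency": "SYNCHRONOUS"},
    }
)
```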

Amazon EC2 macOS AMIs are now available on AWS Systems Manager Parameter Store

Published Date: 2024-06-21 17:00:00

Starting today, customers can reference the latest macOS AMIs via public parameters on the AWS Systems Manager Parameter Store. With this functionality, customers can query the public parameters to retrieve the latest macOS image IDs, ensure that new EC2 Mac instances are launched with the latest macOS versions, and display a complete list of all available public parameter macOS AMIs. Public parameters are available for both x86 and ARM64 macOS AMIs and can be integrated with customers’ existing AWS CloudFormation templates. This capability is supported in all AWS Regions where EC2 Mac instances are available. To learn more about this feature, please visit the documentation here. To learn more about EC2 Mac instances, click here.
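A minimal sketch of querying the new public parameters with boto3 follows; the parameter path is an assumption based on the usual public-parameter naming convention, so confirm the exact names in the EC2 Mac documentation.

```python
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Assumed path for the macOS public parameters; verify against the docs.
paginator = ssm.get_paginator("get_parameters_by_path")
for page in paginator.paginate(Path="/aws/service/ec2-macos", Recursive=True):
    for parameter in page["Parameters"]:
        # Each parameter maps a macOS version/architecture to an AMI ID.
        print(parameter["Name"], parameter["Value"])
```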

Amazon SageMaker JumpStart now provides granular access control for foundation models

Published Date: 2024-06-21 17:00:00

Starting today, enterprise admins using Amazon SageMaker JumpStart can easily configure granular access control for foundation models (FMs) that are discoverable and accessible to users within their organization. Amazon SageMaker JumpStart is a machine learning (ML) hub that offers pretrained models and built-in algorithms to help you quickly get started with ML. Amazon SageMaker JumpStart provides access to hundreds of FMs; however, many enterprise admins want more control over the FMs that can be discovered and used by users within their organization (e.g., only allowing models with an Apache 2.0 license to be discovered). With this new feature, enterprise admins can now create private hubs in SageMaker JumpStart through the SageMaker SDK and add specific FMs to private hubs that are accessible to users within their organization. Enterprise admins can also set up multiple private hubs tailored for different roles or accounts, each with a different set of models. Once set up, users can view and use only the hubs and models they are allowed to access through SageMaker Studio and the SageMaker SDK. Granular access control for FMs in SageMaker JumpStart is initially available in the US East (Ohio) Region starting today. To learn more, see the blog and product page.

AWS Lambda now supports IPv6 for outbound connections in VPC in the AWS GovCloud (US) Regions

Published Date: 2024-06-21 17:00:00

AWS Lambda now allows Lambda functions to access resources in dual-stack VPC (outbound connections) over IPv6 in the AWS GovCloud (US) Regions. With this launch, Lambda enables you to scale your application without being constrained by the limited number of IPv4 addresses in your VPC, and to reduce costs by minimizing the need for translation mechanisms. Previously, Lambda functions configured with an IPv4-only or dual-stack VPC could access VPC resources only over IPv4. To work around the constrained number of IPv4 addresses in VPC, customers modernizing their applications were required to build complex architectures or use network translation mechanisms. With today’s launch, Lambda functions can access resources in dual-stack VPC over IPv6 and get virtually unlimited scale, using a simple function-level switch. You can also enable VPC-configured Lambda functions to access the internet using an egress-only internet gateway. Lambda’s IPv6 support for outbound connections in VPC is generally available in the AWS GovCloud (US-West, US-East) Regions. You can enable outbound access for new or existing Lambda functions to dual-stack VPC resources over IPv6 using the AWS Lambda API, AWS Management Console, AWS Command Line Interface (AWS CLI), AWS CloudFormation, AWS Serverless Application Model (AWS SAM), and AWS SDK. For more information on how to enable IPv6 access for Lambda functions in dual-stack VPC, see the Lambda documentation. To learn more about Lambda, see the Lambda developer guide.
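A minimal sketch of flipping the function-level switch with boto3 is shown below; the function name, subnets, and security groups are placeholders.

```python
import boto3

lambda_client = boto3.client("lambda", region_name="us-gov-west-1")

# Placeholder identifiers; the subnets must belong to a dual-stack VPC.
lambda_client.update_function_configuration(
    FunctionName="example-function",
    VpcConfig={
        "SubnetIds": ["subnet-0123456789abcdef0"],
        "SecurityGroupIds": ["sg-0123456789abcdef0"],
        # Function-level switch that allows outbound IPv6 traffic to
        # dual-stack VPC resources.
        "Ipv6AllowedForDualStack": True,
    },
)
```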

AWS Billing and Cost Management now provides Data Exports for FOCUS 1.0 (Preview)

Published Date: 2024-06-21 17:00:00

Data Exports for FOCUS 1.0 now enables customers to export their cost and usage data with the FOCUS 1.0 schema to Amazon S3. This feature is in preview. FOCUS is a new open-source cloud billing data specification that provides standardization to simplify cloud financial management across multiple sources. Data Exports for FOCUS 1.0 includes several AWS-specific columns, such as usage types and cost categories, and delivers exports on a daily basis to Amazon S3 as Parquet or CSV files. With Data Exports for FOCUS 1.0, customers receive their costs in four standardized columns: ListCost, ContractedCost, BilledCost, and EffectiveCost. It provides a consistent treatment of discounts and amortization of Savings Plans and Reserved Instances. The standardized schema of FOCUS ensures each type of billing data appears in a consistent column with a common set of values, so data can be reliably referenced across sources. Data Exports for FOCUS 1.0 is available in preview in the US East (N. Virginia) Region, but includes cost and usage data covering all AWS Regions, except the AWS GovCloud (US) Regions and the AWS China (Beijing and Ningxia) Regions. Learn more about AWS Data Exports for FOCUS 1.0 in the User Guide, on the product details page, and at the FOCUS open-source project webpage. Get started by visiting the Data Exports page in the AWS Billing and Cost Management console and creating an export of the “FOCUS 1.0 with AWS columns - preview” table.

Amazon Redshift Query Editor V2 is now available in AWS Canada (Calgary) region

Published Date: 2024-06-21 17:00:00

You can now use the Amazon Redshift Query Editor V2 with Amazon Redshift in the AWS Canada (Calgary) region. Amazon Redshift Query Editor V2 makes data in your Amazon Redshift data warehouse and data lake more accessible with a web-based tool for SQL users such as data analysts, data scientists, and database developers. With Query Editor V2, users can explore, analyze, and collaborate on data. It reduces the operational costs of managing query tools by providing a web-based application that allows you to focus on exploring your data without managing your infrastructure.

Default Role in CodeCatalyst Environments

Published Date: 2024-06-21 17:00:00

Today Amazon CodeCatalyst announces support for adding a default IAM role to an environment. Previously, when a workflow was configured, a user was required to specify the environment, AWS account connection, and IAM role for each individual action in order for that action to interact with AWS resources. With the default IAM role, a user only needs to set the environment for an action, and the AWS account connection and role are automatically applied to the action. This feature is available in all AWS Regions where CodeCatalyst is generally available. To get started, add a default role to your environments today. To learn more, see the environments section in the CodeCatalyst documentation.

Amazon SageMaker HyperPod now supports configurable cluster storage

Published Date: 2024-06-20 17:00:00

Today, AWS announces the general availability of configurable cluster storage for SageMaker HyperPod cluster instances, which enables customers to provision additional storage for model development. This launch allows you to centrally automate the provisioning and management of additional Elastic Block Store (EBS) volumes for your cluster instances. With configurable cluster storage, you can easily integrate additional storage capacity across all your cluster instances, empowering you to customize your persistent cluster environment to meet the unique demands of your distributed training workloads. Cluster storage on SageMaker HyperPod enables customers to dynamically allocate and manage storage resources within the cluster. Organizations can now scale their storage capacity on-demand, ensuring they have sufficient space for Docker images, logs, and custom software installations. This feature is particularly beneficial for foundation model developers working with extensive logging requirements and resource-intensive machine learning models, allowing them to effectively manage and store critical assets within a secure and scalable environment.
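As a sketch, assuming the new option is expressed through an InstanceStorageConfigs block on CreateCluster, the boto3 call below attaches an extra EBS volume to every instance in a HyperPod instance group; all names, ARNs, and sizes are placeholders, and the exact field shape should be verified against the CreateCluster API reference.

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_cluster(
    ClusterName="example-hyperpod-cluster",
    InstanceGroups=[
        {
            "InstanceGroupName": "worker-group",
            "InstanceType": "ml.g5.8xlarge",
            "InstanceCount": 4,
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://example-bucket/lifecycle-scripts/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/example-hyperpod-role",
            # Attach an additional 500 GiB EBS volume to every instance in
            # the group for Docker images, logs, and custom software.
            "InstanceStorageConfigs": [
                {"EbsVolumeConfig": {"VolumeSizeInGB": 500}}
            ],
        }
    ],
)
```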

Amazon Chime SDK meetings is now available in the Africa (Cape Town) Region

Published Date: 2024-06-20 17:00:00

Amazon Chime SDK now offers WebRTC meetings with API endpoints in the Africa (Cape Town) Region. With this release, Amazon Chime SDK developers can add one-to-one and group meetings with real-time audio and video to web and mobile applications from the Africa (Cape Town) Region. This release also includes the ability to connect clients to audio and video media hosted in the Africa (Cape Town) Region. When creating meetings applications with Amazon Chime SDK, developers call API endpoints to create, update, and delete one-to-one and group meetings. The Region selected for the API endpoint can impact the latency for API calls and helps control the location of meeting data, since the Region is also where meeting events are received and processed. Developers using the Africa (Cape Town) Region API endpoints to create and manage Amazon Chime SDK meetings must also use the same AWS Region for media, because Africa (Cape Town) is an opt-in Region. Developers using any of the other available control Regions to create and manage their meetings can also opt in so they can host the media for the meeting in the Africa (Cape Town) Region. Developers opt in to the Africa (Cape Town) Region through their AWS account.
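A minimal sketch of creating a meeting whose control plane and media are both in Cape Town might look like this with boto3; the external meeting ID is a placeholder.

```python
import uuid
import boto3

# Because Africa (Cape Town) is an opt-in Region, the control-plane endpoint
# and the media Region are both af-south-1 in this sketch.
chime = boto3.client("chime-sdk-meetings", region_name="af-south-1")

meeting = chime.create_meeting(
    ClientRequestToken=str(uuid.uuid4()),
    ExternalMeetingId="example-meeting-001",
    MediaRegion="af-south-1",  # host the audio/video media in Cape Town
)
print(meeting["Meeting"]["MeetingId"])
```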

Record individual participants with Amazon IVS Real-Time Streaming

Published Date: 2024-06-20 17:00:00

Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming enables you to build real-time interactive video experiences. With individual participant recording, you can now record each live stream participant’s video or audio to Amazon Simple Storage Service (Amazon S3). When recording is enabled, each participant is automatically recorded and saved as a separate file in the Amazon S3 bucket you select. This new individual recording option is in addition to the existing composite recording feature, which combines all participants into one media file. There is no additional cost for enabling individual participant recording, but standard Amazon S3 storage and request costs apply. Amazon IVS is a managed live streaming solution that is designed to be quick and easy to set up, and ideal for creating interactive video experiences. Video ingest and delivery are available around the world over a managed network of infrastructure optimized for live video. Visit the AWS Region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available. To get started, see the Amazon IVS Real-Time Streaming documentation.
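A hedged sketch of enabling individual participant recording with boto3 is shown below; the field names (notably autoParticipantRecordingConfiguration) reflect our reading of the announcement and should be checked against the IVS Real-Time Streaming API reference, and the bucket and stage names are placeholders.

```python
import boto3

ivs = boto3.client("ivs-realtime")

# Storage configuration pointing at an S3 bucket in the same Region.
storage = ivs.create_storage_configuration(
    name="example-recordings",
    s3={"bucketName": "example-ivs-recordings-bucket"},
)

# Stage that records each participant automatically as a separate file.
stage = ivs.create_stage(
    name="example-stage",
    autoParticipantRecordingConfiguration={
        "storageConfigurationArn": storage["storageConfiguration"]["arn"],
        "mediaTypes": ["AUDIO_VIDEO"],
    },
)
print(stage["stage"]["arn"])
```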

Amazon RDS for SQL Server supports up to 64TiB and 256,000 IOPS with io2 Block Express volumes

Published Date: 2024-06-20 17:00:00

Amazon RDS for SQL Server now offers enhanced storage and performance capabilities, supporting up to 64 TiB of storage and 256,000 I/O operations per second (IOPS) with io2 Block Express volumes. This represents an improvement over the previous limit of 16 TiB and 64,000 IOPS with io2 Block Express. These enhancements enable transactional databases and data warehouses to handle larger workloads on a single Amazon RDS for SQL Server database instance, eliminating the need to shard data across multiple instances. Support for 64 TiB and 256,000 IOPS with io2 Block Express for Amazon RDS for SQL Server is now generally available in all AWS Regions where Amazon RDS io2 Block Express volumes are currently supported. To learn more, please visit the Amazon RDS User Guide.
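For example, an existing instance could be moved to the new ceilings with a boto3 call along these lines; the instance identifier is a placeholder, and 64 TiB corresponds to 65,536 GiB of allocated storage.

```python
import boto3

rds = boto3.client("rds")

# Placeholder instance identifier; pairs the 64 TiB storage maximum with
# the new 256,000 IOPS ceiling on io2 Block Express.
rds.modify_db_instance(
    DBInstanceIdentifier="example-sqlserver-instance",
    StorageType="io2",
    AllocatedStorage=65536,
    Iops=256000,
    ApplyImmediately=True,
)
```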

Anthropic's Claude 3.5 Sonnet model now available in Amazon Bedrock

Published Date: 2024-06-20 17:00:00

Anthropic’s Claude 3.5 Sonnet foundation model is now generally available in Amazon Bedrock. Anthropic’s most intelligent model to date, Claude 3.5 Sonnet sets a new industry standard for intelligence. The model outperforms other generative AI models in the industry, as well as Anthropic’s previously most intelligent model, Claude 3 Opus, on a wide range of evaluations, all while being one-fifth of the cost of Opus. You can now get intelligence better than Claude 3 Opus at the same cost as Anthropic’s original Claude 3 Sonnet model. The frontier intelligence displayed by Claude 3.5 Sonnet, combined with cost-effective pricing, makes this model ideal for complex tasks such as context-sensitive customer support, orchestrating multi-step workflows, streamlining code translations, and creating user-facing applications. Claude 3.5 Sonnet exhibits marked improvements in near-human levels of comprehension and fluency. The model represents a significant leap in understanding nuance, humor, and complex instructions. It is exceptional at writing high-quality content that feels more authentic with a natural and relatable tone. Claude 3.5 Sonnet is also Anthropic’s strongest vision model, providing best-in-class vision capabilities. It can accurately interpret charts and graphs and transcribe text from imperfect images, a core capability for retail, logistics, and financial services, where AI may glean more insights from an image, graphic, or illustration than from text alone. Additionally, when instructed and provided with the relevant tools, Claude 3.5 Sonnet can independently write and edit code with sophisticated reasoning and troubleshooting capabilities.
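A minimal sketch of invoking the model through the Bedrock Converse API with boto3 follows; adjust the Region to one where the model is offered.

```python
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Model ID as published for the initial Claude 3.5 Sonnet release.
response = bedrock.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[
        {"role": "user", "content": [{"text": "Summarize S3 Replication Time Control in two sentences."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.5},
)
print(response["output"]["message"]["content"][0]["text"])
```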

Amazon RDS for Oracle now supports Oracle Multitenant in the AWS GovCloud (US) Regions

Published Date: 2024-06-20 17:00:00

Amazon Relational Database Service (Amazon RDS) for Oracle now supports the Oracle Multitenant configuration on Oracle Database versions 19c and 21c running Oracle Enterprise Edition or Standard Edition 2 in the AWS GovCloud (US) Regions. With this release, the Amazon RDS for Oracle DB instance can operate as a multitenant container database (CDB) hosting one or more pluggable databases (PDBs). A PDB is a set of schemas, schema objects, and non-schema objects that logically appears to a client as a non-CDB. With Oracle Multitenant, you have the option to consolidate standalone databases by either creating them as PDBs or migrating them to PDBs. Database consolidation can deliver improved resource utilization for DB instances, reduced administrative load, and potential reduction in total license requirements. To create a multitenant Amazon RDS for Oracle DB instance, simply create an Oracle DB instance in the AWS Management Console or using the AWS CLI, and specify the Oracle multitenant architecture and multitenant configuration. You may also convert an existing non-CDB instance to the CDB architecture, and then modify the instance to the multitenant configuration to enable it to hold multiple PDBs. Amazon RDS for Oracle DB instances are charged at the same rate whether the instance is a non-CDB or a CDB in either the single-tenant or multi-tenant configuration. Amazon RDS for Oracle allows you to set up, operate, and scale Oracle database deployments in the cloud. See Amazon RDS for Oracle Pricing for up-to-date pricing and regional availability.

Amazon Bedrock now supports compressed embeddings from Cohere Embed

Published Date: 2024-06-20 17:00:00

Amazon Bedrock now supports compressed embeddings (int8 and binary) from the Cohere Embed model, enabling developers and businesses to build more efficient generative AI applications without compromising on performance. Cohere Embed is a leading text embedding model. It is most frequently used to power Retrieval-Augmented Generation (RAG) and semantic search systems. The text embeddings output by the Cohere Embed model must be stored in a database with vector search capabilities, with storage costs being directly related to the dimensions of the embedding output as well as the number format precision. Cohere’s compression-aware model training techniques allow the model to output embeddings in binary and int8 precision formats, which are significantly smaller than the commonly used FP32 precision format, with minimal accuracy degradation. This unlocks the ability to run your enterprise search applications faster, cheaper, and more efficiently. int8 and binary embeddings are especially interesting for large, multi-tenancy setups, where the ability to search millions of embeddings within milliseconds is a critical business advantage. Cohere’s compressed embeddings allow you to build applications which are efficient enough to put into production at scale, accelerating your AI strategy to support your employees and customers. Cohere Embed int8 and binary embeddings are now available in Amazon Bedrock in all the AWS Regions where the Cohere Embed model is available. To learn more, read the Cohere in Amazon Bedrock product page, documentation, and Cohere launch blog. To get started with Cohere models in Amazon Bedrock, visit the Amazon Bedrock console.
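A hedged sketch of requesting compressed embeddings through boto3 is shown below; the request body follows the Cohere Embed v3 format on Bedrock, and the sample text is a placeholder.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

# embedding_types asks for compressed int8 and binary outputs instead of
# (or in addition to) float embeddings.
body = {
    "texts": ["What is the replication SLA for S3 RTC?"],
    "input_type": "search_query",
    "embedding_types": ["int8", "binary"],
}

response = bedrock.invoke_model(
    modelId="cohere.embed-english-v3",
    body=json.dumps(body),
)
result = json.loads(response["body"].read())
# Embeddings are returned keyed by the requested type, e.g. ['int8', 'binary'].
print(list(result["embeddings"].keys()))
```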

AWS CodeArtifact now supports Cargo, the Rust package manager

Published Date: 2024-06-20 17:00:00

Today, AWS announces the general availability of Cargo support in CodeArtifact. Crates, which are used to distribute Rust libraries, can now be stored in CodeArtifact. Cargo, the package manager for the Rust programming language, can be used to publish and download crates from CodeArtifact repositories. Developers can configure CodeArtifact to fetch crates from crates.io, the Rust community’s crate hosting service. When Cargo is connected to a CodeArtifact repository, CodeArtifact will automatically fetch requested crates from crates.io and store them in the CodeArtifact repository. By storing both private first-party crates and public, third-party crates in CodeArtifact, developers can access their critical application dependencies from a single source. CodeArtifact support for Cargo is available in all 13 CodeArtifact regions. To learn more, see AWS CodeArtifact.

AWS Compute Optimizer supports rightsizing recommendations for Amazon RDS MySQL and RDS PostgreSQL

Published Date: 2024-06-20 17:00:00

AWS Compute Optimizer now provides recommendations for Amazon RDS MySQL and RDS PostgreSQL DB instances and storage. These recommendations help you identify idle databases and choose the optimal DB instance class and provisioned IOPS settings, so you can reduce costs for over-provisioned workloads and increase the performance of under-provisioned workloads. AWS Compute Optimizer automatically discovers your Amazon RDS MySQL and RDS PostgreSQL DB instances and analyzes Amazon CloudWatch metrics such as CPU utilization, read and write IOPS, and database connections to generate recommendations. If you enable Amazon RDS Performance Insights on your DB instances, Compute Optimizer will analyze additional metrics such as DBLoad to give you more insights to choose the optimal DB instance configurations. With these metrics, Compute Optimizer delivers idle and rightsizing recommendations to help you optimize your RDS DB instances. This new feature is available in all AWS Regions where AWS Compute Optimizer is available except the AWS GovCloud (US) and the China Regions. To learn more about the new feature updates, please visit Compute Optimizer’s product page and user guide.

Amazon OpenSearch Service now supports JSON Web Token (JWT) authentication and authorization

Published Date: 2024-06-19 19:15:00

Amazon OpenSearch Service now supports JSON Web Token (JWT) authentication and authorization, which enables you to authenticate and authorize users without having to provide any credentials or use the internal user database. JWT support also makes it easy for customers to integrate with the identity provider of their choice and to isolate tenants in a multi-tenant application. Until now, Amazon OpenSearch Service allowed customers to implement client and user authentication using Amazon Cognito or basic authentication with the internal user database. With JWT support, customers can now use a single token, which any operator or external identity provider can use to authenticate requests to their Amazon OpenSearch Service cluster. Customers can set up JWT authentication using the console or CLI, as well as the create and update domain APIs.
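As a sketch, assuming the new settings surface as JWTOptions under AdvancedSecurityOptions in the UpdateDomainConfig API, enabling JWT authentication with boto3 might look like this; the domain name, claim keys, and signing public key are placeholders and should be confirmed against the Amazon OpenSearch Service API reference.

```python
import boto3

opensearch = boto3.client("opensearch")

opensearch.update_domain_config(
    DomainName="example-domain",
    AdvancedSecurityOptions={
        "JWTOptions": {
            "Enabled": True,
            "SubjectKey": "sub",    # claim that identifies the user
            "RolesKey": "roles",    # claim that carries backend roles
            # Public key used to verify the token signature (placeholder).
            "PublicKey": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
        }
    },
)
```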

Amazon SageMaker now offers a fully managed MLflow Capability

Published Date: 2024-06-19 17:00:00

Amazon SageMaker now offers a fully managed MLflow capability. Data scientists can use familiar MLflow constructs to organize, track, and analyze ML experiments, and administrators can set up MLflow with better scalability, availability, and security. MLflow is a popular open-source tool that helps customers manage ML experiments. Data scientists and ML engineers are already using MLflow with SageMaker; however, it required setting up, managing, and securing access to MLflow Tracking Servers. With this launch, SageMaker makes it easier for customers to set up and manage MLflow Tracking Servers with a couple of clicks. Customers can secure access to MLflow via AWS Identity and Access Management (IAM) roles. Data scientists can use the MLflow SDK to track experiments across local notebooks, IDEs, managed IDEs in SageMaker Studio, SageMaker Training Jobs, SageMaker Processing Jobs, and SageMaker Pipelines. Experimentation capabilities such as rich visualizations for run comparisons and model evaluations are available to help data scientists find the best training iteration. Models registered in MLflow automatically appear in the SageMaker Model Registry for a unified model governance experience, and customers can deploy MLflow Models to SageMaker Inference without building custom MLflow containers. The integration with SageMaker allows data scientists to easily track metrics during model training, ensuring reproducibility across different frameworks and environments.
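A minimal sketch of creating a managed tracking server with boto3, and then pointing the MLflow SDK at it, is shown below; the server name, artifact bucket, and IAM role are placeholders.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Placeholders for the tracking server name, artifact store, and role.
sagemaker.create_mlflow_tracking_server(
    TrackingServerName="example-tracking-server",
    ArtifactStoreUri="s3://example-bucket/mlflow-artifacts",
    RoleArn="arn:aws:iam::123456789012:role/example-mlflow-role",
    TrackingServerSize="Small",
)

# Once the server is active, data scientists point the MLflow SDK at its ARN
# (the sagemaker-mlflow plugin handles authentication), for example:
#   import mlflow
#   mlflow.set_tracking_uri(
#       "arn:aws:sagemaker:us-east-1:123456789012:mlflow-tracking-server/example-tracking-server")
#   with mlflow.start_run():
#       mlflow.log_metric("accuracy", 0.93)
```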

AWS Glue adds 13 new transforms, including flag duplicates

Published Date: 2024-06-19 17:00:00

AWS Glue now offers 13 new built-in transforms: Flag duplicates in column, Format Phone Number, Format case, Fill with mode, Flag duplicate rows, Remove duplicates, Month name, Is even, Cryptographic Hash, Decrypt, Encrypt, Int to IP, and IP to int. AWS Glue is a serverless data integration service that makes it easy for analytics users to discover, prepare, move, and integrate data from multiple sources. With these new transforms, ETL developers can quickly build more sophisticated data pipelines without having to write custom code for these common transform tasks. Each of these new transforms addresses a unique data processing need. For example, use Remove duplicates, Flag duplicates in column, or Flag duplicate rows to highlight or remove duplicate rows within your dataset; use Cryptographic Hash to apply an algorithm to hash values in a column; encrypt values in the source columns with the Encrypt transform; or decrypt those columns with the Decrypt transform. The new transforms are available for code-based jobs.

Announcing support for Autodesk 3ds Max Usage-Based Licensing in AWS Deadline Cloud

Published Date: 2024-06-19 17:00:00

The AWS Deadline Cloud Usage-Based Licensing (UBL) server now offers on-demand licenses for Autodesk 3ds Max, a popular software for 3D modeling, animation, and digital imagery. This addition joins other supported digital content creation tools such as Autodesk Arnold, Autodesk Maya, Foundry Nuke, and SideFX Houdini. With Deadline Cloud UBL, you only pay for use of the software during the processing of jobs. With this release, customers can integrate 3ds Max licensing into their workflows by adding it to their license endpoints. Once configured, 3ds Max license traffic can be routed to the appropriate license endpoint, enabling seamless access and pay-as-you-go usage. This feature is currently available in the Deadline Cloud Customer-Managed fleet deployment option. For more information, please visit the Deadline Cloud product page, and see the Deadline Cloud pricing page for UBL price details.

AWS Elemental MediaConnect adds source stream monitoring

Published Date: 2024-06-19 17:00:00

AWS Elemental MediaConnect now provides information about the incoming transport stream and its program media. You can view transport stream information such as program numbers, stream types, codecs, and packet identifiers (PIDs) for video, audio, and data streams in the console or via the MediaConnect API. With this new feature you can more accurately identify and resolve issues, minimizing disruptions to your live broadcasts. To learn more about monitoring source streams, visit the AWS Elemental MediaConnect documentation page. AWS Elemental MediaConnect is a reliable, secure, and flexible transport service for live video that enables broadcasters and content owners to build live video workflows and securely share live content with partners and customers. MediaConnect helps customers transport high-value live video streams into, through, and out of the AWS Cloud. MediaConnect can function as a standalone service or as part of a larger video workflow with other AWS Elemental Media Services, a family of services that form the foundation of cloud-based workflows to transport, transcode, package, and deliver video. Visit the AWS Region Table for a full list of AWS Regions where MediaConnect is available. To learn more about MediaConnect, please visit here.

Amazon CodeCatalyst now supports GitHub Cloud and Bitbucket Cloud with Amazon Q

Published Date: 2024-06-19 17:00:00

Amazon CodeCatalyst now supports the use of source code repositories hosted in GitHub Cloud and Bitbucket Cloud with Amazon Q for feature development. Customers can now assign issues in CodeCatalyst to Amazon Q and direct it to work with source code hosted in GitHub Cloud and Bitbucket Cloud. Using Amazon Q, you can go from an issue all the way to merge-ready code in a pull request. Amazon Q analyzes the issue and existing source code, creates a plan, and then generates source code in a pull request. Before, customers could only use source code repositories hosted in CodeCatalyst with this capability. Now, customers can use source code repositories hosted in GitHub Cloud or Bitbucket Cloud.

This capability is available in US West (Oregon). There is no change to pricing.

For more information, see the documentation or visit the Amazon CodeCatalyst website .

CodeCatalyst allows customers to use Amazon Q Developer to choose a blueprint

Published Date: 2024-06-18 17:00:00

Today, AWS announces the general availability of a new capability of Amazon Q Developer in Amazon CodeCatalyst. Customers can now use Amazon Q to help them pick the best blueprint for their needs when getting started with a new project or on an existing project. Before, customers had to read through the descriptions of available blueprints to try and pick the best match. Now customers can describe what they want to create and receive direct guidance about which blueprint to pick for their needs. Amazon Q will also create an issue in the project for each requirement that isn’t included in the resources created by the blueprint. Users can then customize their project by assigning those issues to developers to add that functionality. They can even choose to assign these issues to Amazon Q itself, which will then attempt to create code to solve the problem. Customers can use blueprints to create projects in CodeCatalyst that include resources, such as a source repository with sample code, CI/CD workflows that build and test your code, and integrated issue tracking tools. Customers can now use Amazon Q to help them create projects or add functionality to existing projects with blueprints. If the space has custom blueprints, Amazon Q Developer will learn and include these in its recommendations. For more information, see the documentation or visit the Amazon CodeCatalyst website. This capability is available in Regions where CodeCatalyst and Amazon Bedrock are available. There is no change to pricing.

AWS Glue Usage Profiles is now generally available

Published Date: 2024-06-18 17:00:00

Today, AWS announces the general availability of AWS Glue Usage Profiles, a new cost control capability that allows admins to set preventative controls and limits over resources consumed by their Glue jobs and notebook sessions. With AWS Glue Usage Profiles, admins can create different cost profiles for different classes of users. Each profile is a unique set of parameters that can be assigned to different types of users. For example, a cost profile for a data engineer working on a production pipeline could have an unrestricted number of workers, whereas the cost profile for a test user could have a restricted number of workers. You can get started by creating a new usage profile with the AWS Glue Studio console or by using the Glue Usage Profiles APIs. Next, you assign that profile to an IAM user or role. After following these steps, all new Glue jobs or sessions created with that particular IAM user or role will have the limits specified in the assigned usage profile.
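A hedged sketch of creating a restricted profile with boto3 follows; the configuration shape and parameter keys (for example numberOfWorkers) are our reading of the announcement and should be verified against the CreateUsageProfile API reference.

```python
import boto3

glue = boto3.client("glue")

# Restricted profile for test users; values are illustrative strings.
glue.create_usage_profile(
    Name="test-user-profile",
    Description="Caps worker counts for non-production Glue jobs and sessions",
    Configuration={
        "JobConfiguration": {
            "numberOfWorkers": {"DefaultValue": "2", "MaxValue": "10"},
        },
        "SessionConfiguration": {
            "numberOfWorkers": {"DefaultValue": "2", "MaxValue": "5"},
            "idleTimeout": {"DefaultValue": "30", "MaxValue": "60"},
        },
    },
)
# The profile is then assigned to an IAM user or role as described in the
# announcement, so jobs and sessions created by that principal inherit the limits.
```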

Amazon MWAA now supports Custom Web Server URLs

Published Date: 2024-06-18 17:00:00

Amazon Managed Workflows for Apache Airflow (MWAA) now supports custom domain names for the Airflow web server, simplifying access to the Airflow user interface. Amazon MWAA is a managed service for Apache Airflow that lets you use the same familiar Apache Airflow platform as you do today to orchestrate your workflows and enjoy improved scalability, availability, and security without the operational burden of having to manage the underlying infrastructure. Amazon MWAA now adds the ability to customize the redirection URL that MWAA’s single sign-on (SSO) uses after authenticating the user against their IAM credentials. This allows customers that use private web servers with load balancers, custom DNS entries, or proxies to point users to a user-friendly web address while maintaining the simplicity of MWAA’s IAM integration. You can launch or upgrade an Apache Airflow environment with a custom URL on Amazon MWAA with just a few clicks in the AWS Management Console in all currently supported Amazon MWAA Regions. To learn more about custom domains, visit the Amazon MWAA documentation. Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

Amazon EC2 D3 instances are now available in Europe (Paris) region

Published Date: 2024-06-18 17:00:00

Starting today, Amazon EC2 D3 instances, the latest generation of dense HDD-storage instances, are available in the Europe (Paris) Region. Amazon EC2 D3 instances are powered by 2nd generation Intel Xeon Scalable processors (Cascade Lake) and provide up to 48 TB of local HDD storage. D3 instances are ideal for workloads including distributed/clustered file systems, big data and analytics, and high-capacity data lakes. With D3 instances, you can easily migrate from previous-generation D2 instances or on-premises infrastructure to a platform optimized for dense HDD storage workloads. D3 instances are offered in 4 sizes: xlarge, 2xlarge, 4xlarge, and 8xlarge. D3 is available for purchase with Savings Plans, Reserved Instances, Convertible Reserved Instances, On-Demand, and Spot Instances, or as Dedicated Instances. To get started with D3 instances, visit the AWS Management Console, AWS Command Line Interface (CLI), or AWS SDKs. To learn more, visit the EC2 D3 instances page.

Amazon DataZone launches custom blueprint configurations for AWS services

Published Date: 2024-06-18 17:00:00

Amazon DataZone launches custom blueprint configurations for AWS services, allowing customers to optimize resource usage and costs by using existing AWS Identity and Access Management (IAM) roles and/or AWS services, such as Amazon S3. Amazon DataZone is a data management service for customers to catalog, discover, share, and govern data at scale across organizational boundaries with governance and access controls. Amazon DataZone’s blueprints can help administrators define which AWS tools and services will be deployed for data producers like data engineers or data consumers like data scientists, simplifying access to data and increasing collaboration among project members. Custom blueprints for AWS services add to the family of Amazon DataZone blueprints, including the data lake, data warehouse, and Amazon SageMaker blueprints. With custom blueprints, administrators can include Amazon DataZone in their data pipelines by using existing IAM roles to publish existing data assets owned by those roles to the catalog, thereby establishing governed sharing of those data assets and enhancing governance across the entire infrastructure.

Amazon EC2 C7g and R7g instances are now available in additional regions

Published Date: 2024-06-18 17:00:00

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7g and R7g instances are now available in the Europe (Milan), Asia Pacific (Hong Kong), and South America (São Paulo) Regions. These instances are powered by AWS Graviton3 processors that provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on top of the AWS Nitro System, a collection of AWS-designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage. Amazon EC2 Graviton3 instances also use up to 60% less energy than comparable EC2 instances for the same performance, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (EBS).

Amazon EC2 C7g and R7g instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Canada (Central), Asia Pacific (Hyderabad, Hong Kong, Mumbai, Seoul, Singapore, Sydney, Tokyo), China (Beijing, Ningxia), Europe (Frankfurt, Ireland, London, Milan, Spain, Stockholm), and South America (São Paulo). To learn more, see Amazon EC2 C7g and R7g. To learn how to migrate your workloads to AWS Graviton-based instances, see the AWS Graviton Fast Start Program.

Amazon Connect Cases is now available in additional Asia Pacific regions

Published Date: 2024-06-18 17:00:00

Amazon Connect Cases is now available in the Asia Pacific (Seoul) and Asia Pacific (Tokyo) AWS Regions. Amazon Connect Cases provides built-in case management capabilities that make it easy for your contact center agents to create, collaborate on, and quickly resolve customer issues that require multiple customer conversations and follow-up tasks.

Amazon Redshift Query Editor V2 now supports 100MB file uploads

Published Date: 2024-06-18 17:00:00

Amazon Redshift Query Editor V2 now supports uploading local files up to 100MB in size when loading data into your Amazon Redshift databases. This increased file size limit provides more flexibility for ingesting larger datasets directly from your local environment. With the new 100MB file size limit, data analysts, engineers, and developers can now load larger datasets from local files into their Redshift clusters or workgroups using Query Editor V2. This enhancement is particularly beneficial when working with CSV, JSON, or other structured data files that previously exceeded the 5MB limit. By streamlining the upload process for sizeable local files, you can expedite data ingestion and analysis workflows on Amazon Redshift. To learn more, see the Amazon Redshift documentation.

Amazon OpenSearch Serverless now available in South America (Sao Paulo) region

Published Date: 2024-06-18 17:00:00

We are excited to announce the availability of Amazon OpenSearch Serverless in the South America (São Paulo) Region. OpenSearch Serverless is a serverless deployment option for Amazon OpenSearch Service that makes it simple to run search and analytics workloads without the complexities of infrastructure management. OpenSearch Serverless automatically provisions and scales resources to provide consistently fast data ingestion rates and millisecond response times during changing usage patterns and application demand. With support in the South America (São Paulo) Region, OpenSearch Serverless is now available in 12 Regions globally: US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (London), Asia Pacific (Mumbai), and South America (São Paulo). Please refer to the AWS Regional Services List for more information about Amazon OpenSearch Service availability. To learn more about OpenSearch Serverless, see the documentation.

Introducing Maven, Python, and NuGet support in Amazon CodeCatalyst package repositories

Published Date: 2024-06-18 17:00:00

Today, AWS announces support for the Maven, Python, and NuGet package formats in Amazon CodeCatalyst package repositories. CodeCatalyst customers can now securely store, publish, and share Maven, Python, and NuGet packages, using popular package managers such as mvn, pip, and nuget. Through your CodeCatalyst package repositories, you can also access open-source packages from 6 additional public package registries. Your packages remain available for your development teams should public packages and registries become unavailable from other service providers.

Amazon Kinesis Video Streams is now available in AWS GovCloud (US) Regions

Published Date: 2024-06-18 17:00:00

Amazon Kinesis Video Streams is now available in the AWS GovCloud (US-East and US-West) Regions. Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Amazon Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming video data from millions of devices. It durably stores, encrypts, and indexes video data in your streams, and allows you to access your data through easy-to-use APIs. Kinesis Video Streams enables you to play back video for live and on-demand viewing, and quickly build applications that take advantage of computer vision and video analytics through integration with Amazon Rekognition Video and Amazon SageMaker.

For more information, please visit the Amazon Kinesis Video Streams product page, and see the AWS Region table for complete regional availability information. Note that Amazon Kinesis Video Streams with WebRTC is not yet available in the AWS GovCloud (US) Regions.

Amazon Redshift announces support for VARBYTE 16MB data type

Published Date: 2024-06-18 17:00:00

Amazon Redshift has extended the VARBYTE data type from the previous 1,024,000-byte maximum size (see the VARBYTE What’s New announcement from December 2021) to a 16,777,216-byte maximum size. VARBYTE is a variable-size data type for storing and representing variable-length binary strings. With this announcement, Amazon Redshift supports all existing VARBYTE functionality with 16 MB VARBYTE values. The VARBYTE data type can now ingest data larger than 1,024,000 bytes from Parquet, CSV, and text file formats. The default size for a VARBYTE(n) column (if n is not specified) remains 64,000 bytes. VARBYTE 16 MB support is now available in all commercial AWS Regions. Refer to the AWS Region Table for Amazon Redshift availability. For more information or to get started with the Amazon Redshift VARBYTE data type, see the documentation.
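For illustration, a table with a VARBYTE column declared at the new maximum can be created from Python with the redshift_connector driver; the connection details are placeholders.

```python
import redshift_connector  # pip install redshift_connector

# Placeholder connection details for a Redshift cluster or workgroup.
conn = redshift_connector.connect(
    host="example-workgroup.123456789012.us-east-1.redshift-serverless.amazonaws.com",
    database="dev",
    user="awsuser",
    password="example-password",
)

cur = conn.cursor()
# VARBYTE column declared at the new 16 MB (16,777,216 byte) maximum.
cur.execute("CREATE TABLE IF NOT EXISTS blobs (id INT, payload VARBYTE(16777216))")
# FROM_HEX converts a hexadecimal string into a VARBYTE value.
cur.execute("INSERT INTO blobs VALUES (1, FROM_HEX('DEADBEEF'))")
conn.commit()
```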

Amazon CodeCatalyst now offers a capability to analyze issues and recommend granular tasks

Published Date: 2024-06-18 17:00:00

Amazon CodeCatalyst now offers a new capability powered by Amazon Q to help customers analyze issues and recommend granular tasks. These tasks can then be individually assigned to users or to Amazon Q itself, helping you accelerate work. Before, customers could create issues to track work that needs to be done on a project, and they needed to manually create more granular tasks that could be assigned to others on the team. Now customers can ask Amazon Q to analyze an issue for complexity and suggest ways of breaking up the work into individual tasks. This capability is available in the US West (Oregon) Region. For more information, see the documentation or visit the Amazon CodeCatalyst website.

AWS Glue serverless Spark UI now supports rolling log files

Published Date: 2024-06-18 17:00:00

Today, AWS announces rolling log file support for the AWS Glue serverless Apache Spark UI. The serverless Spark UI enables you to get detailed information about your AWS Glue Spark jobs. With rolling log support, you can use the AWS Glue serverless Spark UI to see detailed information for long-running batch or streaming jobs. Rolling log files enable you to monitor and debug large batch and streaming Glue jobs.

Amazon RDS for MariaDB supports minor versions 10.11.8, 10.6.18, 10.5.25, and 10.4.34

Published Date: 2024-06-17 17:00:00

Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.11.8, 10.6.18, 10.5.25, and 10.4.34. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community. You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also leverage Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MariaDB instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide. Amazon RDS for MariaDB makes it straightforward to set up, operate, and scale MariaDB deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MariaDB. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.

AWS Systems Manager now supports additional Rocky, Oracle, and Alma Linux versions

Published Date: 2024-06-17 17:00:00

AWS Systems Manager now supports instances running Rocky Linux, Alma Linux, and Oracle Linux versions 8.8 and 8.9. Systems Manager customers running these operating systems versions now have access to all AWS Systems Manager Node Management capabilities, including Fleet Manager, Compliance, Inventory, Hybrid Activations, Session Manager, Run Command, State Manager, Patch Manager, and Distributor. For a full list of supported operating systems and machine types for AWS Systems Manager, see the user guide. Patch Manager enables you to automatically patch instances with both security-related and other types of updates across your infrastructure for a variety of common operating systems, including Windows Server, Amazon Linux, and Red Hat Enterprise Linux (RHEL). For a full list of supported operating systems for AWS Systems Manager Patch Manager, see the Patch Manager prerequisites user guide page. This feature is available in all AWS Regions where AWS Systems Manager is available. For more information, visit the Systems Manager product page and Systems Manager documentation.

Amazon CodeCatalyst now offers the ability to link issues

Published Date: 2024-06-17 17:00:00

Amazon CodeCatalyst now offers the ability to link an issue to other issues. This allows customers to link issues in CodeCatalyst as blocked by, duplicate of, related to, or blocks another issue. Customers use CodeCatalyst issues to organize and coordinate their team's daily work. In addition, customers want to identify and visualize relationships between issues to plan work effectively. The new capability helps teams visualize dependencies between issues and see which issue is blocked by another issue, is a duplicate of another issue, or blocks other issues.

AWS KMS now supports Elliptic Curve Diffie-Hellman (ECDH) key agreement

Published Date: 2024-06-17 17:00:00

The Elliptic Curve Diffie-Hellman (ECDH) key agreement enables two parties to establish a shared secret over a public channel. With this new feature, you can take another party’s public key and your own elliptic-curve KMS key that’s inside AWS Key Management Service (KMS) to derive a shared secret within the security boundary of FIPS 140-2 validated KMS hardware security modules (HSMs). This shared secret can then be used to derive a symmetric key to encrypt and decrypt data between the two parties using a symmetric encryption algorithm within your application. You can use this feature directly within your own applications by calling the DeriveSharedSecret KMS API, or by using the latest version of the AWS Encryption SDK, which supports an ECDH keyring. The AWS Encryption SDK provides a simple interface for encrypting and decrypting data using a shared secret, automatically handling the key derivation and encryption process for you. In addition, ECDH key agreement can be an important building block for hybrid encryption schemes, or for seeding a secret inside remote devices and isolated compute environments like AWS Nitro Enclaves. This new feature is available in all AWS Regions, including the AWS GovCloud (US) Regions. To learn more about this new capability, see the DeriveSharedSecret KMS API in the AWS KMS API Reference.
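A minimal sketch of calling the new API with boto3 is shown below; the key ARN is a placeholder for an elliptic-curve KMS key created for key agreement, and the other party's public key is assumed to be DER-encoded bytes you already hold.

```python
import boto3

kms = boto3.client("kms")

# Placeholder: the DER-encoded public key received from the other party.
other_party_public_key = b"..."

response = kms.derive_shared_secret(
    KeyId="arn:aws:kms:us-east-1:123456789012:key/11111111-2222-3333-4444-555555555555",
    KeyAgreementAlgorithm="ECDH",
    PublicKey=other_party_public_key,
)

# Feed the raw shared secret into a key-derivation function (e.g. HKDF) to
# produce a symmetric data key; the AWS Encryption SDK's ECDH keyring does
# this step for you.
shared_secret = response["SharedSecret"]
```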

AWS CodeBuild now supports organization and global GitHub webhooks

Published Date: 2024-06-17 17:00:00

AWS CodeBuild now supports organization and global webhooks for GitHub and GitHub Enterprise Server. CodeBuild webhooks automatically detect changes in your repositories and trigger new builds whenever webhook events are received. These events include GitHub Actions workflow runs, commit pushes, releases, and pull requests. With this feature, you can now configure a single CodeBuild webhook at the organization or enterprise level to receive events from all repositories in your organization, instead of creating webhooks for each individual repository. For managed GitHub Actions self-hosted runners, this feature provides a centralized control mechanism, as you can set up the runner environment at the organization or enterprise level and use the same runners across all your repositories. This feature is available in all Regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page. To get started, set up organization or global webhooks in CodeBuild projects, and use them to run GitHub Actions workflow jobs or trigger builds upon push or pull request events. To learn more about using managed GitHub Actions self-hosted runners, see CodeBuild’s blog post.
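A hedged sketch of creating an organization-scoped webhook with boto3 follows; the scopeConfiguration shape and the WORKFLOW_JOB_QUEUED filter reflect our reading of the announcement and should be verified against the CreateWebhook API reference, and the project and organization names are placeholders.

```python
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_webhook(
    projectName="example-runner-project",
    # Trigger builds when GitHub Actions runner jobs are queued anywhere
    # in the organization.
    filterGroups=[
        [{"type": "EVENT", "pattern": "WORKFLOW_JOB_QUEUED"}]
    ],
    scopeConfiguration={
        "name": "example-github-org",   # GitHub organization name
        "scope": "GITHUB_ORGANIZATION",
    },
)
```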

Amazon EC2 C7i-flex instances are now available in US East (Ohio) Region

Published Date: 2024-06-17 17:00:00

Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i-flex instances, which deliver up to 19% better price performance compared to C6i instances, are available in the US East (Ohio) Region. C7i-flex instances expand the EC2 Flex instances portfolio to provide the easiest way for you to get price performance benefits for a majority of compute-intensive workloads. The new instances are powered by 4th generation Intel Xeon Scalable custom processors (Sapphire Rapids) that are available only on AWS, and offer 5% lower prices compared to C7i. C7i-flex instances offer the most common sizes, from large to 8xlarge, and are a great first choice for applications that don't fully utilize all compute resources. With C7i-flex instances, you can seamlessly run web and application servers, databases, caches, Apache Kafka, Elasticsearch, and more. For compute-intensive workloads that need larger instance sizes (up to 192 vCPUs and 384 GiB memory) or continuous high CPU usage, you can leverage C7i instances. C7i-flex instances are available in the following AWS Regions: US East (Ohio), US West (N. California), Europe (Ireland, London, Paris, Spain, Stockholm), Canada (Central), Asia Pacific (Mumbai, Singapore), and South America (São Paulo). To learn more, visit Amazon EC2 C7i-flex instances.
