Week 49 (2 Dec - 8 Dec)

Amazon EC2 Hpc6id instances are now available in Europe (Paris) region

Published Date: 2024-12-06 22:30:00

Starting today, Amazon EC2 Hpc6id instances are available in an additional AWS Region, Europe (Paris). These instances are optimized to efficiently run memory bandwidth-bound, data-intensive high performance computing (HPC) workloads, such as finite element analysis and seismic reservoir simulations. With EC2 Hpc6id instances, you can lower the cost of your HPC workloads while taking advantage of the elasticity and scalability of AWS. EC2 Hpc6id instances are powered by 64 cores of 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.5 GHz, 1,024 GB of memory, and up to 15.2 TB of local NVMe solid state drive (SSD) storage. EC2 Hpc6id instances, built on the AWS Nitro System, offer 200 Gbps Elastic Fabric Adapter (EFA) networking for high-throughput inter-node communications that enable your HPC workloads to run at scale. The AWS Nitro System is a rich collection of building blocks that offloads many of the traditional virtualization functions to dedicated hardware and software. It delivers high performance, high availability, and high security while reducing virtualization overhead. To learn more about EC2 Hpc6id instances, see the product detail page.

Amazon EC2 Hpc7a instances are now available in Europe (Paris) region

Published Date: 2024-12-06 22:30:00

Starting today, Amazon EC2 Hpc7a instances are available in an additional AWS Region, Europe (Paris). EC2 Hpc7a instances are powered by 4th generation AMD EPYC processors with up to 192 cores, and 300 Gbps of Elastic Fabric Adapter (EFA) network bandwidth for fast and low-latency internode communications. Hpc7a instances feature Double Data Rate 5 (DDR5) memory, which enables high-speed access to data in memory. Hpc7a instances are ideal for compute-intensive, tightly coupled, latency-sensitive high performance computing (HPC) workloads, such as computational fluid dynamics (CFD), weather forecasting, and multiphysics simulations, helping you scale more efficiently on fewer nodes. To optimize HPC instance networking for tightly coupled workloads, you can access these instances in a single Availability Zone within a Region. To learn more, see Amazon EC2 Hpc7a instances.

Amazon Aurora now available as a quick create vector store in Amazon Bedrock Knowledge Bases

Published Date: 2024-12-06 21:10:00

Amazon Aurora PostgreSQL is now available as a quick create vector store in Amazon Bedrock Knowledge Bases. With the new Aurora quick create option, developers and data scientists building generative AI applications can select Aurora PostgreSQL as their vector store with one click to deploy an Aurora Serverless cluster preconfigured with pgvector in minutes. Aurora Serverless is an on-demand, autoscaling configuration where capacity is adjusted automatically based on application demand, making it ideal as a developer vector store. Knowledge Bases securely connects foundation models (FMs) running in Bedrock to your company data sources for Retrieval Augmented Generation (RAG) to deliver more relevant, context-specific, and accurate responses that make your FM more knowledgeable about your business. To implement RAG, organizations must convert data into embeddings (vectors) and store these embeddings in a vector store for similarity search in generative artificial intelligence (AI) applications. Aurora PostgreSQL, with the pgvector extension, has been supported as a vector store in Knowledge Bases for existing Aurora databases. With the new quick create integration with Knowledge Bases, Aurora is now easier to set up as a vector store for use with Bedrock. The quick create option in Bedrock Knowledge Bases is available in all AWS Regions where Bedrock Knowledge Bases is supported, with the exception of AWS GovCloud (US-West), which is planned for Q4 2024. To learn more about RAG with Amazon Bedrock and Aurora, see Amazon Bedrock Knowledge Bases. Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started using Amazon Aurora PostgreSQL as a vector store for Amazon Bedrock Knowledge Bases, take a look at our documentation.
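The retrieval step a pgvector-backed knowledge base performs can be illustrated with a minimal local sketch. The documents, embeddings, and query vector below are toy values, not outputs of a real embedding model; pgvector performs the equivalent ranking inside Aurora PostgreSQL:

```python
# Conceptual sketch of the similarity search pgvector runs during RAG
# retrieval. Embeddings are hand-written toy vectors, not model outputs.
import math

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

documents = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.2],
    "warranty terms": [0.8, 0.2, 0.1],
}

# Toy embedding of the user question "how do I get my money back?"
query = [0.85, 0.15, 0.05]

# Rank documents by ascending cosine distance; the nearest chunk is the
# context Knowledge Bases would pass to the foundation model.
ranked = sorted(documents, key=lambda d: cosine_distance(query, documents[d]))
print(ranked[0])
```

In Aurora itself, this ranking is a single SQL query using pgvector's distance operators over an indexed vector column; the quick create option provisions that schema for you.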

SageMaker SDK enhances training and inference workflows

Published Date: 2024-12-06 18:00:00

Today, we are introducing the new ModelTrainer class and enhancing the ModelBuilder class in the SageMaker Python SDK. These updates streamline training workflows and simplify inference deployments. The ModelTrainer class enables customers to easily set up and customize distributed training strategies on Amazon SageMaker. This new feature accelerates model training times, optimizes resource utilization, and reduces costs through efficient parallel processing. Customers can smoothly transition their custom entry points and containers from a local environment to SageMaker, eliminating the need to manage infrastructure. ModelTrainer simplifies configuration by reducing parameters to just a few core variables and providing user-friendly classes for intuitive SageMaker service interactions. Additionally, with the enhanced ModelBuilder class, customers can now easily deploy HuggingFace models, switch between developing in a local environment and on SageMaker, and customize their inference using their pre- and post-processing scripts. Importantly, customers can now easily pass the trained model artifacts from the ModelTrainer class to the ModelBuilder class, enabling a seamless transition from training to inference on SageMaker. You can learn more about the ModelTrainer class here, ModelBuilder enhancements here, and get started using the ModelTrainer and ModelBuilder sample notebooks.

Amazon SageMaker introduces new capabilities to accelerate scaling of Generative AI Inference

Published Date: 2024-12-06 18:00:00

We are excited to announce two new capabilities in SageMaker Inference that significantly enhance the deployment and scaling of generative AI models: Container Caching and Fast Model Loader. These innovations address critical challenges in scaling large language models (LLMs) efficiently, enabling faster response times to traffic spikes and more cost-effective scaling. By reducing model loading times and accelerating autoscaling, these features allow customers to improve the responsiveness of their generative AI applications as demand fluctuates, particularly benefiting services with dynamic traffic patterns. Container Caching dramatically reduces the time required to scale generative AI models for inference by pre-caching container images. This eliminates the need to download them when scaling up, resulting in significant reduction in scaling time for generative AI model endpoints. Fast Model Loader streams model weights directly from Amazon S3 to the accelerator, loading models much faster compared to traditional methods. These capabilities allow customers to create more responsive auto-scaling policies, enabling SageMaker to add new instances or model copies quickly when defined thresholds are reached, thus maintaining optimal performance during traffic spikes while at the same time managing costs effectively. These new capabilities are accessible in all AWS regions where Amazon SageMaker Inference is available. To learn more, see our documentation for detailed implementation guidance.
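The benefit of streaming weights rather than downloading and then loading them can be shown with a toy timing model. All the numbers below are illustrative assumptions, not measured SageMaker figures:

```python
# Toy timing model of why streaming weights from S3 to the accelerator
# (the Fast Model Loader approach) beats download-then-load.
# Chunk counts and per-chunk times are invented for illustration.
chunks = 8                  # weights split into 8 equal chunks
download_per_chunk = 10.0   # seconds to fetch one chunk from S3
load_per_chunk = 4.0        # seconds to copy one chunk into accelerator memory

# Sequential: fetch everything, then load everything.
sequential = chunks * download_per_chunk + chunks * load_per_chunk

# Streamed: each chunk is copied to the accelerator while the next chunk
# downloads, so all but the final load is hidden behind network transfer.
streamed = chunks * download_per_chunk + load_per_chunk

print(sequential, streamed)
```

The larger the model, the more load time the pipeline hides, which is why the gain matters most for LLM-sized weights.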

Amazon CloudWatch now provides centralized visibility into telemetry configurations

Published Date: 2024-12-06 18:00:00

Amazon CloudWatch now offers centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. This enhanced visibility enables central DevOps teams, system administrators, and service teams to identify potential gaps in their infrastructure monitoring setup. The telemetry configuration auditing experience seamlessly integrates with AWS Config to discover AWS resources, and can be turned on for the entire organization using the new AWS Organizations integration with Amazon CloudWatch. With visibility into telemetry configurations, you can identify monitoring gaps that might have been missed in your current setup. For example, this helps you identify gaps in your EC2 detailed metrics so that you can address them and easily detect short-lived performance spikes and build responsive auto-scaling policies. You can audit telemetry configuration coverage at both resource type and individual resource levels, refining the view by filtering across specific accounts, resource types, or resource tags to focus on critical resources. The telemetry configurations auditing experience is available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. There is no additional cost to turn on the new experience, including for AWS Config. You can get started with auditing your telemetry configurations using the Amazon CloudWatch Console, by clicking on Telemetry config in the navigation panel, or programmatically using the API/CLI. To learn more, visit our documentation.
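The kind of gap report this auditing experience surfaces can be sketched locally: given a resource inventory (which AWS Config would normally discover), flag every resource missing the telemetry expected for its type. The inventory, resource IDs, and required-telemetry mapping below are invented stand-ins:

```python
# Minimal sketch of a telemetry-configuration gap audit: which resources
# lack the telemetry expected for their type. Hand-written stand-in data.
inventory = [
    {"id": "i-0abc", "type": "AWS::EC2::Instance", "detailed_metrics": True},
    {"id": "i-0def", "type": "AWS::EC2::Instance", "detailed_metrics": False},
    {"id": "vpc-01", "type": "AWS::EC2::VPC", "flow_logs": True},
    {"id": "vpc-02", "type": "AWS::EC2::VPC", "flow_logs": False},
]

# Expected telemetry per resource type (illustrative policy, not an AWS API).
required = {
    "AWS::EC2::Instance": "detailed_metrics",
    "AWS::EC2::VPC": "flow_logs",
}

# Resources whose expected telemetry is absent or disabled.
gaps = [r["id"] for r in inventory if not r.get(required[r["type"]], False)]
print(gaps)
```

The console experience adds filtering by account, resource type, and tags on top of this basic coverage check.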

AWS Config now supports a service-linked recorder

Published Date: 2024-12-06 18:00:00

AWS Config added support for a service-linked recorder, a new type of AWS Config recorder that is managed by an AWS service and can record configuration data on service-specific resources, such as the new Amazon CloudWatch telemetry configurations audit. By enabling the service-linked recorder in Amazon CloudWatch, you gain centralized visibility into critical AWS service telemetry configurations, such as Amazon VPC Flow Logs, Amazon EC2 Detailed Metrics, and AWS Lambda Traces. With service-linked recorders, an AWS service can deploy and manage an AWS Config recorder on your behalf to discover resources and utilize the configuration data to provide differentiated features. For example, an Amazon CloudWatch managed service-linked recorder helps you identify monitoring gaps within specific critical resources within your organization, providing a centralized, single-pane view of telemetry configuration status. Service-linked recorders are immutable to ensure consistency, prevent configuration drift, and simplify the experience. Service-linked recorders operate independently of any existing AWS Config recorder, if one is enabled. This allows you to independently manage your AWS Config recorder for your specific use cases while authorized AWS services can manage the service-linked recorder for feature-specific requirements. Amazon CloudWatch managed service-linked recorder is now available in US East (N. Virginia), US West (Oregon), US East (Ohio), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) regions. The AWS Config service-linked recorder specific to the Amazon CloudWatch telemetry configuration feature is available to customers at no additional cost. To learn more, please refer to our documentation.

Amazon RDS Performance Insights extends On-demand Analysis to new regions

Published Date: 2024-12-06 18:00:00

Amazon RDS (Relational Database Service) Performance Insights expands the availability of its on-demand analysis experience to 15 new regions. This feature is available for Aurora MySQL, Aurora PostgreSQL, and RDS for PostgreSQL engines. This on-demand analysis experience, which was previously available in only 15 regions, is now available in all commercial regions. This feature allows you to analyze Performance Insights data for a time period of your choice. You can learn how the selected time period differs from normal, what went wrong, and get advice on corrective actions. Through simple-to-understand graphs and explanations, you can identify the chief contributors to performance issues. You will also get guidance on next steps to act on these issues. This can reduce the mean-time-to-diagnosis for database performance issues from hours to minutes. Amazon RDS Performance Insights is a database performance tuning and monitoring feature of RDS that allows you to visually assess the load on your database and determine when and where to take action. With one click in the Amazon RDS Management Console, you can add a fully-managed performance monitoring solution to your Amazon RDS database. To learn more about RDS Performance Insights, read the Amazon RDS User Guide and visit Performance Insights pricing for pricing details and region availability.

Amazon Bedrock Knowledge Bases now supports GraphRAG (preview)

Published Date: 2024-12-04 18:00:00

Today, we are announcing support for GraphRAG, a new capability in Amazon Bedrock Knowledge Bases that enhances Generative AI applications by providing more comprehensive, relevant, and explainable responses using RAG techniques combined with graph data. Amazon Bedrock Knowledge Bases offers fully-managed, end-to-end Retrieval-Augmented Generation (RAG) workflows to create highly accurate, low latency, and custom Generative AI applications by incorporating contextual information from your company's data sources. Amazon Bedrock Knowledge Bases now offers a fully-managed GraphRAG capability with Amazon Neptune Analytics. Previously, customers faced challenges in conducting exhaustive, multi-step searches across disparate content. By identifying key entities across documents, GraphRAG delivers insights that leverage relationships within the data, enabling improved responses to end users. For example, users can ask a travel application for family-friendly beach destinations with direct flights and good seafood restaurants. Developers building Generative AI applications can enable GraphRAG in just a few clicks by specifying their data sources and choosing Amazon Neptune Analytics as their vector store when creating a knowledge base. This will automatically generate and store vector embeddings in Amazon Neptune Analytics, along with a graph representation of entities and their relationships. GraphRAG with Amazon Neptune is built right into Amazon Bedrock Knowledge Bases, offering an integrated experience with no additional setup or additional charges beyond the underlying services. GraphRAG is available in AWS Regions where Amazon Bedrock Knowledge Bases and Amazon Neptune Analytics are both available (see current list of supported regions). To learn more, visit the Amazon Bedrock User Guide.
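The core GraphRAG idea — a vector search finds an entry point, then the entity graph pulls in related entities a pure vector match would miss — can be sketched with a toy graph. The entities and edges below are invented stand-ins for what Amazon Neptune Analytics would store:

```python
# Conceptual sketch of GraphRAG retrieval: expand from a vector-search hit
# along entity relationships. Toy data, not a real Neptune Analytics graph.
graph = {
    "Bali": {"direct_flights", "family_friendly"},
    "direct_flights": {"Bali", "Lisbon"},
    "family_friendly": {"Bali"},
    "Lisbon": {"direct_flights", "seafood"},
    "seafood": {"Lisbon"},
}

def expand(seed, hops=1):
    """Collect entities reachable from the vector-search seed within `hops`."""
    frontier, seen = {seed}, {seed}
    for _ in range(hops):
        frontier = {n for e in frontier for n in graph.get(e, set())} - seen
        seen |= frontier
    return seen

# Vector search surfaced "Lisbon"; one hop of graph expansion adds the
# linked entities that enrich the generated answer.
print(sorted(expand("Lisbon")))
```

The managed capability builds and traverses this graph automatically when Neptune Analytics is chosen as the vector store.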

Announcing Amazon SageMaker HyperPod recipes

Published Date: 2024-12-04 18:00:00

Amazon SageMaker HyperPod recipes help you get started training and fine-tuning publicly available foundation models (FMs) in minutes with state-of-the-art performance. SageMaker HyperPod helps customers scale generative AI model development across hundreds or thousands of AI accelerators with built-in resiliency and performance optimizations, decreasing model training time by up to 40%. However, as FM sizes continue to grow to hundreds of billions of parameters, the process of customizing these models can take weeks of extensive experimenting and debugging. In addition, performing training optimizations to unlock better price performance is often unfeasible for customers, as they often require deep machine learning expertise that could cause further delays in time to market.

With SageMaker HyperPod recipes, customers of all skill sets can benefit from state-of-the-art performance while quickly getting started training and fine-tuning popular publicly available FMs, including Llama 3.1 405B, Mixtral 8x22B, and Mistral 7B. SageMaker HyperPod recipes include a training stack tested by AWS, removing weeks of tedious work experimenting with different model configurations. You can also quickly switch between GPU-based and AWS Trainium-based instances with a one-line recipe change and enable automated model checkpointing for improved training resiliency. Finally, you can run workloads in production on the SageMaker AI training service of your choice.

SageMaker HyperPod recipes are available in all AWS Regions where SageMaker HyperPod and SageMaker training jobs are supported. To learn more and get started, visit the SageMaker HyperPod page and blog.

Announcing scenarios analysis capability of Amazon Q in QuickSight (preview)

Published Date: 2024-12-04 18:00:00

A new scenario analysis capability of Amazon Q in QuickSight is now available in preview. This new capability provides an AI-assisted data analysis experience that helps you make better decisions, faster. Amazon Q in QuickSight simplifies in-depth analysis with step-by-step guidance, saving hours of manual data manipulation and unlocking data-driven decision-making across your organization.

Amazon Q in QuickSight helps business users perform complex scenario analysis up to 10x faster than spreadsheets. You can ask a question or state your goal in natural language and Amazon Q in QuickSight guides you through every step of advanced data analysis—suggesting analytical approaches, automatically analyzing data, surfacing relevant insights, and summarizing findings with suggested actions. This agentic approach breaks down data analysis into a series of easy-to-understand, executable steps, helping you find solutions to complex problems without specialized skills or tedious, error-prone data manipulation in spreadsheets. Working on an expansive analysis canvas, you can intuitively iterate your way to solutions by directly interacting with data, refining analysis steps, or exploring multiple analysis paths side-by-side. This scenario analysis capability is accessible from any Amazon QuickSight dashboard, so you can move seamlessly from visualizing data to modeling solutions. With Amazon Q in QuickSight, you can easily modify, extend, and reuse previous analyses, helping you quickly adapt to changing business needs.

Amazon Q in QuickSight Pro users can use this new capability in preview in the following AWS regions: US East (N. Virginia) and US West (Oregon). To learn more, visit the Amazon Q in QuickSight documentation and read the AWS News Blog.

Amazon Bedrock Knowledge Bases now processes multimodal data

Published Date: 2024-12-04 18:00:00

Amazon Bedrock Knowledge Bases now enables developers to build generative AI applications that can analyze and leverage insights from both textual and visual data, such as images, charts, diagrams, and tables. Bedrock Knowledge Bases offers an end-to-end managed Retrieval-Augmented Generation (RAG) workflow that enables customers to create highly accurate, low-latency, secure, and custom generative AI applications by incorporating contextual information from their own data sources. With this launch, Bedrock Knowledge Bases extracts content from both text and visual data, generates semantic embeddings using the selected embedding model, and stores them in the chosen vector store. This enables users to retrieve and generate answers to questions derived not only from text but also from visual data. Additionally, retrieved results now include source attribution for visual data, enhancing transparency and building trust in the generated outputs. To get started, customers can choose between Amazon Bedrock Data Automation, a managed service that automatically extracts content from multimodal data (currently in Preview), or FMs such as Claude 3.5 Sonnet or Claude 3 Haiku, with the flexibility to customize the default prompt. Multimodal data processing with Bedrock Data Automation is available in the US West (Oregon) region in preview. FM-based parsing is supported in all regions where Bedrock Knowledge Bases is available. For details on pricing for using Bedrock Data Automation or an FM as a parser, please refer to the pricing page. To learn more, visit the Amazon Bedrock Knowledge Bases product documentation.

Amazon Bedrock Intelligent Prompt Routing is now available in preview

Published Date: 2024-12-04 18:00:00

Amazon Bedrock Intelligent Prompt Routing routes prompts to different foundation models within a model family, helping you optimize for quality of responses and cost. Using advanced prompt matching and model understanding techniques, Intelligent Prompt Routing predicts the performance of each model for each request and dynamically routes each request to the model that it predicts is most likely to give the desired response at the lowest cost. Customers can choose from two prompt routers in preview that route requests either between Claude 3.5 Sonnet and Claude 3 Haiku, or between Llama 3.1 8B and Llama 3.1 70B. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while ensuring customer trust and data governance. With Intelligent Prompt Routing, Amazon Bedrock can help customers build cost-effective generative AI applications with a combination of foundation models to get better performance at lower cost than a single foundation model. During preview, customers are charged regular on-demand pricing for the models that requests are routed to. Learn more in our documentation and blog.
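The routing idea — predict each model's quality for a request, then pick the cheapest model predicted to clear the quality bar — can be sketched locally. The quality predictor, model names, and prices below are invented stand-ins, not Bedrock's actual routing model or pricing:

```python
# Hedged sketch of cost-aware model routing. The predictor and prices are
# illustrative assumptions; Bedrock's router uses learned predictions.
MODELS = [
    {"name": "small-model", "cost_per_1k": 0.25},
    {"name": "large-model", "cost_per_1k": 3.00},
]

def predict_quality(model, prompt):
    # Stand-in predictor: assume the small model suffices for short prompts.
    if model["name"] == "small-model":
        return 0.9 if len(prompt.split()) < 20 else 0.6
    return 0.95

def route(prompt, quality_bar=0.8):
    # Among models predicted to meet the bar, pick the cheapest.
    viable = [m for m in MODELS if predict_quality(m, prompt) >= quality_bar]
    return min(viable, key=lambda m: m["cost_per_1k"])["name"]

print(route("What is the capital of France?"))  # short prompt
print(route(" ".join(["word"] * 40)))           # long prompt
```

The cost saving comes from the fraction of traffic the predictor can safely send to the smaller model.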

Announcing GenAI Index in Amazon Kendra

Published Date: 2024-12-04 18:00:00

Amazon Kendra is an AI-powered search service enabling organizations to build intelligent search experiences and retrieval augmented generation (RAG) systems to power generative AI applications. Starting today, AWS customers can use a new index, the GenAI Index, for RAG and intelligent search. With the Kendra GenAI Index, customers get high out-of-the-box search accuracy powered by the latest information retrieval technologies and semantic models. Kendra GenAI Index supports mobility across AWS generative AI services like Amazon Bedrock Knowledge Bases and Amazon Q Business, giving customers the flexibility to use their indexed content across different use cases. It is available as a managed retriever in Bedrock Knowledge Bases, enabling customers to create a Knowledge Base powered by the Kendra GenAI Index. Customers can also integrate such Knowledge Bases with other Bedrock services like Guardrails, Prompt Flows, and Agents to build advanced generative AI applications. The GenAI Index supports connectors for 43 different data sources, enabling customers to easily ingest content from a variety of sources. Kendra GenAI Index is available in the US East (N. Virginia) and US West (Oregon) regions. To learn more, see Kendra GenAI Index in the Amazon Kendra Developer Guide. For pricing, please refer to the Kendra pricing page.

Start collaborating on multi-partner opportunities with Partner Connections (Preview)

Published Date: 2024-12-04 18:00:00

Today, AWS Partner Central announces the preview of Partner Connections, a new feature allowing AWS Partners to discover and connect with other Partners for collaboration on shared customer opportunities. With Partner Connections, Partners can co-sell joint solutions, accelerate deal progression, and expand their reach by teaming with other AWS Partners. At the core of Partner Connections are two key capabilities: connections discovery and multi-partner opportunities. The connections discovery feature uses AI-powered recommendations to streamline Partner matchmaking, making it easier for Partners to find suitable collaborators and add them to their network. With multi-partner opportunities, Partners can work together seamlessly to create and manage joint customer opportunities in APN Customer Engagements (ACE). This integrated approach allows Partners to work seamlessly with AWS and other Partners on shared opportunities, reducing the operational overhead of managing multi-partner opportunities. Partners can also create, update, and share multi-partner opportunities using the Partner Central API for Selling. This allows Partners to collaborate with other Partners and AWS on joint sales opportunities from their own customer relationship management (CRM) system. Partner Connections (Preview) is available to all eligible AWS Partners who have signed the ACE Terms and Conditions and have linked their AWS account to their Partner Central account. To get started, log in to AWS Partner Central and review the ACE user guide for more information. To see how Partner Connections works, read the blog.

Introducing the AWS Digital Sovereignty Competency

Published Date: 2024-12-04 18:00:00

Digital sovereignty has been a priority for AWS since its inception. AWS remains committed to offering customers the most advanced sovereignty controls and features in the cloud. With the increasing importance of digital sovereignty for public sector organizations and regulated industries, AWS is excited to announce the launch of the AWS Digital Sovereignty Competency. The AWS Digital Sovereignty Competency curates and validates a community of AWS Partners with advanced sovereignty capabilities and solutions, including deep experience in helping customers address sovereignty and compliance requirements. These partners can assist customers with residency control, access control, resilience, survivability, and self-sufficiency. Through this competency, customers can search for and engage with trusted local and global AWS Partners that have technically validated experience in addressing customers’ sovereignty requirements. Many partners have built sovereign solutions that leverage AWS innovations and built-in controls and security features. In addition to these offerings, AWS Digital Sovereignty Partners provide skills and knowledge of local compliance requirements and regulations, making it easier for customers to meet their digital sovereignty requirements while benefiting from the performance, agility, security, and scale of the AWS Cloud.

AWS Security Competency Update: New AI Security Category

Published Date: 2024-12-04 18:00:00

We are introducing a new AI Security category in the AWS Security Competency to help customers easily identify AWS Partners with deep experience securing AI environments and defending AI workloads against advanced threats and attacks. Partners in this new category are validated for their capabilities in areas like prevention of sensitive data disclosure, prevention of injection attacks, security posture management, implementing responsible AI filtering, and more. The rapid adoption of AI, and especially generative AI, is transforming how customers build applications, but also introduces new security risks that require specialized expertise. Customers need solutions that can secure AI models, tools, datasets, and other deployment resources used in these applications. Unlock the power of AI while keeping your AI applications and data safe with validated partner solutions. Learn more about the AWS Security Competency and explore validated partners with customer success in the new AI Security category.

AWS announces Amazon SageMaker Partner AI Apps

Published Date: 2024-12-04 18:00:00

Today Amazon Web Services, Inc. (AWS) announced the general availability of Amazon SageMaker partner AI apps, a new capability that enables customers to easily discover, deploy, and use best-in-class machine learning (ML) and generative AI (GenAI) development applications from leading app providers privately and securely, all without leaving Amazon SageMaker AI so they can develop performant AI models faster. Until today, integrating purpose-built GenAI and ML development applications that provide specialized capabilities for a variety of model development tasks required a considerable amount of effort. Beyond the need to invest time and effort in due diligence to evaluate existing offerings, customers had to perform undifferentiated heavy lifting in deploying, managing, upgrading, and scaling these applications. Furthermore, to adhere to rigorous security and compliance protocols, organizations need their data to stay within the confines of their security boundaries without needing to move their data elsewhere, for example, to a Software as a Service (SaaS) application. Finally, the resulting developer experience is often fragmented, with developers having to switch back and forth between multiple disjointed interfaces. With SageMaker partner AI apps, you can quickly subscribe to a partner solution and seamlessly integrate the app with your SageMaker development environment. SageMaker partner AI apps are fully managed and run privately and securely in your SageMaker environment, reducing the risk of data and model exfiltration. At launch, you will be able to boost your team’s productivity and reduce time to market by enabling: Comet, to track, visualize, and manage experiments for AI model development; Deepchecks, to evaluate quality and compliance for AI models; Fiddler, to validate, monitor, analyze, and improve AI models in production; and, Lakera, to protect AI applications from security threats such as prompt attacks, data loss and inappropriate content.
SageMaker partner AI apps are available in all currently supported regions except AWS GovCloud (US). To learn more, please visit the SageMaker partner AI apps developer guide.

Amazon SageMaker HyperPod now provides flexible training plans

Published Date: 2024-12-04 18:00:00

Amazon SageMaker HyperPod announces flexible training plans, a new capability that allows you to train generative AI models within your timelines and budgets. Gain predictable model training timelines and run training workloads within your budget requirements, while continuing to benefit from features of SageMaker HyperPod such as resiliency, performance-optimized distributed training, and enhanced observability and monitoring.

In a few quick steps, you can specify your preferred compute instances, desired amount of compute resources, duration of your workload, and preferred start date for your generative AI model training. SageMaker then helps you create the most cost-efficient training plans, reducing time to train your model by weeks. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the training workloads on these compute resources without requiring any manual intervention. SageMaker also automatically takes care of pausing and resuming training between gaps in compute availability, as the plan switches from one capacity block to another. If you wish to remove all the heavy lifting of infrastructure management, you can also create and run training plans using SageMaker fully managed training jobs.
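The pause-and-resume behavior between capacity blocks amounts to checkpointing at the end of one block and restoring at the start of the next. The step counts and block sizes below are invented for illustration:

```python
# Toy sketch of checkpoint-based pause/resume across capacity blocks:
# training saves its step at the end of each block and resumes from it
# in the next, so no work is lost in the availability gaps.
def train_block(checkpoint, steps_available):
    """Resume from the saved step, train while this block's capacity lasts,
    then checkpoint before the block ends."""
    step = checkpoint["step"]
    for _ in range(steps_available):
        step += 1  # stand-in for one optimizer step
    checkpoint["step"] = step
    return checkpoint

checkpoint = {"step": 0}
for capacity_block in [100, 150, 50]:  # three blocks with gaps between them
    checkpoint = train_block(checkpoint, capacity_block)

print(checkpoint["step"])
```

SageMaker manages this orchestration (and the underlying infrastructure) automatically; the sketch only shows why checkpointing makes the gaps invisible to training progress.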

SageMaker HyperPod flexible training plans are available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS Regions. To learn more, visit SageMaker HyperPod, the documentation, and the announcement blog.

Amazon Bedrock Marketplace brings over 100 models to Amazon Bedrock

Published Date: 2024-12-04 18:00:00

Amazon Bedrock Marketplace provides generative AI developers access to over 100 publicly available and proprietary foundation models (FMs), in addition to Amazon Bedrock’s industry-leading, serverless models. Customers deploy these models onto SageMaker endpoints where they can select their desired number of instances and instance types. Amazon Bedrock Marketplace models can be accessed through Bedrock’s unified APIs, and models which are compatible with Bedrock’s Converse APIs can be used with Amazon Bedrock’s tools such as Agents, Knowledge Bases, and Guardrails.

Amazon Bedrock Marketplace empowers generative AI developers to rapidly test and incorporate a diverse array of emerging, popular, and leading FMs of various types and sizes. Customers can choose from a variety of models tailored to their unique requirements, which can help accelerate the time-to-market, improve the accuracy, or reduce the cost of their generative AI workflows. For example, customers can incorporate models highly-specialized for finance or healthcare, or language translation models for Asian languages, all from a single place.

Amazon Bedrock Marketplace is supported in US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Paris), and South America (São Paulo). For more information, please refer to Amazon Bedrock Marketplace's announcement blog or documentation.

AWS Education Equity Initiative to boost education for underserved learners

Published Date: 2024-12-04 18:00:00

Amazon announces a five-year commitment of cloud technology and technical support for organizations creating digital learning solutions that expand access for underserved learners worldwide through the AWS Education Equity Initiative. While the use of educational technologies continues to rise, many organizations lack access to the cloud computing and AI resources needed to accelerate and scale their work to reach more learners in need. Amazon is committing up to $100 million in AWS credits and technical advising to support socially-minded organizations as they build and scale learning solutions that utilize cloud and AI technologies. This will help reduce initial financial barriers and provide guidance on building and scaling AI-powered education solutions using AWS technologies. Eligible recipients, including socially-minded edtechs, social enterprises, non-profits, governments, and corporate social responsibility teams, must demonstrate how their solution will benefit students from underserved communities. The initiative is now accepting applications. To learn more and apply, visit the AWS Education Equity Initiative page.

Task governance is now generally available for Amazon SageMaker HyperPod

Published Date: 2024-12-04 18:00:00

Amazon SageMaker HyperPod now provides you with centralized governance across all generative AI development tasks, such as training and inference. You have full visibility and control over compute resource allocation, ensuring the most critical tasks are prioritized and compute resource utilization is maximized, reducing model development costs by up to 40%. With HyperPod task governance, administrators can more easily define priorities for different tasks and set up limits for how many compute resources each team can use. At any given time, administrators can also monitor and audit the tasks that are running or waiting for compute resources through a visual dashboard. When data scientists create their tasks, HyperPod automatically runs them, adhering to the defined compute resource limits and priorities. For example, when training for a high-priority model needs to be completed as soon as possible but all compute resources are in use, HyperPod frees up resources from lower-priority tasks to support the training. HyperPod pauses the low-priority task, saves the checkpoint, and reallocates the freed-up compute resources. The preempted low-priority task resumes from the last saved checkpoint as resources become available again. And when a team is not fully using the resource limits the administrator has set up, HyperPod uses those idle resources to accelerate another team's tasks. Additionally, HyperPod is now integrated with Amazon SageMaker Studio, bringing task governance and other HyperPod capabilities into the Studio environment. Data scientists can now seamlessly interact with HyperPod clusters directly from Studio, allowing them to develop, submit, and monitor machine learning (ML) jobs on powerful accelerator-backed clusters. Task governance for HyperPod is available in all AWS Regions where HyperPod is available: US East (N. Virginia), US West (N. California), US West (Oregon), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Stockholm), and South America (São Paulo). To learn more, visit the SageMaker HyperPod webpage, the AWS News Blog, and the SageMaker AI documentation.
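The preemption behavior described above can be illustrated with a toy scheduler. This is a conceptual sketch only, not HyperPod's actual implementation: a high-priority task arriving on a full cluster causes the lowest-priority running task to be checkpointed and paused so capacity frees up.

```python
# Conceptual illustration of priority-based preemption as described above.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: int          # higher number = higher priority
    checkpointed: bool = False

@dataclass
class Cluster:
    capacity: int
    running: list = field(default_factory=list)
    paused: list = field(default_factory=list)

    def submit(self, task: Task) -> str:
        if len(self.running) < self.capacity:
            self.running.append(task)
            return f"{task.name}: started"
        lowest = min(self.running, key=lambda t: t.priority)
        if lowest.priority < task.priority:
            # Pause the low-priority task, saving a checkpoint so it can
            # resume from where it left off once capacity frees up.
            lowest.checkpointed = True
            self.running.remove(lowest)
            self.paused.append(lowest)
            self.running.append(task)
            return f"{task.name}: started (preempted {lowest.name})"
        return f"{task.name}: queued"

cluster = Cluster(capacity=1)
cluster.submit(Task("fine-tune-experiment", priority=1))
print(cluster.submit(Task("prod-model-training", priority=10)))
# -> prod-model-training: started (preempted fine-tune-experiment)
```

In the real service, administrators express the priorities and per-team limits declaratively, and HyperPod applies this pause/checkpoint/resume logic automatically.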

Amazon Bedrock Guardrails supports multimodal toxicity detection for image content (Preview)

Published Date: 2024-12-04 18:00:00

Organizations are increasingly using applications with multimodal data to drive business value, improve decision-making, and enhance customer experiences. Amazon Bedrock Guardrails now supports multimodal toxicity detection for image content, enabling organizations to apply content filters to images. This new capability, now in public preview, removes the heavy lifting of building your own safeguards for image data or spending cycles on manual evaluation that can be error-prone and tedious. Bedrock Guardrails helps customers build and scale their generative AI applications responsibly for a wide range of use cases across industry verticals including healthcare, manufacturing, financial services, media and advertising, transportation, marketing, education, and much more. With this new capability, Amazon Bedrock Guardrails offers a comprehensive solution, enabling the detection and filtration of undesirable and potentially harmful image content while retaining safe and relevant visuals. Customers can now use content filters for both text and image data in a single solution, with configurable thresholds to detect and filter undesirable content across categories such as hate, insults, sexual, and violence, and build generative AI applications based on their responsible AI policies. This capability is available in preview with all foundation models (FMs) on Amazon Bedrock that support images, including fine-tuned FMs, in 11 AWS Regions globally: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (London), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Tokyo), Asia Pacific (Mumbai), and AWS GovCloud (US-West). To learn more, visit the Amazon Bedrock Guardrails product page, read the News blog, and see the documentation.
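A hedged sketch of what a guardrail definition with image filtering might look like follows. The per-filter modality fields reflect the new image support but are an assumption of this sketch; the exact field names and strength values should be checked against the current Guardrails API reference.

```python
# Sketch: a guardrail config applying content filters to both text and image
# data across the categories named above. Field names are assumptions.

def build_guardrail_config(name: str) -> dict:
    filters = []
    for category in ("HATE", "INSULTS", "SEXUAL", "VIOLENCE"):
        filters.append({
            "type": category,
            "inputStrength": "HIGH",     # configurable threshold per category
            "outputStrength": "HIGH",
            "inputModalities": ["TEXT", "IMAGE"],
            "outputModalities": ["TEXT", "IMAGE"],
        })
    return {
        "name": name,
        "contentPolicyConfig": {"filtersConfig": filters},
        "blockedInputMessaging": "This content is not allowed.",
        "blockedOutputsMessaging": "The response was blocked.",
    }

config = build_guardrail_config("image-safety-guardrail")

# With credentials configured, creation would look like:
# import boto3
# client = boto3.client("bedrock")
# guardrail = client.create_guardrail(**config)
```

The same guardrail can then be attached to model invocations so that both prompts and generated outputs, text or image, are evaluated against the configured categories.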

Announcing new AWS AI Service Cards to advance responsible generative AI

Published Date: 2024-12-04 18:00:00

Today, AWS announces the availability of new AWS AI Service Cards for Amazon Nova Reel; Amazon Nova Canvas; Amazon Nova Micro, Lite, and Pro; Amazon Titan Image Generator; and Amazon Titan Text Embeddings. AI Service Cards are a resource designed to enhance transparency by providing customers with a single place to find information on the intended use cases and limitations, responsible AI design choices, and performance optimization best practices for AWS AI services. AWS AI Service Cards are part of our comprehensive development process to build services in a responsible way. They focus on key aspects of AI development and deployment, including fairness, explainability, privacy and security, safety, controllability, veracity and robustness, governance, and transparency. By offering these cards, AWS aims to empower customers with the knowledge they need to make informed decisions about using AI services in their applications and workflows. Our AI Service Cards will continue to evolve and expand as we engage with our customers and the broader community to gather feedback and continually iterate on our approach. For more information, see the AI Service Cards for each of these services.

To learn more about AI Service Cards, as well as our broader approach to building AI in a responsible way, see our Responsible AI webpage.

Amazon Bedrock announces preview of prompt caching

Published Date: 2024-12-04 18:00:00

Today, AWS announces that Amazon Bedrock now supports prompt caching. Prompt caching is a new capability that can reduce costs by up to 90% and latency by up to 85% for supported models by caching frequently used prompts across multiple API calls. It allows you to cache repetitive inputs and avoid reprocessing context, such as long system prompts and common examples that help guide the model’s response. When cache is used, fewer computing resources are needed to generate output. As a result, not only can we process your request faster, but we can also pass along the cost savings from using fewer resources. Amazon Bedrock is a fully managed service that offers a choice of high-performing FMs from leading AI companies via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI capabilities built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustained growth from generative AI while providing tools to build customer trust and data governance. Prompt caching is now available on Claude 3.5 Haiku and Claude 3.5 Sonnet v2 in US West (Oregon) and US East (N. Virginia) via cross-region inference, and Nova Micro, Nova Lite, and Nova Pro models in US East (N. Virginia). At launch, only a select number of customers will have access to this feature. To learn more about participating in the preview, see this page. To learn more about prompt caching, see our documentation and blog.
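One way the caching described above might surface in a request is a cache-point marker placed after the long, repetitive portion of the prompt, so subsequent calls reuse the cached prefix. The `cachePoint` content block below follows the Converse API's prompt-caching preview, but treat the field names as assumptions to verify against the documentation:

```python
# Sketch: marking a long, reusable system prompt as cacheable. Everything
# above the cachePoint marker is eligible for reuse across API calls.

LONG_SYSTEM_PROMPT = (
    "You are a contract-review assistant. "
    "<many pages of shared policy text and examples would go here>"
)

def build_cached_request(model_id: str, question: str) -> dict:
    return {
        "modelId": model_id,
        "system": [
            {"text": LONG_SYSTEM_PROMPT},
            {"cachePoint": {"type": "default"}},  # cache the prefix above
        ],
        "messages": [
            {"role": "user", "content": [{"text": question}]}
        ],
    }

request = build_cached_request(
    "anthropic.claude-3-5-haiku-20241022-v1:0",
    "Does clause 4.2 conflict with clause 9.1?",
)

# With preview access and credentials configured:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
```

Only the short user question changes between calls, which is what lets the service skip reprocessing the shared context and pass along the cost and latency savings.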

Amazon Q Developer can now guide SageMaker Canvas users through ML development

Published Date: 2024-12-04 18:00:00

Starting today, you can build ML models using natural language with Amazon Q Developer, now available in Amazon SageMaker Canvas in preview. You can now get generative AI-powered assistance through the ML lifecycle, from data preparation to model deployment. With Amazon Q Developer, users of all skill levels can use natural language to access expert guidance to build high-quality ML models, accelerating innovation and time to market. Amazon Q Developer will break down your objective into specific ML tasks, define the appropriate ML problem type, and apply data preparation techniques to your data. Amazon Q Developer then guides you through the process of building, evaluating, and deploying custom ML models. ML models produced in SageMaker Canvas with Amazon Q Developer are production ready, can be registered in SageMaker Studio, and the code can be shared with data scientists for integration into downstream MLOps workflows. Amazon Q Developer is available in SageMaker Canvas in preview in the following AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Frankfurt), Europe (Paris), Asia Pacific (Tokyo), and Asia Pacific (Seoul). To learn more about using Amazon Q Developer with SageMaker Canvas, visit the website, read the AWS News blog, or view the technical documentation.

Buy with AWS accelerates solution discovery and procurement on AWS Partner websites

Published Date: 2024-12-04 18:00:00

Today, AWS Marketplace announces Buy with AWS, a new feature that helps accelerate discovery and procurement on AWS Partners’ websites for products available in AWS Marketplace. Partners that sell or resell products in AWS Marketplace can now offer new experiences on their websites that are powered by AWS Marketplace. Customers can more quickly identify solutions from Partners that are available in AWS Marketplace and use their AWS accounts to access a streamlined purchasing experience. Customers browsing on Partner websites can explore products that are “Available in AWS Marketplace” and request demos, access free trials, and request custom pricing. Customers can conveniently and securely make purchases by clicking the Buy with AWS button and completing transactions by logging in to their AWS accounts. All purchases made through Buy with AWS are transacted and managed within AWS Marketplace, allowing customers to take advantage of benefits such as consolidated AWS billing, centralized subscriptions management, and access to cost optimization tools. For AWS Partners, Buy with AWS provides a new way to engage website visitors and accelerate the path-to-purchase for customers. By adding Buy with AWS buttons to Partner websites, Partners can give website visitors the ability to subscribe to free trials, make purchases, and access custom pricing using their AWS accounts. Partners can complete an optional integration and build new experiences on websites that allow customers to search curated product listings and filter products from the AWS Marketplace catalog. Learn more about making purchases using Buy with AWS. Learn how AWS Partners can start selling using Buy with AWS.

Amazon Bedrock Data Automation now available in preview

Published Date: 2024-12-04 18:00:00

Today, we are announcing the preview launch of Amazon Bedrock Data Automation (BDA), a new feature of Amazon Bedrock that enables developers to automate the generation of valuable insights from unstructured multimodal content such as documents, images, video, and audio to build GenAI-based applications. These insights include video summaries of key moments, detection of inappropriate image content, automated analysis of complex documents, and much more. Developers can also customize BDA’s output to generate specific insights in consistent formats required by their systems and applications. By leveraging BDA, developers can reduce development time and effort, making it easier to build intelligent document processing, media analysis, and other multimodal data-centric automation solutions. BDA offers high accuracy at lower cost than alternative solutions, along with features such as visual grounding with confidence scores for explainability and built-in hallucination mitigation. This ensures accurate insights from unstructured, multi-modal data content. Developers can get started with BDA on the Bedrock console, where they can configure and customize output using their sample data. They can then integrate BDA’s unified multi-modal inference API into their applications to process their unstructured content at scale with high accuracy and consistency. BDA is also integrated with Bedrock Knowledge Bases, making it easier for developers to generate meaningful information from their unstructured multi-modal content to provide more relevant responses for retrieval augmented generation (RAG). Bedrock Data Automation is available in preview in US West (Oregon) AWS Region. To learn more, visit the Bedrock Data Automation page.

Respond and recover more quickly with AWS Security Incident Response Partners

Published Date: 2024-12-04 18:00:00

Today, AWS Security Incident Response launches a new AWS Specialization with approved partners from the AWS Partner Network (APN). AWS customers today rely on various third-party tools and services to support their internal security incident response capabilities. To better help both customers and partners, AWS introduced AWS Security Incident Response, a new service that helps customers prepare for, respond to, and recover from security events. Alongside approved AWS Partners, AWS Security Incident Response monitors, investigates, and escalates triaged security findings from Amazon GuardDuty and other threat detection tools through AWS Security Hub. Security Incident Response identifies and escalates only high-priority incidents. Partners and customers can also leverage collaboration and communication features to streamline coordinated incident response for faster reaction and recovery. For example, service members can create a predefined "Incident Response Team" that is automatically alerted whenever a security case is escalated. Alerted members, who include customers and partners, can then communicate and collaborate in a centralized format, with native feature integrations such as in-console messaging, video conferencing, and quick and secure data transfer.

Customers can access the service alongside AWS Partners that have been vetted and approved to use Security Incident Response. Learn more and explore AWS Security Incident Response Partners with specialized expertise to help you respond when it matters most.

Introducing the Amazon Security Lake Ready Specialization

Published Date: 2024-12-04 18:00:00

We are excited to announce the new Amazon Security Lake Ready Specialization, which recognizes AWS Partners who have technically validated their software solutions to integrate with Amazon Security Lake and demonstrated successful customer deployments. These solutions have been technically validated by AWS Partner Solutions Architects for their sound architecture and proven customer success. Security Lake Ready software solutions can either contribute data to the Security Lake or consume this data and provide analytics, delivering a cohesive security solution for AWS customers. Amazon Security Lake automates data management tasks for customers, reducing costs and consolidating security data that customers own. It uses the Open Cybersecurity Schema Framework (OCSF), an open standard that helps customers address the challenges of data normalization and schema mapping across multiple log sources. With Amazon Security Lake Ready software solutions, customers now have a single place with verified partner solutions where security data can be stored in an open-source format, ready for identifying potential threats and vulnerabilities, and for security investigations and analytics. Explore Amazon Security Lake Ready software solutions that can help your organization improve the protection of workloads, applications, and data by significantly reducing the operational overhead of managing security data. To learn more about how to become an Amazon Security Lake Ready Partner, visit the AWS Service Ready Program webpage.

Amazon Bedrock Knowledge Bases now supports structured data retrieval

Published Date: 2024-12-04 18:00:00

Amazon Bedrock Knowledge Bases now supports natural language querying to retrieve structured data from your data sources. With this launch, Bedrock Knowledge Bases offers an end-to-end managed workflow for customers to build custom generative AI applications that can access and incorporate contextual information from a variety of structured and unstructured data sources. Using advanced natural language processing, Bedrock Knowledge Bases can transform natural language queries into SQL queries, allowing users to retrieve data directly from the source without the need to move or preprocess the data. Developers often face challenges integrating structured data into generative AI applications. These include difficulties training large language models (LLMs) to convert natural language queries to SQL queries based on complex database schemas, as well as ensuring appropriate data governance and security controls are in place. Bedrock Knowledge Bases eliminates these hurdles by providing a managed natural language to SQL (NL2SQL) module. A retail analyst can now simply ask "What were my top 5 selling products last month?", and Bedrock Knowledge Bases automatically translates that query into SQL, executes it against the database, and returns the results - or even provides a summarized narrative response. To generate accurate SQL queries, Bedrock Knowledge Bases leverages the database schema, previous query history, and other contextual information provided about the data sources. Bedrock Knowledge Bases supports structured data retrieval from Amazon Redshift and Amazon SageMaker Lakehouse at this time and is available in all commercial Regions where Bedrock Knowledge Bases is supported. To learn more, visit here and here. For details on pricing, please refer here.
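The retail-analyst example above maps naturally onto the Bedrock RetrieveAndGenerate API. As a sketch, the knowledge base ID and model ARN below are placeholders; the structured data source is configured on the knowledge base itself, so the caller only supplies the natural-language question:

```python
# Sketch: a natural-language question against a knowledge base backed by a
# structured data source (e.g., Amazon Redshift). IDs/ARNs are placeholders.

def build_nl_query(kb_id: str, model_arn: str, question: str) -> dict:
    return {
        "input": {"text": question},
        "retrieveAndGenerateConfiguration": {
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": kb_id,
                "modelArn": model_arn,
            },
        },
    }

query = build_nl_query(
    "KB12345678",
    "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0",
    "What were my top 5 selling products last month?",
)

# With credentials configured:
# import boto3
# client = boto3.client("bedrock-agent-runtime")
# response = client.retrieve_and_generate(**query)
# print(response["output"]["text"])
```

Behind this call, the managed NL2SQL module generates the SQL from the schema and query history it has about the source, runs it, and returns either rows or a narrative summary.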

Amazon Q Developer transformation capabilities for mainframe modernization are now available (Preview)

Published Date: 2024-12-03 18:00:00

Today, AWS announces new generative AI–powered capabilities of Amazon Q Developer in public preview to help customers and partners accelerate large-scale assessment and modernization of mainframe applications.

Amazon Q Developer is enterprise-ready, offering a unified web experience tailored for large-scale modernization, federated identity, and easier collaboration. Keeping you in the loop, Amazon Q Developer agents analyze and document your code base, identify missing assets, decompose monolithic applications into business domains, plan modernization waves, and refactor code. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives, source repository access, and project context. Amazon Q Developer agents autonomously classify and organize application assets and create comprehensive code documentation to understand and expand the knowledge base of your organization. The agents combine goal-driven reasoning using generative AI and modernization expertise to develop modernization plans customized for your code base and transformation objectives. You can then collaboratively review, adjust, and approve the plans through iterative engagement with the agents. Once you approve the proposed plan, Amazon Q Developer agents autonomously refactor the COBOL code into cloud-optimized Java code while preserving business logic.

By delegating tedious tasks to autonomous Amazon Q Developer agents with your review and approvals, you and your team can collaboratively drive faster modernization, larger project scale, and better transformation quality and performance using generative AI large language models. You can enhance governance and compliance by maintaining a well-documented and explainable trail of transformation decisions.

To learn more, read the blog and visit Amazon Q Developer transformation capabilities webpage and documentation.

Amazon EC2 Trn2 instances are generally available

Published Date: 2024-12-03 18:00:00

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) Trn2 instances and the preview of Trn2 UltraServers, powered by AWS Trainium2 chips. Available via EC2 Capacity Blocks, Trn2 instances and UltraServers are the most powerful EC2 compute solutions for deep learning and generative AI training and inference.

You can use Trn2 instances to train and deploy the most demanding foundation models, including large language models (LLMs), multi-modal models, diffusion transformers, and more, to build a broad set of AI applications. To reduce training times and deliver breakthrough response times (per-token latency) for the most capable, state-of-the-art models, you might need more compute and memory than a single instance can deliver. Trn2 UltraServers are a completely new EC2 offering that uses NeuronLink, a high-bandwidth, low-latency fabric, to connect 64 Trainium2 chips across 4 Trn2 instances into one node, unlocking unparalleled performance. For inference, UltraServers help deliver industry-leading response times to create the best real-time experiences. For training, UltraServers boost model training speed and efficiency with faster collective communication for model parallelism as compared to standalone instances.

Trn2 instances feature 16 Trainium2 chips to deliver up to 20.8 petaflops of FP8 compute, 1.5 TB of high bandwidth memory with 46 TB/s of memory bandwidth, and 3.2 Tbps of EFA networking. Trn2 UltraServers feature 64 Trainium2 chips to deliver up to 83.2 petaflops of FP8 compute, 6 TB of total high bandwidth memory with 185 TB/s of total memory bandwidth, and 12.8 Tbps of EFA networking. Both are deployed in EC2 UltraClusters to provide non-blocking, petabit scale-out capabilities for distributed training. Trn2 instances are generally available in the trn2.48xlarge size in the US East (Ohio) AWS Region through EC2 Capacity Blocks for ML.
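The UltraServer figures follow directly from joining 4 instances into one node, which a quick calculation confirms:

```python
# Sanity-checking the scale-up arithmetic from the specs above: a Trn2
# UltraServer connects 4 Trn2 instances of 16 Trainium2 chips each.

chips = 16 * 4           # 64 Trainium2 chips per UltraServer
petaflops = 20.8 * 4     # FP8 compute
memory_tb = 1.5 * 4      # high bandwidth memory
efa_tbps = 3.2 * 4       # EFA networking

print(chips, round(petaflops, 1), memory_tb, round(efa_tbps, 1))
# -> 64 83.2 6.0 12.8
```

Aggregate memory bandwidth is quoted as 185 TB/s, close to 4 × 46 TB/s per instance.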

To learn more about Trn2 instances and to request access to Trn2 UltraServers, please visit the Trn2 instances page.

Announcing GitLab Duo with Amazon Q (Preview)

Published Date: 2024-12-03 18:00:00

Today, AWS announces a preview of GitLab Duo with Amazon Q, embedding advanced agent capabilities for software development and workload transformation directly in GitLab's enterprise DevSecOps platform. With this launch, GitLab Duo with Amazon Q delivers a seamless development experience across tasks and teams, automating complex, multi-step tasks for software development, security, and transformation, all using the familiar GitLab workflows developers already know.

Using GitLab Duo, developers can delegate issues to Amazon Q agents using quick actions to build new features faster, maximize quality and security with AI-assisted code reviews, create and execute unit tests, and upgrade a legacy Java codebase. GitLab’s unified data store across the software development life cycle (SDLC) gives Amazon Q project context to accelerate and automate end-to-end workflows for software development, simplifying the complex toolchains historically required for collaboration across teams.

  • Streamline software development: Go from new feature idea in an issue, to merge-ready code in minutes. Iterate directly from GitLab, using feedback in comments to accelerate development workflows from end-to-end.
  • Optimize code: Generate unit tests for new merge requests to save developer time and ensure consistent quality assurance practices are enforced across teams.
  • Maximize quality and security: Provide AI-driven code quality, security reviews and generated fixes to accelerate feedback cycles.
  • Transform enterprise workloads: Starting with Java 8 or 11 codebases, developers can upgrade to Java 17 directly from a GitLab project to improve application security and performance, and remove technical debt.

Visit the Amazon Q Developer integrations page to learn more.

Announcing the preview of Amazon SageMaker Unified Studio

Published Date: 2024-12-03 18:00:00

Today, AWS announces the next generation of Amazon SageMaker, including the preview launch of Amazon SageMaker Unified Studio, an integrated data and AI development environment that enables collaboration and helps teams build data products faster. SageMaker Unified Studio brings together familiar tools from AWS analytics and AI/ML services for data processing, SQL analytics, machine learning model development, and generative AI application development. Amazon SageMaker Lakehouse, which is accessible through SageMaker Unified Studio, provides open source compatibility and access to data stored across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third-party and federated data sources. Enhanced governance features are built in to help you meet enterprise security requirements.

SageMaker Unified Studio allows you to find, access, and query data and AI assets across your organization, then work together in projects to securely build and share analytics and AI artifacts, including data, models, and generative AI applications. SageMaker Unified Studio offers the capabilities to build integrated data pipelines with visual extract, transform, and load (ETL), develop ML models, and create custom generative AI applications. New unified Jupyter Notebooks enable seamless work across different compute resources and clusters, while an integrated SQL editor lets you query your data stored in various sources—all within a single, collaborative environment. Amazon Bedrock IDE, formerly Amazon Bedrock Studio, is now part of the SageMaker Unified Studio in public preview, offering the capabilities to rapidly build and customize generative AI applications. Amazon Q Developer, the most capable generative AI assistant for software development, is integrated into SageMaker Unified Studio to accelerate and streamline tasks across the development lifecycle.

For more information on AWS Regions where SageMaker Unified Studio is available in preview, see Supported Regions.

To get started, see the following resources:

Data Lineage is now generally available in Amazon DataZone and next generation of Amazon SageMaker

Published Date: 2024-12-03 18:00:00

AWS announces the general availability of Data Lineage in Amazon DataZone and the next generation of Amazon SageMaker, a capability that automatically captures lineage from AWS Glue and Amazon Redshift to visualize lineage events from source to consumption. Being OpenLineage-compatible, this feature allows data producers to augment the automated lineage with lineage events captured from OpenLineage-enabled systems or through the API, providing a comprehensive data movement view to data consumers. This feature automates lineage capture of schema and transformations of data assets and columns from AWS Glue, Amazon Redshift, and Spark executions to maintain consistency and reduce errors. With built-in automation, domain administrators and data producers can automate the capture and storage of lineage events when data is configured for data sharing in the business data catalog. Data consumers can gain confidence in an asset's origin from the comprehensive view of its lineage, while data producers can assess the impact of changes to an asset by understanding its consumption. Additionally, the data lineage feature versions lineage with each event, enabling users to visualize lineage at any point in time or compare transformations across an asset's or job's history. This historical lineage provides a deeper understanding of how data has evolved, which is essential for troubleshooting, auditing, and validating the integrity of data assets. The data lineage feature is generally available in all AWS Regions where Amazon DataZone and the next generation of Amazon SageMaker are available. To learn more, visit Amazon DataZone and the next generation of Amazon SageMaker.
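Because the feature is OpenLineage-compatible, an external system can contribute lineage by emitting a standard OpenLineage RunEvent. The sketch below builds a minimal event; the job, dataset, and run identifiers are placeholders, and the `post_lineage_event` call shown in the comment is how DataZone accepts such events (verify the exact parameters against the DataZone API reference):

```python
# Sketch: a minimal OpenLineage RunEvent that an OpenLineage-enabled system
# could emit to augment automatically captured lineage. Names are placeholders.
from datetime import datetime, timezone

def build_run_event(job_name: str, inputs: list, outputs: list) -> dict:
    return {
        "eventType": "COMPLETE",
        "eventTime": datetime.now(timezone.utc).isoformat(),
        "run": {"runId": "01939b7c-0000-7000-8000-000000000000"},
        "job": {"namespace": "my-etl", "name": job_name},
        "inputs": [{"namespace": "s3://raw-bucket", "name": n} for n in inputs],
        "outputs": [{"namespace": "s3://curated-bucket", "name": n} for n in outputs],
        "producer": "https://example.com/my-etl-runner",
    }

event = build_run_event("daily-orders-transform", ["orders_raw"], ["orders_curated"])

# With credentials configured:
# import json, boto3
# client = boto3.client("datazone")
# client.post_lineage_event(domainIdentifier="dzd_example", event=json.dumps(event))
```

Each posted event is versioned, which is what enables the point-in-time lineage views and run-to-run comparisons described above.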

Amazon Q in QuickSight unifies insights from structured and unstructured data

Published Date: 2024-12-03 18:00:00

Now generally available, Amazon Q in QuickSight provides users with unified insights from structured and unstructured data sources through integration with Amazon Q Business. While structured data is managed in conventional systems, unstructured data such as document libraries, webpages, images, and more has remained largely untapped due to its diverse and distributed nature. With Amazon Q in QuickSight, business users can now augment insights from traditional BI data sources such as databases, data lakes, and data warehouses with contextual information from unstructured sources. Users can get augmented insights within QuickSight's BI interface across multi-visual Q&A and data stories. Users can use multi-visual Q&A to ask questions in natural language and get visualizations and data summaries augmented with contextual insights from Amazon Q Business. With data stories in Amazon Q in QuickSight, users can upload documents or connect to unstructured data sources from Amazon Q Business to create richer narratives or presentations explaining their data with additional context. This integration enables organizations to harness insights from all their data without the need for manual collation, leading to more informed decision-making, time savings, and a significant competitive edge in the data-driven business landscape. This new capability is generally available to all Amazon QuickSight Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the AWS Business Intelligence Blog and the Amazon Q Business What's New post, and try QuickSight free for 30 days.

Announcing Amazon S3 Tables – Fully managed Apache Iceberg tables optimized for analytics workloads

Published Date: 2024-12-03 18:00:00

Amazon S3 Tables deliver the first cloud object store with built-in Apache Iceberg support, and the easiest way to store tabular data at scale. S3 Tables are specifically optimized for analytics workloads, resulting in up to 3x faster query throughput and up to 10x higher transactions per second compared to self-managed tables. With S3 Tables support for the Apache Iceberg standard, your tabular data can be easily queried by popular AWS and third-party query engines. Additionally, S3 Tables are designed to perform continual table maintenance to automatically optimize query efficiency and storage cost over time, even as your data lake scales and evolves. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight.

S3 Tables introduce table buckets, a new bucket type that is purpose-built to store tabular data. With table buckets, you can quickly create tables and set up table-level permissions to manage access to your data lake. You can then load and query data in your tables with standard SQL, and take advantage of Apache Iceberg’s advanced analytics capabilities such as row-level transactions, queryable snapshots, schema evolution, and more. Table buckets also provide policy-driven table maintenance, helping you to automate operational tasks such as compaction, snapshot management, and unreferenced file removal.
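As a rough illustration, the provisioning flow above can be sketched with the boto3 `s3tables` client. The bucket, namespace, and table names, and the ARN format helper, are illustrative assumptions; exact parameters should be checked against the S3 Tables API reference:

```python
def table_bucket_arn(region, account_id, bucket_name):
    # Assumed ARN format for an S3 table bucket (illustrative).
    return f"arn:aws:s3tables:{region}:{account_id}:bucket/{bucket_name}"

def create_analytics_table(region="us-east-1", account_id="111122223333"):
    # Hypothetical sketch: create a table bucket, a namespace, and an
    # Iceberg table. Not executed here; real calls need AWS credentials.
    import boto3  # imported lazily so the sketch is readable without AWS access

    s3tables = boto3.client("s3tables", region_name=region)
    s3tables.create_table_bucket(name="analytics-tables")
    arn = table_bucket_arn(region, account_id, "analytics-tables")
    s3tables.create_namespace(tableBucketARN=arn, namespace=["sales"])
    s3tables.create_table(
        tableBucketARN=arn, namespace="sales",
        name="daily_orders", format="ICEBERG",
    )
```

Once the table exists, it can be loaded and queried with standard SQL from an Iceberg-compatible engine such as Athena or Spark.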

Amazon S3 Tables are now available in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.

Amazon Q Developer can now generate documentation within your source code

Published Date: 2024-12-03 18:00:00

Starting today, Amazon Q Developer can document your code by automatically generating readme files and data-flow diagrams within your projects.

Today, developers report they spend an average of just one hour per day coding. They spend most of their time on tedious, undifferentiated tasks such as learning codebases, writing and reviewing documentation, testing, managing deployments, troubleshooting issues or finding and fixing vulnerabilities. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. With this new capability, Q Developer can help you understand your existing code bases faster, or quickly document new features, so you can focus on shipping features for your customers.

This capability is available in the integrated development environment (IDE) through a new chat command: /doc. You can get started generating documentation within the Visual Studio Code and IntelliJ IDEA IDEs with an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with generating documentation, visit Amazon Q Developer or read the news blog.

Announcing Amazon Bedrock IDE in preview as part of Amazon SageMaker Unified Studio

Published Date: 2024-12-03 18:00:00

Today we are announcing the preview launch of Amazon Bedrock IDE, a governed collaborative environment integrated within Amazon SageMaker Unified Studio (preview) that enables developers to swiftly build and tailor generative AI applications. It provides an intuitive interface for developers across various skill levels to access Amazon Bedrock's high-performing foundation models (FMs) and advanced customization capabilities in order to collaboratively build custom generative AI applications.

Amazon Bedrock IDE's integration into Amazon SageMaker Unified Studio removes barriers between data, tools, and builders for generative AI development. Teams can now access their preferred analytics and ML tools alongside Amazon Bedrock IDE's specialized tools for building generative AI applications. Developers can leverage Retrieval Augmented Generation (RAG) to create Knowledge Bases from their proprietary data sources, Agents for complex task automation, and Guardrails for responsible AI development. This unified workspace reduces complexity, accelerating the prototyping, iteration, and deployment of production-ready, responsible generative AI apps aligned with business needs.

Amazon Bedrock IDE is now available in Amazon SageMaker Unified Studio and supported in five Regions. For more information on supported Regions, please refer to the Amazon SageMaker Unified Studio regions guide. Learn more about Amazon Bedrock IDE and its features by visiting the Amazon Bedrock IDE user guide, and get started with Bedrock IDE by enabling a “Generative AI application development” project profile using the admin guide.

Amazon Q Business now provides insights from your databases and data warehouses (preview)

Published Date: 2024-12-03 18:00:00

Today, AWS announces the public preview of the integration between Amazon Q Business and Amazon QuickSight, delivering a transformative capability that unifies answers from structured data sources (databases, warehouses) and unstructured data (documents, wikis, emails) in a single application. Amazon Q Business is a generative AI–powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon QuickSight is a business intelligence (BI) tool that helps you visualize and understand your structured data through interactive dashboards, reports, and analytics.

While organizations want to leverage generative AI for business insights, they experience fragmented access to unstructured and structured data. With the QuickSight integration, customers can now link their structured sources to Amazon Q Business through QuickSight’s extensive set of data source connectors. Amazon Q Business responds in real time, combining the QuickSight answer from your structured sources with any other relevant information found in documents. For example, users could ask about revenue comparisons, and Amazon Q Business will return an answer from PDF financial reports along with real-time charts and metrics from QuickSight. This integration unifies insights across knowledge sources, helping organizations make more informed decisions while reducing the time and complexity traditionally required to gather insights.

This integration is available to all Amazon Q Business Pro, Amazon QuickSight Reader Pro, and Author Pro users in the US East (N. Virginia) and US West (Oregon) AWS Regions. To learn more, visit the Amazon Q Business documentation site.

Announcing Amazon Aurora DSQL (Preview)

Published Date: 2024-12-03 18:00:00

Today, AWS announces the preview of Amazon Aurora DSQL, a new serverless, distributed SQL database with active-active high availability. Aurora DSQL allows you to build always available applications with virtually unlimited scalability, the highest availability, and zero infrastructure management. It is designed to make scaling and resiliency effortless for your applications, and offers the fastest distributed SQL reads and writes.

Aurora DSQL provides virtually unlimited horizontal scaling with the flexibility to independently scale reads, writes, compute, and storage. It automatically scales to meet any workload demand without database sharding or instance upgrades. Its active-active distributed architecture is designed for 99.99% single-Region and 99.999% multi-Region availability with no single point of failure, and automated failure recovery. This ensures that all reads and writes to any Regional endpoint are strongly consistent and durable. Aurora DSQL is PostgreSQL compatible, offering an easy-to-use developer experience.

Aurora DSQL is now available in preview in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon).

To learn more about Aurora DSQL features and benefits, check out the Aurora DSQL overview page and documentation. Aurora DSQL is available at no charge during preview. Get started in only a few steps by going to the Aurora DSQL console or using the Aurora DSQL API or AWS CLI.
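Because Aurora DSQL is PostgreSQL compatible, connecting looks like any other PostgreSQL connection, with an IAM-generated token serving as the password. The sketch below is a hypothetical illustration: the cluster endpoint format, the `dsql` token-helper signature, and the use of the third-party `psycopg` driver are all assumptions to verify against the Aurora DSQL documentation:

```python
def cluster_endpoint(cluster_id, region):
    # Assumed public endpoint shape for a DSQL cluster (illustrative).
    return f"{cluster_id}.dsql.{region}.on.aws"

def connect(cluster_id, region="us-east-1"):
    # Hypothetical sketch; not executed here. Real calls need AWS
    # credentials and the psycopg driver installed.
    import boto3
    import psycopg  # any PostgreSQL-compatible client should work

    host = cluster_endpoint(cluster_id, region)
    token = boto3.client("dsql", region_name=region) \
        .generate_db_connect_admin_auth_token(Hostname=host, Region=region)
    # The IAM auth token acts as the password; SSL is required.
    return psycopg.connect(host=host, user="admin", password=token,
                           dbname="postgres", sslmode="require")
```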

Announcing Amazon Nova foundation models available today in Amazon Bedrock

Published Date: 2024-12-03 18:00:00

We’re excited to announce Amazon Nova, a new generation of state-of-the-art (SOTA) foundation models (FMs) that deliver frontier intelligence and industry leading price performance. Amazon Nova models available today on Amazon Bedrock are:

  • Amazon Nova Micro, a text-only model that delivers the lowest latency responses at very low cost.
  • Amazon Nova Lite, a very low-cost multimodal model that is lightning fast for processing image, video, and text inputs.
  • Amazon Nova Pro, a highly capable multimodal model with the best combination of accuracy, speed, and cost for a wide range of tasks.
  • Amazon Nova Canvas, a state-of-the-art image generation model.
  • Amazon Nova Reel, a state-of-the-art video generation model.

Amazon Nova Micro, Amazon Nova Lite, and Amazon Nova Pro are among the fastest and most cost-effective models in their respective intelligence classes. These models have also been optimized to make them easy to use and effective in RAG and agentic applications. With text and vision fine-tuning on Amazon Bedrock, you can customize Amazon Nova Micro, Lite, and Pro to deliver the optimal intelligence, speed, and cost for your needs. With Amazon Nova Canvas and Amazon Nova Reel, you get access to production-grade visual content, with built-in controls for safe and responsible AI use like watermarking and content moderation. You can see the latest benchmarks and examples of these models on the Amazon Nova product page.

Amazon Nova foundation models are available in Amazon Bedrock in the US East (N. Virginia) Region. Amazon Nova Micro, Lite, and Pro models are also available in the US West (Oregon) and US East (Ohio) Regions via cross-region inference. Learn more about Amazon Nova at the AWS News Blog, the Amazon Nova product page, or the Amazon Nova user guide. You can get started with Amazon Nova foundation models in Amazon Bedrock from the Amazon Bedrock console.
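A minimal sketch of invoking a Nova model through the Bedrock Converse API is shown below. The model ID and inference settings are illustrative assumptions; the request shape follows the standard Bedrock Runtime Converse API:

```python
def nova_request(prompt, model_id="amazon.nova-lite-v1:0"):
    # Pure helper: build a Converse API request body.
    # The model ID is an assumed example; check the Bedrock model catalog.
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.3},
    }

def ask_nova(prompt, region="us-east-1"):
    # Hypothetical call; real invocations need AWS credentials and
    # model access enabled in the Bedrock console.
    import boto3  # imported lazily
    runtime = boto3.client("bedrock-runtime", region_name=region)
    response = runtime.converse(**nova_request(prompt))
    return response["output"]["message"]["content"][0]["text"]
```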

Amazon Q Developer now provides transformation capabilities for .NET porting (Preview)

Published Date: 2024-12-03 18:00:00

Today, AWS announces new generative AI–powered transformation capabilities of Amazon Q Developer in public preview to accelerate porting of .NET Framework applications to cross-platform .NET. Using these capabilities, you can modernize your Windows .NET applications to be Linux-ready up to four times faster than traditional methods and realize up to 40% savings in licensing costs. With this launch, Amazon Q Developer is now equipped with agentic capabilities for transformation that allow you to port hundreds of .NET Framework applications running on Windows to Linux-ready cross-platform .NET.

Using Amazon Q Developer, you can delegate tedious manual porting tasks and help free up your team’s time to focus on innovation. You can chat with Amazon Q Developer in natural language to share high-level transformation objectives and connect it to your source code repositories. Amazon Q Developer then starts the transformation process with an assessment of your application code to identify .NET versions, supported project types, and their dependencies, and then ports the assessed application code along with its accompanying unit tests to cross-platform .NET. You and your team can collaboratively review, adjust, and approve the transformation process. Additionally, Amazon Q Developer provides a detailed work log as a documented trail of transformation decisions to support your organizational compliance objectives.

The transformation capabilities of Amazon Q Developer are available in public preview via a web experience and in your Visual Studio integrated development environment (IDE). To learn more, read the blogs on the web experience and the IDE experience, and visit the Amazon Q Developer transformation capabilities webpage and documentation.

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse

Published Date: 2024-12-03 18:00:00

Amazon DynamoDB zero-ETL integration with Amazon SageMaker Lakehouse automates the extracting and loading of data from a DynamoDB table into SageMaker Lakehouse, an open and secure lakehouse. You can run analytics and machine learning workloads on your DynamoDB data using SageMaker Lakehouse, without impacting production workloads running on DynamoDB. With this launch, you now have the option to enable analytics workloads using SageMaker Lakehouse, in addition to the previously available Amazon OpenSearch Service and Amazon Redshift zero-ETL integrations.

Using the no-code interface, you can maintain an up-to-date replica of your DynamoDB data in the data lake by quickly setting up your integration to handle the complete process of replicating data and updating records. This zero-ETL integration reduces the complexity and operational burden of data replication to let you focus on deriving insights from your data. You can create and manage integrations using the AWS Management Console, the AWS Command Line Interface (AWS CLI), or the SageMaker Lakehouse APIs.

DynamoDB zero-ETL integration with SageMaker Lakehouse is now available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Stockholm), Europe (Frankfurt), and Europe (Ireland) AWS Regions.

To learn more, visit DynamoDB integrations and read the documentation.

Amazon S3 Access Grants now integrate with AWS Glue

Published Date: 2024-12-03 18:00:00

Amazon S3 Access Grants now integrate with AWS Glue for analytics, machine learning (ML), and application development workloads in AWS. S3 Access Grants map identities from your identity provider (IdP), such as Entra ID or Okta, or AWS Identity and Access Management (IAM) principals, to datasets stored in Amazon S3. This integration gives you the ability to manage S3 permissions for end users running jobs with AWS Glue 5.0 or later, without the need to write and maintain bucket policies or individual IAM roles.

AWS Glue provides a data integration service that simplifies data exploration, preparation, and integration from multiple sources, including S3. Using S3 Access Grants, you can grant permissions to buckets or prefixes in S3 to users and groups in an existing corporate directory, or to IAM users and roles. When end users in the appropriate user groups access S3 using Glue ETL for Apache Spark, they automatically have the necessary permissions to read and write data. S3 Access Grants also automatically update S3 permissions as users are added to and removed from user groups in the IdP.

Amazon S3 Access Grants support is available when using AWS Glue 5.0 and later, and is available in all commercial AWS Regions where AWS Glue 5.0 and AWS IAM Identity Center are available. For pricing details, visit Amazon S3 pricing and AWS Glue pricing. To learn more about S3 Access Grants, refer to the S3 User Guide.
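For illustration, granting a directory group access to an S3 prefix might look like the following hedged sketch using the `s3control` CreateAccessGrant operation. The account, location, and group identifiers are placeholders, and the parameter names should be confirmed against the S3 Access Grants API reference:

```python
def access_grant_params(account_id, location_id, group_id, prefix):
    # Pure helper: assumed request parameters for CreateAccessGrant.
    # All identifier values here are illustrative placeholders.
    return {
        "AccountId": account_id,
        "AccessGrantsLocationId": location_id,
        "AccessGrantsLocationConfiguration": {"S3SubPrefix": prefix},
        "Grantee": {"GranteeType": "DIRECTORY_GROUP",
                    "GranteeIdentifier": group_id},
        "Permission": "READWRITE",
    }

def grant_group_access(**kwargs):
    # Hypothetical call; real invocations need AWS credentials and an
    # existing Access Grants instance and registered location.
    import boto3  # imported lazily
    s3control = boto3.client("s3control")
    return s3control.create_access_grant(**access_grant_params(**kwargs))
```

With a grant like this in place, Glue 5.0 jobs run by members of the group would pick up the mapped S3 permissions automatically.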

AWS expands data connectivity for Amazon SageMaker Lakehouse and AWS Glue

Published Date: 2024-12-03 18:00:00

Amazon SageMaker Lakehouse announces unified data connectivity capabilities to streamline the creation, management, and usage of connections to data sources across databases, data lakes, and enterprise applications. SageMaker Lakehouse unified data connectivity provides a connection configuration template, support for standard authentication methods like basic authentication and OAuth 2.0, connection testing, metadata retrieval, and data preview. Customers can create SageMaker Lakehouse connections through SageMaker Unified Studio (preview), the AWS Glue console, or custom-built applications using the AWS Glue APIs.

With SageMaker Lakehouse unified data connectivity, a data connection is configured once and can be reused by SageMaker Unified Studio, AWS Glue, and Amazon Athena for use cases in data integration, data analytics, and data science. You gain confidence in the established connection by validating credentials with connection testing. With the ability to browse metadata, you can understand the structure and schema of the data source and identify relevant tables and fields. Lastly, the data preview capability supports mapping source fields to target schemas, identifying needed data transformations, and receiving immediate feedback on source data queries.

SageMaker Lakehouse unified data connectivity is available wherever Amazon SageMaker Lakehouse or AWS Glue is available. To get started, visit the AWS Glue connection documentation or the Amazon SageMaker Lakehouse data connection documentation.

Introducing AWS Glue 5.0

Published Date: 2024-12-03 18:00:00

Today, we are excited to announce the general availability of AWS Glue 5.0. With AWS Glue 5.0, you get improved performance, enhanced security, support for Amazon SageMaker Unified Studio and SageMaker Lakehouse, and more. AWS Glue 5.0 enables you to develop, run, and scale your data integration workloads and get insights faster. AWS Glue is a serverless, scalable data integration service that makes it simple to discover, prepare, move, and integrate data from multiple sources.

AWS Glue 5.0 upgrades the engines to Apache Spark 3.5.2, Python 3.11, and Java 17, with new performance and security improvements. Glue 5.0 updates open table format support to Apache Hudi 0.15.0, Apache Iceberg 1.6.1, and Delta Lake 3.2.0 so you can solve advanced use cases around performance, cost, governance, and privacy in your data lakes. AWS Glue 5.0 adds Spark-native fine-grained access control with AWS Lake Formation, so you can apply table-, column-, row-, and cell-level permissions on Amazon S3 data lakes. Finally, Glue 5.0 adds support for SageMaker Lakehouse to unify all your data across Amazon S3 data lakes and Amazon Redshift data warehouses.

AWS Glue 5.0 is generally available today in the US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (London), Europe (Stockholm), Europe (Frankfurt), Asia Pacific (Hong Kong), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), and South America (São Paulo) Regions. To learn more, visit the AWS Glue product page and documentation.

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications

Published Date: 2024-12-03 18:00:00

Amazon SageMaker Lakehouse and Amazon Redshift now support zero-ETL integrations from applications, automating the extraction and loading of data from eight applications, including Salesforce, SAP, ServiceNow, and Zendesk. As an open, unified, and secure lakehouse for your analytics and AI initiatives, Amazon SageMaker Lakehouse enhances these integrations to streamline your data management processes.

These zero-ETL integrations are fully managed by AWS and minimize the need to build ETL data pipelines. With this new zero-ETL integration, you can efficiently extract and load valuable data from your customer support, relationship management, and ERP applications into your data lake and data warehouse for analysis. Zero-ETL integration reduces users' operational burden and saves the weeks of engineering effort needed to design, build, and test data pipelines. By selecting a few settings in the no-code interface, you can quickly set up your zero-ETL integration to automatically ingest and continually maintain an up-to-date replica of your data in the data lake and data warehouse. Zero-ETL integrations help you focus on deriving insights from your application data, breaking down data silos in your organization and improving operational efficiency. Now run enhanced analysis on your application data using Apache Spark and Amazon Redshift for analytics or machine learning. Optimize your data ingestion processes and focus instead on analysis and gaining insights.

Amazon SageMaker Lakehouse and Amazon Redshift support for zero-ETL integrations from eight applications is now generally available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Hong Kong), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Europe (Frankfurt), Europe (Ireland), and Europe (Stockholm) Regions.

You can create and manage integrations using the AWS Glue console, the AWS Command Line Interface (AWS CLI), or the AWS Glue APIs. To learn more, visit What is zero-ETL and What is AWS Glue.

AWS announces Amazon SageMaker Lakehouse

Published Date: 2024-12-03 18:00:00

AWS announces Amazon SageMaker Lakehouse, a unified, open, and secure data lakehouse that simplifies your analytics and artificial intelligence (AI). Amazon SageMaker Lakehouse unifies all your data across Amazon S3 data lakes and Amazon Redshift data warehouses, helping you build powerful analytics and AI/ML applications on a single copy of data. SageMaker Lakehouse gives you the flexibility to access and query your data in place with the Apache Iceberg open standard. All data in SageMaker Lakehouse can be queried from SageMaker Unified Studio (preview) and engines such as Amazon EMR, AWS Glue, Amazon Redshift, or Apache Spark. You can secure your data in the lakehouse by defining fine-grained permissions, which are consistently applied across all analytics and ML tools and engines.

With SageMaker Lakehouse, you can use your existing investments. You can seamlessly make data from your Redshift data warehouses available for analytics and AI/ML. In addition, you can now create data lakes by leveraging the analytics-optimized Redshift Managed Storage (RMS). Bringing data into the lakehouse is easy: you can use zero-ETL to bring data from operational databases, streaming services, and applications, or query data in place via federated query.

SageMaker Lakehouse is available in the US East (N. Virginia), US East (Ohio), Europe (Ireland), US West (Oregon), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (London), Asia Pacific (Sydney), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Singapore), Asia Pacific (Seoul), and South America (São Paulo) Regions.

SageMaker Lakehouse is accessible directly from SageMaker Unified Studio. In addition, you can access SageMaker Lakehouse from the AWS Console, AWS Glue APIs, and CLIs. To learn more, visit SageMaker Lakehouse and read the launch blog. For pricing information, please visit here.

Amazon Q Developer announces automatic unit test generation to accelerate feature development

Published Date: 2024-12-03 18:00:00

Today, Amazon Q Developer announces the general availability of a new agent that automates the process of generating unit tests. This agent can be easily initiated by using a simple prompt: “/test”. Once prompted, Amazon Q will use the knowledge of your project to automatically generate and add tests to your project, helping improve code quality, fast. Amazon Q Developer will also ask you to provide consent before adding tests, so you always stay in the loop and no unintended changes are made.

Automation saves the time and effort needed to write comprehensive unit tests, allowing you to focus on building innovative features. With the ability to quickly add unit tests and increase coverage across code, organizations can safely and more reliably ship code, accelerating feature development across the software development lifecycle.

Automatic unit test generation is generally available within the Visual Studio Code and JetBrains integrated development environments (IDEs), or in public preview as part of the new GitLab Duo with Amazon Q offering, in all AWS Regions where Amazon Q Developer is available. Learn more about unit test generation.

Amazon Bedrock now supports multi-agent collaboration

Published Date: 2024-12-03 18:00:00

Amazon Bedrock now supports multi-agent collaboration, allowing organizations to build and manage multiple AI agents that work together to solve complex workflows. This feature allows developers to create agents with specialized roles tailored for specific business needs, such as financial data collection, research, and decision-making. By enabling seamless agent collaboration, Amazon Bedrock empowers organizations to optimize performance across industries like finance, customer service, and healthcare.

With multi-agent collaboration on Amazon Bedrock, organizations can effortlessly master complex workflows, achieving highly accurate and scalable results across diverse applications. In financial services, for example, specialized agents coordinate to gather data, analyze trends, and provide actionable recommendations—working in parallel to improve response times and precision. This collaborative feature allows businesses to quickly build, deploy, and scale multi-agent setups, reducing development time while ensuring seamless integration and adaptability to evolving needs.

Multi-agent collaboration is currently available in the US East (N. Virginia), US West (Oregon), and Europe (Ireland) AWS Regions.

To learn more, visit Amazon Bedrock Agents.

Introducing the next generation of Amazon SageMaker

Published Date: 2024-12-03 18:00:00

Today, AWS announces the next generation of Amazon SageMaker, a unified platform for data, analytics, and AI. This launch brings together widely adopted AWS machine learning and analytics capabilities and provides an integrated experience for analytics and AI with unified access to data and built-in governance. Teams can collaborate and build faster from a single development environment using familiar AWS tools for model development, generative AI application development, data processing, and SQL analytics, accelerated by Amazon Q Developer, the most capable generative AI assistant for software development.

The next generation of SageMaker also introduces new capabilities, including Amazon SageMaker Unified Studio (preview), Amazon SageMaker Lakehouse, and Amazon SageMaker Data and AI Governance. Within the new SageMaker Unified Studio, users can discover their data and put it to work using the best tool for the job across data and AI use cases. SageMaker Unified Studio brings together functionality and tools from the range of standalone studios, query editors, and visual tools available today in Amazon EMR, AWS Glue, Amazon Redshift, Amazon Bedrock, and the existing Amazon SageMaker Studio. SageMaker Lakehouse provides an open data architecture that reduces data silos and unifies data across Amazon Simple Storage Service (Amazon S3) data lakes, Amazon Redshift data warehouses, and third party and federated data sources. SageMaker Lakehouse offers the flexibility to access and query data with Apache Iceberg–compatible tools and engines. SageMaker Data and AI Governance, including Amazon SageMaker Catalog built on Amazon DataZone, empowers users to securely discover, govern, and collaborate on data and AI workflows.

For more information on AWS Regions where the next generation of Amazon SageMaker is available, see Supported Regions.

To learn more and get started, visit the following resources:

AWS Glue Data Catalog now automates generating statistics for new tables

Published Date: 2024-12-03 18:00:00

AWS Glue Data Catalog now automates generating statistics for new tables. These statistics are integrated with the cost-based optimizer (CBO) in Amazon Redshift and Amazon Athena, resulting in improved query performance and potential cost savings. Table statistics are used by a query engine, such as Amazon Redshift or Amazon Athena, to determine the most efficient way to execute a query.

Previously, creating statistics for Apache Iceberg tables in AWS Glue Data Catalog required you to continuously monitor and update configurations for your tables. Now, AWS Glue Data Catalog lets you generate statistics automatically for new tables with a one-time catalog configuration. You can get started by selecting the default catalog in the Lake Formation console and enabling table statistics in the table optimization configuration tab. As new tables are created or existing tables are updated, statistics are generated using a sample of rows for all columns and refreshed periodically. For Apache Iceberg tables, these statistics include the number of distinct values (NDVs). For other file formats like Parquet, additional statistics are collected, such as the number of nulls, maximum and minimum values, and average length. Amazon Redshift and Amazon Athena use the updated statistics to optimize queries, applying optimizations such as optimal join ordering or cost-based aggregation pushdown. The AWS Glue Data Catalog console provides visibility into the updated statistics and statistics generation runs.

Automated statistics generation for the AWS Glue Data Catalog is generally available in the following AWS Regions: US East (N. Virginia, Ohio), US West (N. California, Oregon), Europe (Ireland), and Asia Pacific (Tokyo). Read the blog post and visit the AWS Glue Data Catalog documentation to learn more.

Amazon Q Developer can now automate code reviews

Published Date: 2024-12-03 18:00:00

Starting today, Amazon Q Developer can also perform code reviews, automatically providing comments on your code in the IDE, flagging suspicious code patterns, providing patches where available, and even assessing deployment risk so you can get feedback on your code quickly. Q Developer is a generative AI-powered assistant for designing, building, testing, deploying, and maintaining software. Its agents for software development have a deep understanding of your entire code repos, so they can accelerate many tasks beyond coding. By automating the first round of code reviews and improving review consistency, Q Developer empowers code authors to fix issues faster, streamlining the process for both authors and reviewers. With this new capability, Q Developer can help you get immediate feedback for your code reviews and code fixes where available, so you can increase the speed of iteration and improve the quality of your code.

This capability is available in the integrated development environment (IDE) through a new chat command: /review. You can start automating code reviews via the Visual Studio Code and IntelliJ IDEA Integrated Development Environments (IDEs) with either an Amazon Q Developer Free Tier or Pro Tier subscription. For more details on pricing, see Amazon Q Developer pricing.

This capability is available in all AWS Regions where Amazon Q Developer is available. To get started with automating code reviews, visit Amazon Q Developer or read the news blog.

Amazon Bedrock Model Distillation is now available in preview

Published Date: 2024-12-03 18:00:00

With Amazon Bedrock Model Distillation, customers can use smaller, faster, more cost-effective models that deliver use-case-specific accuracy comparable to the most capable models in Amazon Bedrock. Today, fine-tuning a smaller, cost-efficient model to increase its accuracy for a customer's use case is an iterative process: customers need to write prompts and responses, refine the training dataset, ensure that the training dataset captures diverse examples, and adjust the training parameters.

Amazon Bedrock Model Distillation automates the process needed to generate synthetic data from the teacher model, trains and evaluates the student model, and then hosts the final distilled model for inference. To remove some of the burden of iteration, Model Distillation may choose to apply different data synthesis methods that are best suited for your use case to create a distilled model that approximately matches the advanced model for the specific use case. For example, Bedrock may expand the training dataset by generating similar prompts, or generate high-quality synthetic responses using customer-provided prompt-response pairs as golden examples. Learn more in our documentation and blog.
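As a hedged sketch, starting a distillation job programmatically might look like the following. The request shape (notably the `customizationConfig`/`distillationConfig` fields), the model identifiers, and the S3 URIs are all assumptions to verify against the Bedrock CreateModelCustomizationJob reference:

```python
def distillation_job_params(job_name, role_arn, student_model, teacher_model,
                            training_s3_uri, output_s3_uri):
    # Pure helper: assumed request shape for a Bedrock distillation job.
    # Field names and values here are illustrative, not authoritative.
    return {
        "jobName": job_name,
        "customModelName": f"{job_name}-distilled",
        "roleArn": role_arn,
        "baseModelIdentifier": student_model,   # the smaller "student" model
        "customizationType": "DISTILLATION",
        "trainingDataConfig": {"s3Uri": training_s3_uri},
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "customizationConfig": {
            "distillationConfig": {
                "teacherModelConfig": {"teacherModelIdentifier": teacher_model}
            }
        },
    }

def start_distillation(**kwargs):
    # Hypothetical call; real invocations need AWS credentials and an
    # IAM role with access to the training data bucket.
    import boto3  # imported lazily
    bedrock = boto3.client("bedrock")
    return bedrock.create_model_customization_job(**distillation_job_params(**kwargs))
```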

Amazon Q Business introduces over 50 actions for popular business applications and platforms

Published Date: 2024-12-03 18:00:00

Today, we are excited to announce that Amazon Q Business, including Amazon Q Apps, has expanded its capabilities with a ready-to-use library of over 50 actions, delivered through plugins for popular business applications and platforms. This enhancement allows Amazon Q Business users to complete tasks in other applications without leaving the Amazon Q Business interface, improving the user experience and operational efficiency.

The new plugins cover a wide range of widely used business tools, including PagerDuty, Salesforce, Jira, Smartsheet, and ServiceNow. These integrations enable users to perform tasks such as creating and updating tickets, managing incidents, and accessing project information directly from within Amazon Q Business. With Amazon Q Apps, users can further automate their everyday tasks by leveraging the newly introduced actions directly within their purpose-built apps.

The new plugins are available in all AWS Regions where Amazon Q Business is available. To get started, customers can access the plugins directly from their Amazon Q Business interface. To learn more about Amazon Q Business plugins and how they can enhance your organization's productivity, visit the Amazon Q Business product page or explore the Amazon Q Business plugin documentation.

Amazon DynamoDB global tables previews multi-Region strong consistency

Published Date: 2024-12-03 18:00:00

Starting today in preview, Amazon DynamoDB global tables now supports multi-Region strong consistency. DynamoDB global tables is a fully managed, serverless, multi-Region, and multi-active database used by tens of thousands of customers. With this new capability, you can now build highly available multi-Region applications with a Recovery Point Objective (RPO) of zero, achieving the highest level of resilience.

Multi-Region strong consistency ensures your applications can always read the latest version of data from any Region in a global table, removing the undifferentiated heavy lifting of managing consistency across multiple Regions. It is useful for building global applications with strict consistency requirements, such as user profile management, inventory tracking, and financial transaction processing.

The preview of DynamoDB global tables with multi-Region strong consistency is available in the following Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon). DynamoDB global tables with multi-Region strong consistency is billed according to existing global tables pricing. To learn more about global tables multi-Region strong consistency, see the preview documentation. For information about DynamoDB global tables, see the global tables information page and the developer guide.
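As a minimal sketch, a strongly consistent read against a multi-Region strong consistency (MRSC) global table uses the standard DynamoDB GetItem API with `ConsistentRead=True`; the table and key names below are hypothetical placeholders.

```python
# Sketch: a strongly consistent read from an MRSC global table replica.
# With multi-Region strong consistency enabled, ConsistentRead=True
# returns the latest committed write regardless of which Region
# serves the request.

def build_strongly_consistent_read(table_name, key):
    """Build a GetItem request that opts in to strong consistency."""
    return {
        "TableName": table_name,
        "Key": key,
        "ConsistentRead": True,  # required for a strongly consistent read
    }

params = build_strongly_consistent_read(
    "UserProfiles",                     # hypothetical table name
    {"UserId": {"S": "user-1234"}},     # hypothetical key
)
# In a real environment, against any replica Region:
#   boto3.client("dynamodb", region_name="us-east-2").get_item(**params)
```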

Amazon SageMaker Lakehouse integrated access controls now available in Amazon Athena federated queries

Published Date: 2024-12-03 18:00:00

Amazon SageMaker now supports connectivity, discovery, querying, and enforcing fine-grained data access controls on federated sources when querying data with Amazon Athena. Athena is a query service that makes it simple to analyze your data lake and federated data sources such as Amazon Redshift, Amazon DynamoDB, or Snowflake using SQL without extract, transform, and load (ETL) scripts. Now, data workers can connect to and unify these data sources within SageMaker Lakehouse. Federated source metadata is unified in SageMaker Lakehouse, where you apply fine-grained policies in one place, helping to streamline analytics workflows and secure your data. Log in to Amazon SageMaker Unified Studio, connect to a federated data source in SageMaker Lakehouse, and govern data with column- and tag-based permissions that are enforced when querying federated data sources with Athena. In addition to SageMaker Unified Studio, you can connect to these data sources through the Athena console and API. To help you automate and streamline connector setup, the new user experiences allow you to create and manage connections to data sources with ease. Now, organizations can extract insights from a unified set of data sources while strengthening security posture, wherever your data is stored. The unification and fine-grained access controls on federated sources are available in all AWS Regions where SageMaker Lakehouse is available. To learn more, visit SageMaker Lakehouse documentation.
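A minimal sketch of querying a federated source through Athena's API: the catalog, database, table, and S3 output location below are hypothetical, and the fine-grained Lakehouse permissions are enforced server-side at query time, not in the client.

```python
# Sketch: running an Athena SQL query against a federated data source
# catalog. The request shape follows the Athena StartQueryExecution API;
# the catalog/database/table names and output bucket are placeholders.

def build_federated_query(catalog, database, sql, output_s3):
    """Assemble a StartQueryExecution request for a federated catalog."""
    return {
        "QueryString": sql,
        "QueryExecutionContext": {"Catalog": catalog, "Database": database},
        "ResultConfiguration": {"OutputLocation": output_s3},
    }

request = build_federated_query(
    catalog="ddb_connector",   # hypothetical federated (DynamoDB) catalog
    database="default",
    sql="SELECT order_id, total FROM orders WHERE total > 100",
    output_s3="s3://my-athena-results/",
)
# In a real environment:
#   boto3.client("athena").start_query_execution(**request)
```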

Amazon Q Developer adds operational investigation capability (Preview)

Published Date: 2024-12-03 18:00:00

Amazon Q Developer now helps you accelerate operational investigations across your AWS environment in just a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.

Amazon Q Developer works alongside you throughout your operational troubleshooting journey, from issue detection and triage through remediation. You can initiate an investigation by selecting the Investigate action on any Amazon CloudWatch data widget across the AWS Management Console. You can also configure Amazon Q to automatically investigate when a CloudWatch alarm is triggered. When an investigation starts, Amazon Q Developer sifts through various signals about your AWS environment, including CloudWatch telemetry, AWS CloudTrail logs, deployment information, changes to resource configuration, and AWS Health events.

CloudWatch now provides a dedicated investigation experience where teams can collaborate and add findings, view related signals and anomalies, and review suggestions for potential root-cause hypotheses. This new capability also provides remediation suggestions for common operational issues across your AWS environment by surfacing relevant AWS Systems Manager Automation runbooks, AWS re:Post articles, and documentation. It also integrates with your existing operational workflows, such as Slack via AWS Chatbot.

The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview in the US East (N. Virginia) Region. To learn more, see the getting started and best practice documentation.

Introducing Amazon SageMaker Data and AI Governance

Published Date: 2024-12-03 18:00:00

Today, AWS announces Amazon SageMaker Data and AI Governance, a new capability that simplifies discovery, governance, and collaboration for data and AI across your lakehouse, AI models, and applications. Built on Amazon DataZone, SageMaker Data and AI Governance allows engineers, data scientists, and analysts to securely discover and access approved data and models using semantic search with generative AI–created metadata. This new offering helps organizations consistently define and enforce access policies using a single permission model with fine-grained access controls. With SageMaker Data and AI Governance, you can accelerate data and AI discovery and collaboration at scale. You can enhance data discovery by automatically enriching your data and metadata with business context using generative AI, making it easier for all users to find, understand, and use data. You can share data, AI models, prompts, and other generative AI assets with filtering by table and column names or business glossary terms. SageMaker Data and AI Governance helps establish trust and drives transparency in your data pipelines and AI projects with built-in model monitoring to detect bias and report on how features contribute to your model predictions.

To learn more about how to govern your data and AI assets, visit SageMaker Data and AI Governance.

Announcing Amazon S3 Metadata (Preview) – Easiest and fastest way to manage your metadata

Published Date: 2024-12-03 18:00:00

Amazon S3 Metadata is the easiest and fastest way to help you instantly discover and understand your S3 data with automated, easily-queried metadata that updates in near real-time. This helps you to curate, identify, and use your S3 data for business analytics, real-time inference applications, and more. S3 Metadata supports object metadata, which includes system-defined details like size and the source of the object, and custom metadata, which allows you to use tags to annotate your objects with information like product SKU, transaction ID, or content rating, for example. S3 Metadata is designed to automatically capture metadata from objects as they are uploaded into a bucket, and to make that metadata queryable in a read-only table. As data in your bucket changes, S3 Metadata updates the table within minutes to reflect the latest changes. These metadata tables are stored in S3 Tables, the new S3 storage offering optimized for tabular data. S3 Tables integration with AWS Glue Data Catalog is in preview, allowing you to stream, query, and visualize data—including S3 Metadata tables—using AWS Analytics services such as Amazon Data Firehose, Athena, Redshift, EMR, and QuickSight. Additionally, S3 Metadata integrates with Amazon Bedrock, allowing for the annotation of AI-generated videos with metadata that specifies its AI origin, creation timestamp, and the specific model used for its generation. Amazon S3 Metadata is currently available in preview in the US East (N. Virginia), US East (Ohio), and US West (Oregon) Regions, and coming soon to additional Regions. For pricing details, visit the S3 pricing page. To learn more, visit the product page, documentation, and AWS News Blog.
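As an illustration of what querying a metadata table might look like, the SQL below selects recent large objects; the catalog, namespace, and table names are hypothetical placeholders, and the exact column names should be checked against the S3 Metadata documentation.

```python
# Sketch: SQL for querying an S3 Metadata table (e.g. via Athena, once the
# S3 Tables / AWS Glue Data Catalog integration is enabled). Metadata
# tables are read-only; names and columns below are placeholders.

query = """
SELECT key, size, last_modified_date
FROM "s3tablescatalog"."aws_s3_metadata"."my_bucket_metadata_table"
WHERE size > 1048576                     -- objects larger than 1 MiB
ORDER BY last_modified_date DESC
LIMIT 100
"""

# In a real environment this string would be submitted via
# boto3.client("athena").start_query_execution(QueryString=query, ...)
```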

Amazon Bedrock Guardrails now supports Automated Reasoning checks (Preview)

Published Date: 2024-12-03 18:00:00

With the launch of the Automated Reasoning checks safeguard in Amazon Bedrock Guardrails, AWS becomes the first and only major cloud provider to integrate automated reasoning in our generative AI offerings. Automated Reasoning checks help detect hallucinations and provide a verifiable proof that a large language model (LLM) response is accurate. Automated Reasoning tools are not guessing or predicting accuracy. Instead, they rely on sound mathematical techniques to definitively verify compliance with expert-created Automated Reasoning Policies, consequently improving transparency. Organizations increasingly use LLMs to improve user experiences and reduce operational costs by enabling conversational access to relevant, contextualized information. However, LLMs are prone to hallucinations. Due to the ability of LLMs to generate compelling answers, these hallucinations are often difficult to detect. The possibility of hallucinations, and an inability to explain why they occurred, slows generative AI adoption for use cases where accuracy is critical. With Automated Reasoning checks, domain experts can more easily build specifications called Automated Reasoning Policies that encapsulate their knowledge in fields such as operational workflows and HR policies. Users of Amazon Bedrock Guardrails can validate generated content against an Automated Reasoning Policy to identify inaccuracies and unstated assumptions, and explain why statements are accurate in a verifiable way. For example, you can configure Automated Reasoning checks to validate answers on topics defined in complex HR policies (which can include constraints on employee tenure, location, and performance) and explain why an answer is accurate with supporting evidence. Contact your AWS account team to request access to Automated Reasoning checks in Amazon Bedrock Guardrails in the US West (Oregon) AWS Region. To learn more, visit Amazon Bedrock Guardrails and read the News blog.
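As a hedged sketch, validating model output against a guardrail uses the Bedrock Runtime ApplyGuardrail API; the guardrail identifier and version below are placeholders, and how Automated Reasoning findings surface in the response should be confirmed in the Guardrails documentation.

```python
# Sketch: checking a model-generated answer against a guardrail that has
# an Automated Reasoning Policy configured. The guardrail ID/version are
# hypothetical; the request shape follows the ApplyGuardrail API.

def build_apply_guardrail_request(guardrail_id, guardrail_version, answer):
    """Assemble an ApplyGuardrail request to validate generated output."""
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": "OUTPUT",  # validate model output (vs. "INPUT" for prompts)
        "content": [{"text": {"text": answer}}],
    }

request = build_apply_guardrail_request(
    "gr-1234abcd", "1",
    "Employees with two years of tenure accrue 20 vacation days.",
)
# In a real environment:
#   boto3.client("bedrock-runtime").apply_guardrail(**request)
```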

Introducing latency-optimized inference for foundation models in Amazon Bedrock

Published Date: 2024-12-02 18:00:00

Latency-optimized inference for foundation models in Amazon Bedrock is now available in public preview, delivering faster response times and improved responsiveness for AI applications. Currently, these new inference options support Anthropic's Claude 3.5 Haiku model and Meta's Llama 3.1 405B and 70B models, offering reduced latency compared to standard models without compromising accuracy. As verified by Anthropic, with latency-optimized inference in Amazon Bedrock, Claude 3.5 Haiku runs faster on AWS than anywhere else. Additionally, with latency-optimized inference in Bedrock, Llama 3.1 405B and 70B run faster on AWS than on any other major cloud provider. As more customers move their generative AI applications to production, optimizing the end-user experience becomes crucial, particularly for latency-sensitive applications such as real-time customer service chatbots and interactive coding assistants. Using purpose-built AI chips like AWS Trainium2 and advanced software optimizations in Amazon Bedrock, customers can access more options to optimize their inference for a particular use case. Accessing these capabilities requires no additional setup or model fine-tuning, allowing for immediate enhancement of existing applications with faster response times. Latency-optimized inference is available for Anthropic's Claude 3.5 Haiku and Meta's Llama 3.1 405B and 70B in the US East (Ohio) Region via cross-region inference. To get started, visit the Amazon Bedrock console. For more information about Amazon Bedrock and its capabilities, visit the Amazon Bedrock product page, pricing page, and documentation.
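A minimal sketch of opting in per request: the Bedrock Converse API accepts a performance configuration, with `latency` set to `"optimized"` instead of the default `"standard"`. The model ID below is a placeholder.

```python
# Sketch: a Converse API request that opts in to latency-optimized
# inference via performanceConfig. No model fine-tuning or extra
# setup is required; this is a per-request option.

def build_converse_request(model_id, user_text):
    """Assemble a Converse request with latency-optimized inference."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
        "performanceConfig": {"latency": "optimized"},  # default: "standard"
    }

request = build_converse_request(
    "us.anthropic.claude-3-5-haiku-20241022-v1:0",  # placeholder model ID
    "Summarize this support ticket in one sentence.",
)
# In a real environment:
#   boto3.client("bedrock-runtime", region_name="us-east-2").converse(**request)
```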

VPC Lattice now includes TCP support with VPC Resources

Published Date: 2024-12-02 18:00:00

With the launch of VPC Resources for Amazon VPC Lattice, you can now access all of your application dependencies through a VPC Lattice service network. You're able to connect to your application dependencies hosted in different VPCs, accounts, and on-premises using additional protocols, including TLS, HTTP, HTTPS, and now TCP. This new feature expands upon the existing HTTP-based services support, enabling you to share a wider range of resources across your organization. With VPC Resource support, you can add your TCP resources, such as Amazon RDS databases, custom DNS, or IP endpoints, to a VPC Lattice service network. Now, you can share and connect to all your application dependencies, such as HTTP APIs and TCP databases, across thousands of VPCs, simplifying network management and providing centralized visibility with built-in access controls. VPC Resources are generally available with VPC Lattice in Africa (Cape Town), Asia Pacific (Mumbai), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Ireland), Europe (Paris), Europe (Stockholm), Middle East (Bahrain), South America (Sao Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon). To get started, read the VPC Resources launch blog, architecture blog, and VPC Lattice User Guide. To learn more about VPC Lattice, visit Amazon VPC Lattice Getting Started.
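As a rough sketch, registering a TCP resource (for example, an RDS endpoint) as a VPC Lattice resource configuration might look like the following; the field names approximate the VPC Lattice CreateResourceConfiguration API and are assumptions to verify against the API reference, and the gateway ID and DNS name are placeholders.

```python
# Hypothetical sketch: building a VPC Lattice resource configuration for
# a TCP database endpoint. Field shapes are assumptions -- verify against
# the VPC Lattice API reference before use.

def build_resource_configuration(name, gateway_id, dns_name, port):
    """Assemble a resource configuration for a single DNS-named TCP resource."""
    return {
        "name": name,
        "type": "SINGLE",                         # a single resource endpoint
        "resourceGatewayIdentifier": gateway_id,  # gateway in the owning VPC
        "portRanges": [str(port)],
        "resourceConfigurationDefinition": {
            "dnsResource": {
                "domainName": dns_name,
                "ipAddressType": "IPV4",
            }
        },
    }

config = build_resource_configuration(
    "orders-db",
    "rgw-0123456789abcdef0",                                  # placeholder
    "orders.cluster-abc.us-east-1.rds.amazonaws.com",         # placeholder
    5432,
)
# In a real environment:
#   boto3.client("vpc-lattice").create_resource_configuration(**config)
```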

Deploy GROW with SAP on AWS from AWS Marketplace

Published Date: 2024-12-02 18:00:00

GROW with SAP on AWS is now available for subscription from AWS Marketplace. As a complete offering of solutions, best practices, adoption acceleration services, community and learning, GROW with SAP helps any size organization adopt cloud enterprise resource planning (ERP) with speed, predictability, and continuous innovation. GROW with SAP on AWS can be implemented in months instead of years with traditional on-premises ERP implementations.

By implementing GROW with SAP on AWS, you can simplify everyday work, grow your business, and secure your success. At the core of GROW with SAP is SAP S/4HANA Cloud, a full-featured SaaS ERP suite built on the learnings of SAP's 50+ years of industry best practices. GROW with SAP allows your organization to gain end-to-end process visibility and control with integrated systems across HR, procurement, sales, finance, supply chain, and manufacturing. It also includes SAP Business AI-powered processes leveraging AWS to provide data-driven insights and recommendations. Customers can also innovate with generative AI using their SAP data through Amazon Bedrock models in the SAP generative AI hub. GROW with SAP on AWS takes advantage of AWS Graviton processors, which use up to 60% less energy than comparable cloud instances for the same performance.

GROW with SAP on AWS is initially available in the US-East Region.

To subscribe to GROW with SAP on AWS, visit the AWS Marketplace listing. Or, to learn more, visit the GROW with SAP on AWS detail page.

Amazon EC2 P5en instances, optimized for generative AI and HPC, are generally available

Published Date: 2024-12-02 18:00:00

Today, AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) P5en instances, powered by the latest NVIDIA H200 Tensor Core GPUs. These instances deliver the highest performance in Amazon EC2 for deep learning and high performance computing (HPC) applications. You can use Amazon EC2 P5en instances for training and deploying increasingly complex large language models (LLMs) and diffusion models powering the most demanding generative AI applications. You can also use P5en instances to deploy demanding HPC applications at scale in pharmaceutical discovery, seismic analysis, weather forecasting, and financial modeling. P5en instances feature up to 8 H200 GPUs, which have 1.7x the GPU memory size and 1.5x the GPU memory bandwidth of the H100 GPUs featured in P5 instances. P5en instances pair the H200 GPUs with high-performance custom 4th Generation Intel Xeon Scalable processors, enabling PCIe Gen5 between CPU and GPU, which provides up to 4x the CPU-to-GPU bandwidth and boosts AI training and inference performance. P5en, with up to 3200 Gbps of third-generation EFA networking using Nitro v5, shows up to 35% improvement in latency compared to P5, which uses the previous generation of EFA and Nitro. This helps improve collective communications performance for distributed training workloads such as deep learning, generative AI, real-time data processing, and high performance computing (HPC) applications. To address customer needs for large scale at low latency, P5en instances are deployed in Amazon EC2 UltraClusters, and provide market-leading scale-out capabilities for distributed training and tightly coupled HPC workloads. P5en instances are now available in the US East (Ohio), US West (Oregon), and Asia Pacific (Tokyo) AWS Regions, and in the US East (Atlanta) Local Zone us-east-1-atl-2a, in the p5en.48xlarge size. To learn more about P5en instances, see Amazon EC2 P5en Instances.

