In this month's newsletter we cover Amazon's open-source model for time series forecasting, a new knowledge graph framework that uses LLMs to discern commonsense relationships, research collaborations with UW and Columbia, new features for Amazon Bedrock, and more. Be sure to check out our latest career opportunities, which include internships and programs for academics.
- Adapting language model architectures for time series forecasting: Amazon researchers released the training code for Chronos, a family of pretrained, open-source models for time series forecasting, which are built on a language model architecture and trained with billions of tokenized time series observations to provide accurate zero-shot forecasts for new time series.
- Evaluating the helpfulness of AI-enhanced catalogue data: The Amazon Catalog team uses generative AI to make product information more useful and A/B testing to evaluate enriched data. Two team members explain how causal random forests and Bayesian structural time series help them extrapolate from sparse A/B data.
- Building commonsense knowledge graphs to aid product recommendation: To improve Amazon’s recommendation engine, Amazon researchers are building a knowledge graph that encodes commonsense relationships between products and queries. At SIGMOD/PODS 2024, they'll present COSMO, a framework that uses LLMs to extract those relationships.
- More reliable nearest-neighbor search with deep metric learning: Deep metric learning is a powerful tool, but it yields inconsistent distances between data embeddings, which can hamper nearest-neighbor search. At this year's ICLR, Amazon researchers showed how to make distances more consistent, improving model performance.
- Generalizing diffusion modeling to multimodal, multitask settings: At this year's ICLR, Amazon scientists showed how to generalize diffusion models to multimodal, multitask settings. The keys are a loss term that induces the model to recover modality in the reverse process and a way to aggregate inputs of different modalities.
- Adapting neural radiance fields (NeRFs) to dynamic scenes: At AAAI, Amazon scientists introduced a novel approach that significantly advances our ability to capture and model scenes with complex dynamics. Their work not only addresses previous limitations but also opens doors to new applications ranging from virtual reality to digital preservation.
- How Project P.I. helps Amazon remove imperfect products: Find out how Amazon is using generative AI and computer vision to synthesize evidence from images captured during the fulfillment process with written customer feedback, uncovering both defects and, wherever possible, their causes so issues can be addressed at the root before a product reaches the customer.
- New circuit boards can be repeatedly recycled: With the support of an Amazon Research Award, a team of researchers led by the University of Washington has developed a new printed circuit board (PCB) that performs on par with traditional materials and can be recycled repeatedly with negligible material loss. Learn more about the team's research, which was published in Nature Sustainability.
- At USC + Amazon Center Symposium, speakers highlight ties between university and industry: Amazon and the University of Southern California hosted their third annual symposium for the USC-Amazon Center on Trustworthy AI, a research collaboration between USC faculty and students and Amazon scientists and engineers. Launched in 2021, the center focuses on developing new approaches to machine learning privacy, security, and trustworthiness. The event featured presentations on the center's recently announced research projects and a poster competition.
- This robot predicts when you're going to smile – and smiles back: In a new study published in Science Robotics, researchers introduce a robot that can anticipate facial expressions and execute them simultaneously with a human; it has even learned to predict a forthcoming smile about 840 milliseconds before the person smiles and to co-express it in sync. The work was supported by the National Science Foundation (NSF) and by Amazon through the Center of AI Technology (CAIT) at Columbia University.
- Penn Engineering Ph.D. students receive funding from Amazon to advance trustworthy AI: To support the responsible development and regulation of AI tools and the next generation of engineers building them, Amazon Web Services (AWS) is funding 10 Ph.D. student research projects at Penn Engineering that focus on advancing safe and responsible AI.
- 98 Amazon Research Awards recipients announced: The recipients, representing 51 universities in 15 countries, will have access to Amazon datasets, AWS AI/ML services and tools, and more. The announcement includes awards funded under six calls for proposals during the fall 2023 cycle: AI for Information Security, Automated Reasoning, AWS AI, AWS Cryptography and Privacy, AWS Database Services, and Sustainability.
- Registration opens for Amazon's ML Summer School in India: The fourth edition of Amazon's ML Summer School is open to all eligible students from recognized institutes in India who are expected to graduate in 2025 or 2026. The program offers an intensive course on key ML topics and the opportunity to learn from and interact with Amazon scientists, to help students prepare for a career in machine learning.
- Significant new capabilities make it easier to use Amazon Bedrock to build and scale generative AI applications: "With Amazon Bedrock, we’re focused on the key areas that customers need to build production-ready, enterprise-grade generative AI applications at the right cost and speed. Today I’m excited to share new features that we’re announcing across the areas of model choice, tools for building generative AI applications, and privacy and security." Swami Sivasubramanian, VP of Data and Machine Learning at AWS.
- How Amazon is harnessing solar energy, batteries, and AI to help decarbonize the grid: At Baldy Mesa, a solar farm enabled by Amazon, and developed, owned, and operated by The AES Corporation, machine learning models powered by AWS are helping predict when and how the project’s battery unit should charge and discharge energy back to the grid.
- A look at Fire TV's decade of innovation: From voice search to AI-powered entertainment: From the very first Fire TV device to the new AI-powered Fire TV Search experience, learn how Fire TV continues to innovate for customers after 10 years.
- Amazon and Meta join the Frontier Model Forum to promote AI safety: "Building AI that our customers can trust is one of the most important scientific challenges of our time. I’m proud to share that Amazon has joined the Frontier Model Forum to work with other industry leaders and the government to advance AI safely and securely." Rohit Prasad, SVP and Head Scientist, Artificial General Intelligence at Amazon.
- Navigating the scientist-to-manager transition: Join Amazon researchers on June 11 for an online panel about managing science teams, featuring Amazon science managers Federica Cerina, Martin Gross, and Mauro Piacentini, who will share their personal journeys, key lessons learned, and strategies for successfully bridging the gap from individual contributor to people leader.
- Amazon wins Best Student Paper award: At this year's IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Amazon researchers won the Best Student Paper award for their publication, 'Significant ASR error detection for conversational voice assistants', which proposes a system that can determine, to a high degree of accuracy, whether the semantics of a predicted and reference transcript are significantly different.
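The Chronos item above rests on one idea: a numeric series can be converted into a token sequence that a language model can consume. As a minimal sketch of that general recipe (not Amazon's exact implementation; the bin count and clipping limit here are assumptions for illustration), one can mean-scale a series and quantize it into a fixed vocabulary of bins:

```python
import numpy as np

NUM_BINS = 4096   # assumed vocabulary size for illustration
LIMIT = 15.0      # assumed clipping range for scaled values

def tokenize_series(values):
    """Mean-scale a series, then quantize values into discrete token ids."""
    scale = np.mean(np.abs(values))
    scale = scale if scale > 0 else 1.0
    scaled = values / scale
    # uniform bin edges on [-LIMIT, LIMIT]; digitize maps each value to a bin id
    edges = np.linspace(-LIMIT, LIMIT, NUM_BINS - 1)
    tokens = np.digitize(scaled, edges)
    return tokens, scale

def detokenize(tokens, scale):
    """Map token ids back to approximate real values via bin centers."""
    centers = np.linspace(-LIMIT, LIMIT, NUM_BINS)
    return centers[np.clip(tokens, 0, NUM_BINS - 1)] * scale

series = np.array([10.0, 12.0, 9.0, 11.0])
tokens, scale = tokenize_series(series)
approx = detokenize(tokens, scale)  # close to the original, up to bin width
```

Once a series is tokens, next-token prediction over those ids is ordinary language modeling; sampling continuations and detokenizing them yields probabilistic forecasts.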
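The catalogue-evaluation item starts from standard A/B lift estimation before any causal-forest or structural-time-series extrapolation. Purely as an illustrative sketch (this is generic bootstrap methodology, not the team's method, and the data here is synthetic), a relative lift with a confidence interval might be computed like this:

```python
import numpy as np

def ab_lift(control, treatment, n_boot=1000, seed=0):
    """Estimate relative lift of treatment over control with a bootstrap CI."""
    rng = np.random.default_rng(seed)
    lift = treatment.mean() / control.mean() - 1.0
    boots = []
    for _ in range(n_boot):
        # resample each arm with replacement and recompute the lift
        c = rng.choice(control, size=control.size)
        t = rng.choice(treatment, size=treatment.size)
        boots.append(t.mean() / c.mean() - 1.0)
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return lift, (lo, hi)

# synthetic binary conversion outcomes for the two arms
control = np.random.default_rng(1).binomial(1, 0.10, 5000).astype(float)
treatment = np.random.default_rng(2).binomial(1, 0.11, 5000).astype(float)
lift, ci = ab_lift(control, treatment)
```

The sparsity problem the bullet mentions is visible here: with small samples the interval gets wide, which is what motivates models that borrow strength beyond the raw A/B split.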
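The metric-learning item concerns nearest-neighbor search over learned embeddings. A common baseline, separate from the paper's specific consistency fix, is to unit-normalize embeddings and rank by cosine distance; the sketch below uses random vectors standing in for learned embeddings:

```python
import numpy as np

def knn(query, embeddings, k=3):
    """Return indices of the k nearest embeddings by cosine distance."""
    q = query / np.linalg.norm(query)
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    dists = 1.0 - e @ q  # cosine distance: 0 = identical direction
    return np.argsort(dists)[:k]

rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))    # stand-in for learned embeddings
idx = knn(emb[7], emb, k=3)         # a point's nearest neighbor is itself
```

The ICLR work's point is that without extra care, distances like these are not comparable across queries; making them consistent improves downstream retrieval.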
Learn more about Amazon's presence at the following conferences:
© 1996-2024 Amazon.com, Inc. or its affiliates | Privacy | Conditions of Use