Customer-obsessed science
- June 13, 2024: The fight against hallucination in retrieval-augmented-generation models starts with a method for accurately assessing it.
- June 13, 2024: As in other areas of AI, generative models and foundation models — such as vision-language models — are a hot topic.
- June 07, 2024: Although work involving large language models predominates, classical and more-general techniques remain well represented.
- July 14 - 18, 2024
- July 21 - 27, 2024
- August 11 - 16, 2024
- February 15, 2024: In addition to its practical implications, recent work on “meaning representations” could shed light on some old philosophical questions.
- April 16, 2024: First model to work across a wide range of products uses a second U-Net encoder to capture fine-grained product details.
- March 18, 2024: Tokenizing time series data and treating it like a language enables a model whose zero-shot performance matches or exceeds that of purpose-built models.
- February 20, 2024: Generative AI supports the creation, at scale, of complex, realistic driving scenarios that can be directed to specific locations and environments.
- January 17, 2024: Representing facts using knowledge triplets rather than natural language enables finer-grained judgments.
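The January 17 item above hinges on decomposing a claim into (subject, relation, object) triplets so that each fact can be checked on its own. The following toy sketch illustrates why that is finer-grained than judging a whole sentence; the function, the relation names, and the tiny knowledge base are all invented for illustration and are not the published method.

```python
# Toy illustration of triplet-level fact checking (names and knowledge
# base are invented for this example, not from the paper).
def triplet_support(claim_triplets, kb_triplets):
    """Return the fraction of a claim's triplets found in a knowledge
    base, plus the specific unsupported facts."""
    kb = set(kb_triplets)
    unsupported = [t for t in claim_triplets if t not in kb]
    score = 1 - len(unsupported) / len(claim_triplets)
    return score, unsupported

kb = {("Marie Curie", "born_in", "Warsaw"),
      ("Marie Curie", "won", "Nobel Prize in Physics")}
claim = [("Marie Curie", "born_in", "Warsaw"),
         ("Marie Curie", "won", "Nobel Prize in Literature")]

score, missing = triplet_support(claim, kb)
# score == 0.5: only the incorrect prize triplet is flagged, rather than
# rejecting the entire two-fact sentence as false.
```

A sentence-level judgment would mark the whole claim wrong; the triplet decomposition localizes the error to one fact while crediting the other.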
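The March 18 item above describes tokenizing a time series and treating it like a language. A minimal sketch of what such tokenization could look like is below: scale the series, then quantize it into a discrete vocabulary that a language-model-style forecaster can consume. The scaling rule, bin count, and function names are assumptions for illustration, not the published model's actual design.

```python
import numpy as np

def tokenize_series(values, n_bins=1024, clip=4.0):
    """Scale a series by its mean absolute value, then quantize into
    uniform bins so each observation becomes a discrete token id."""
    values = np.asarray(values, dtype=float)
    scale = np.mean(np.abs(values)) or 1.0   # avoid division by zero
    scaled = np.clip(values / scale, -clip, clip)
    # Map [-clip, clip] onto integer token ids 0..n_bins-1.
    edges = np.linspace(-clip, clip, n_bins + 1)
    tokens = np.clip(np.digitize(scaled, edges) - 1, 0, n_bins - 1)
    return tokens, scale

def detokenize(tokens, scale, n_bins=1024, clip=4.0):
    """Invert the mapping by substituting each bin's center value."""
    edges = np.linspace(-clip, clip, n_bins + 1)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[np.asarray(tokens)] * scale

tokens, s = tokenize_series([10.0, 12.0, 9.0, 15.0])
approx = detokenize(tokens, s)   # close to the original values
```

Once a series is a token sequence, next-token prediction over the bin vocabulary plays the role of forecasting, which is what lets a single pretrained model be applied zero-shot to unseen series.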
- 2024: In this position paper, we argue that human evaluation of generative large language models (LLMs) should be a multidisciplinary undertaking that draws upon insights from disciplines such as user experience research and human behavioral psychology to ensure that the experimental design and results are reliable. The conclusions from these evaluations must therefore consider factors such as usability and aesthetics…
- ACL Findings 2024: We show that content on the web is often translated into many languages, and the low quality of these multi-way translations indicates they were likely created using machine translation (MT). Multi-way parallel, machine-generated content not only dominates the translations in lower-resource languages; it also constitutes a large fraction of the total web content in those languages. We also find evidence…
- Transactions on Machine Learning Research, 2024: Model miscalibration has been frequently identified in modern deep neural networks. Recent work aims to improve model calibration directly through a differentiable calibration proxy. However, the calibration produced is often biased due to the binning mechanism. In this work, we propose to learn better-calibrated models via meta-regularization, which has two components: (1) a gamma network (γ-Net), a meta…
- Pixel-level mask annotation costs are a major bottleneck in training deep neural networks for instance segmentation. Recent promptable foundation models like the Segment Anything Model (SAM) and GroundedDINO (GDino) have shown impressive zero-shot performance in segmentation and object detection benchmarks. While these models are not capable of performing inference without prompts, they are ideal for omnisupervised…
- Interspeech 2024: End-to-end (E2E) automatic speech recognition (ASR) systems have often exploited pre-trained hidden Markov model (HMM) systems for word timing estimation (WTE), due to their inability to predict word boundaries. However, training an HMM is difficult for low-resource languages due to the lack of phonetic transcriptions, leading to a high demand for HMM-free WTE methods, particularly for multilingual ASR systems…
Resources
- We look for talent from around the world: applied scientists, data scientists, economists, research scientists, scholars, academics, PhDs, and interns.
- We hire world-class academics to work on large-scale technical challenges while they continue to teach and conduct research at their universities. Learn more about each program and how to apply below.
- Supporting research at academic institutions and non-profit organizations in areas that align with our mission to advance customer-obsessed science.