---
license: cdla-permissive-2.0
task_categories:
- text-generation
- text2text-generation
- text-retrieval
language:
- en
tags:
- query-autocomplete
- amazon
- large-scale
- ecommerce
- search
- session-based
pretty_name: AmazonQAC
size_categories:
- 100M<n<1B
configs:
- config_name: default
  data_files:
  - split: train
    path: train/*.parquet
  - split: test
    path: test/*.parquet
---
# AmazonQAC: A Large-Scale, Naturalistic Query Autocomplete Dataset

- Train Dataset Size: 395 million samples
- Test Dataset Size: 20k samples
- Source: Amazon Search Logs
- File Format: Parquet
- Compression: Snappy
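Both splits are stored as Snappy-compressed Parquet files, so they can be loaded through the Hugging Face `datasets` library or read directly with `pandas`/`pyarrow`. A minimal sketch follows; the repository ID used here is an assumption, so substitute the actual dataset path if it differs.

```python
from datasets import load_dataset

# Assumed repository ID; replace with the actual dataset path if different.
ds = load_dataset("amazon/AmazonQAC")  # default config with "train" and "test" splits
print(ds["train"][0])                  # one row: query_id, session_id, prefixes, ...

# Alternatively, read the Parquet files directly after downloading the repository:
# import pandas as pd
# test_df = pd.read_parquet("test/")   # directory of Snappy-compressed Parquet files
```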
## Dataset Summary
AmazonQAC is a large-scale dataset designed for Query Autocomplete (QAC) tasks, sourced from real-world Amazon Search logs. It provides anonymized sequences of user-typed prefixes leading to final search terms, along with rich session metadata such as timestamps and session IDs. This dataset supports research on context-aware query completion by offering realistic, large-scale, and natural user behavior data.
QAC is a widely used feature in search engines, designed to predict users' full search queries as they type. Despite its importance, research progress has been limited by the lack of realistic datasets. AmazonQAC aims to address this gap by providing a comprehensive dataset to spur advancements in QAC systems. It also includes a realistic test set for benchmarking different QAC approaches; each test row contains the user's past search terms, a typed prefix, and the final search term, mimicking the inputs to a real QAC service.
## Key Features

### Train

- 395M samples: Each sample includes the user's search term and the sequence of prefixes they typed. Collected from US logs between 2023-09-01 and 2023-09-30.
- Session Metadata: Includes session IDs and timestamps for context-aware modeling.
- Naturalistic Data: Real user interactions are captured, including non-linear typing patterns and partial prefix matches.
- Popularity Information: The popularity of each search term is included as metadata.
### Test

- 20k samples: Each sample includes a prefix and the user's final search term. Collected from US logs between 2023-10-01 and 2023-10-14 (after the train data time period).
- Session Metadata: Each sample also contains an array of the user's past search terms as input for context-aware QAC systems.
- Naturalistic Data: Each row is a randomly sampled prefix/search term/context triple from the search logs (with no sequence of previously typed prefixes), mimicking the asynchronous nature of a real-world QAC service.
## Dataset Structure

### Train

Each data entry consists of:

- `query_id` (long): A unique identifier for each row/user search.
- `session_id` (string): The user session ID.
- `prefixes` (array<string>): The sequence of prefixes typed by the user, in order.
- `first_prefix_typed_time` (string, `YYYY-MM-DD HH:MM:SS.sss`): The timestamp when the first prefix was typed.
- `final_search_term` (string): The final search term searched for by the user.
- `search_time` (string, `YYYY-MM-DD HH:MM:SS`): The timestamp of the final search.
- `popularity` (long): The number of occurrences of the search term before filtering.
### Test

Each data entry consists of:

- `query_id` (long): A unique identifier for each row/user search.
- `session_id` (string): The user session ID.
- `past_search_terms` (array<array<string>>): The user's past search terms, in order, each paired with its timestamp.
- `prefix` (string): The prefix typed by the user.
- `prefix_typed_time` (string, `YYYY-MM-DD HH:MM:SS.sss`): The timestamp when the prefix was typed.
- `final_search_term` (string): The final search term searched for by the user.
- `search_time` (string, `YYYY-MM-DD HH:MM:SS`): The timestamp of the final search.
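The timestamp fields are stored as strings, and the example records below mix two forms (an ISO-8601 variant with milliseconds and a trailing `Z`, and a plain `YYYY-MM-DD HH:MM:SS` form), so downstream code typically normalizes them before ordering sessions or measuring typing latency. A small illustrative sketch, assuming naive timestamps can be treated as UTC (the dataset does not state this explicitly):

```python
from datetime import datetime, timezone

def parse_ts(ts: str) -> datetime:
    """Parse AmazonQAC string timestamps.

    Handles both the plain "YYYY-MM-DD HH:MM:SS" form and the ISO-8601
    form with a trailing "Z" (e.g. "2023-10-11T16:42:30.256Z").
    Naive timestamps are assumed to be UTC.
    """
    dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return dt if dt.tzinfo else dt.replace(tzinfo=timezone.utc)

# Example: time between typing the prefix and issuing the final search.
row = {
    "prefix_typed_time": "2023-10-11T16:42:30.256Z",
    "search_time": "2023-10-11 16:42:34",
}
latency = parse_ts(row["search_time"]) - parse_ts(row["prefix_typed_time"])
print(latency.total_seconds())  # ~3.7 seconds
```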
## Example

### Train
{
"query_id": "12",
"session_id": "354",
"prefixes": ["s", "si", "sin", "sink", "sink r", "sink ra", "sink rac", "sink rack"],
"first_prefix_typed_time": "2023-09-04T20:46:14.293Z",
"final_search_term": "sink rack for bottom of sink",
"search_time": "2023-09-04T20:46:27",
"popularity": 125
}
### Test
{
"query_id": "23",
"session_id": "783",
"past_search_terms": [["transformers rise of the beast toys", "2023-10-07 13:03:54"], ["ultra magnus", "2023-10-11 11:54:44"]],
"prefix": "transf",
"prefix_typed_time": "2023-10-11T16:42:30.256Z",
"final_search_term": "transformers legacy",
"search_time": "2023-10-11 16:42:34"
}
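A train row records the full sequence of typed prefixes, so one common (though not prescribed) way to use it is to expand each row into supervised (prefix, final_search_term) pairs for training a QAC model. A minimal sketch, using the train example above:

```python
def make_pairs(row: dict) -> list[tuple[str, str]]:
    """Expand one train row into (prefix -> final_search_term) pairs.

    Illustrative only: the dataset simply records the typed prefixes and
    does not prescribe how training examples are constructed.
    """
    target = row["final_search_term"]
    return [(prefix, target) for prefix in row["prefixes"]]

train_row = {
    "prefixes": ["s", "si", "sin", "sink", "sink r", "sink ra", "sink rac", "sink rack"],
    "final_search_term": "sink rack for bottom of sink",
}
pairs = make_pairs(train_row)
print(pairs[0])   # ("s", "sink rack for bottom of sink")
print(pairs[-1])  # ("sink rack", "sink rack for bottom of sink")
```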
## Dataset Statistics

| Statistic | Train Set | Test Set |
|---|---|---|
| Total Prefixes | 4.28B | 20K |
| Unique Prefixes | 384M | 15.1K |
| Unique Search Terms | 40M | 16.7K |
| Unique Prefix/Search Term Pairs | 1.1B | 19.9K |
| Average Prefix Length | 9.5 characters | 9.2 characters |
| Average Search Term Length | 20.0 characters | 20.3 characters |
| Searches per Session | 7.3 | 10.3 |
| Train/Test Overlap: Unique Prefixes | 13.4k | 88% |
| Train/Test Overlap: Unique Search Terms | 12.3k | 74% |
| Train/Test Overlap: Unique Prefix/Search Term Pairs | 11.7k | 59% |

For the overlap rows, the first value is the number of unique test-set items that also appear in the train set, and the percentage is the corresponding fraction of the test set (e.g., 13.4k of the 15.1K unique test prefixes, about 88%).
## Evaluation Metrics

The dataset is evaluated using the following core metrics:
- Success@10: Whether the correct search term appears among the 10 suggestions a QAC system provides.
- Reciprocal Rank@10: 1/rank of the correct search term among the 10 suggestions, or 0 if it is not present.

The mean of each metric is computed across the test dataset.
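A minimal sketch of these two metrics is shown below; `suggest(prefix, past_search_terms)` is a hypothetical stand-in for whatever QAC system is being benchmarked and is not part of the dataset.

```python
def success_at_10(suggestions: list[str], target: str) -> float:
    """1.0 if the target search term is among the (up to 10) suggestions, else 0.0."""
    return float(target in suggestions[:10])

def reciprocal_rank_at_10(suggestions: list[str], target: str) -> float:
    """1/rank of the target within the top 10 suggestions, or 0.0 if absent."""
    for rank, suggestion in enumerate(suggestions[:10], start=1):
        if suggestion == target:
            return 1.0 / rank
    return 0.0

def evaluate(test_rows, suggest):
    """Average both metrics over the test split; `suggest` is a hypothetical QAC system."""
    s, rr = [], []
    for row in test_rows:
        suggestions = suggest(row["prefix"], row["past_search_terms"])
        s.append(success_at_10(suggestions, row["final_search_term"]))
        rr.append(reciprocal_rank_at_10(suggestions, row["final_search_term"]))
    return sum(s) / len(s), sum(rr) / len(rr)
```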
## Ethical Considerations

All data has been anonymized, and personally identifiable information (PII) has been removed using regex filters and an LLM-based filter. The dataset is also restricted to search terms that appeared at least 4 times in 4 different sessions, to help ensure they are not user-specific.
The dataset is derived from U.S. Amazon search logs, so it reflects a specific cultural and linguistic context, which may not generalize to all search environments.