---
language:
- en
size_categories:
- 100K<n<1M
license: cc-by-nc-2.0
---
|
|
|
|
|
# MTF 2025 VLM Dishcovery Challenge Dataset - Web Split
|
|
|
## Overview
|
|
|
This dataset is part of the training data for the **CVPR Workshop Metafood 2025 (MTF 2025) Dishcovery VLM Challenge**. It consists of image-text pairs where the images are sourced from the web.
|
|
|
The dataset has been carefully curated using the **[Precision at Scale: Domain-Specific Datasets On-Demand](https://arxiv.org/abs/2407.03463)** method, ensuring high relevance and quality for domain-specific tasks.
|
|
|
**Associated Hugging Face Repository:** `jesusmolrdv/MTF25-VLM-Challenge-Dataset-Web`
|
|
|
## Dataset Description
|
|
|
* **Source:** Web-scraped images.
* **Format:** The dataset hosted on Hugging Face contains a table (similar to a Pandas DataFrame) with two columns (see the snippet after this list):
  * `url`: The URL of the original food image.
  * `caption`: A textual description associated with the image.
* **Curation:** The selection of image-text pairs was performed using the methodology described in the "Precision at Scale" paper (see the Citation section).
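
For a quick look at these two columns, the split can be streamed directly from the Hub without downloading the full table. A minimal sketch using the `datasets` library (the printed fields follow the schema above):

```python
from datasets import load_dataset

# Stream the metadata so the full table is not downloaded up front
ds = load_dataset("jesusmolrdv/MTF25-VLM-Challenge-Dataset-Web", split="train", streaming=True)

sample = next(iter(ds))
print(sample["url"])      # URL of the original food image
print(sample["caption"])  # textual description associated with the image
```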
|
|
|
## How to Use / Download
|
|
|
Since this dataset split contains URLs to images rather than the images themselves, you need to download the images using a tool like [`img2dataset`](https://github.com/rom1504/img2dataset/).
|
|
|
1. **Install dependencies:**

```bash
pip install img2dataset datasets pyarrow pandas
```
|
|
|
2. **Prepare the URL list:**

First, download the dataset metadata (which contains the URLs and captions) from Hugging Face and save it as a local file (e.g., in Parquet format). Run this short Python script:
|
|
|
```python
from datasets import load_dataset

hf_dataset_name = "jesusmolrdv/MTF25-VLM-Challenge-Dataset-Web"
metadata_output_file = "mtf2025_web_metadata.parquet"  # Output file for img2dataset

print(f"Loading dataset metadata: {hf_dataset_name}")
# Ensure you load the correct split; the default is usually 'train'
dataset = load_dataset(hf_dataset_name, split="train")

print(f"Saving metadata to {metadata_output_file}...")
# Save in Parquet format, which img2dataset can read efficiently
dataset.to_parquet(metadata_output_file)

print("Metadata file saved successfully.")
```
|
This script will create a file named `mtf2025_web_metadata.parquet` in the directory where you run it. This file contains the `url` and `caption` columns needed by `img2dataset`.
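
If you want to sanity-check the metadata before launching the download, a quick inspection with pandas is enough (a minimal sketch; the file name matches the script above):

```python
import pandas as pd

# Verify the Parquet file has the columns img2dataset expects
df = pd.read_parquet("mtf2025_web_metadata.parquet")
print(df.columns.tolist())  # expected: ['url', 'caption']
print(f"{len(df)} rows")
print(df.head(3))
```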
|
|
|
3. **Download images using `img2dataset` CLI:**

Now, use the `img2dataset` command in your terminal. Adjust parameters like `output_folder`, `image_size`, and `processes_count` as needed.
|
|
|
```bash
img2dataset \
  --url_list mtf2025_web_metadata.parquet \
  --input_format "parquet" \
  --url_col "url" \
  --caption_col "caption" \
  --output_format webdataset \
  --output_folder ./mtf2025_web_images \
  --processes_count 16 \
  --thread_count 64 \
  --image_size 512 \
  --resize_mode keep_ratio \
  --enable_wandb False
```
|
|
|
> *Note: Downloading large datasets can take significant time and bandwidth. Some URLs might become inactive over time.*
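
Once the shards are on disk, they can be iterated with the [`webdataset`](https://github.com/webdataset/webdataset) library. The sketch below is a minimal example; the shard pattern is an assumption based on `img2dataset`'s default numbered `.tar` output, so adjust the brace range to the shards you actually downloaded:

```python
import webdataset as wds

# Assumed shard layout: img2dataset writes numbered .tar files
# into --output_folder; adjust the range to match your download.
shards = "./mtf2025_web_images/{00000..00009}.tar"

dataset = (
    wds.WebDataset(shards)
    .decode("pil")                 # decode images to PIL objects
    .to_tuple("jpg;png", "txt")    # yield (image, caption) pairs
)

for image, caption in dataset:
    print(image.size, caption[:80])
    break
```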
|
|
|
## Citation
|
|
|
If you use this dataset in your research or challenge participation, please cite the following paper describing the curation method:
|
|
|
```bibtex
@misc{rodríguezdevera2024precisionscaledomainspecificdatasets,
      title={Precision at Scale: Domain-Specific Datasets On-Demand},
      author={Jesús M Rodríguez-de-Vera and Imanol G Estepa and Ignacio Sarasúa and Bhalaji Nagarajan and Petia Radeva},
      year={2024},
      eprint={2407.03463},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2407.03463},
}
```
|
|
|
## Acknowledgements
|
|
|
The authors gratefully acknowledge the computer resources at Mare Nostrum 5 and the technical support provided by BSC (IM-2024-2-0022, IM-2023-3-0019).
We acknowledge the EuroHPC Joint Undertaking for awarding us access to Mare Nostrum 5 at BSC, Spain.