---
license: apache-2.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: descriptive_q
    dtype: string
  - name: descriptive_a
    dtype: string
  - name: reasoning_q
    dtype: string
  - name: reasoning_a
    dtype: string
  splits:
  - name: test
    num_bytes: 277763523.0
    num_examples: 50
  download_size: 277777671
  dataset_size: 277763523.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---
## Introduction
NutritionQA is a novel benchmark for understanding photos of nutrition labels, with practical applications like aiding users with visual impairments.
NutritionQA contains 50 photos of nutrition labels; each photo is paired with a descriptive question and a reasoning question (which requires multi-hop reasoning).
The figure below shows that open VLMs perform poorly on NutritionQA, even after training on millions of images. Our [code-guided synthetic data generation system](https://github.com/allenai/pixmo-docs) can create targeted data (synthetic nutrition labels) for fine-tuning VLMs, achieving competitive performance with significantly less data.
## Evaluation
To compute the metrics, please use the following script:
```python
import os

def score(question: str, target: str, prediction: str) -> bool:
    """Judge a prediction against the ground-truth answer with an LLM judge."""
    prompt = f"""You are tasked with evaluating a model's prediction for a question-answering task. Your goal is to determine if the model's prediction is correct based on its meaning, not necessarily exact wording.
Here is the question that was asked:
{question}
The ground-truth answer to this question is:
{target}
The model's prediction for this question is:
{prediction}
To evaluate the model's prediction, follow these steps:
1. Carefully read the question, correct answer, and model's prediction.
2. Compare the meaning of the model's prediction to the correct answer, not just the exact wording.
3. Consider if the model's prediction accurately answers the question, even if it uses different phrasing or includes additional information.
4. Determine if any discrepancies between the prediction and correct answer are significant enough to consider the prediction incorrect.
5. For numerical answers, consider if the prediction is within a reasonable range (5%) of the correct answer.
Please answer with yes or no to indicate if the model's prediction is correct based on the question and correct answer provided.
"""
    # `get_chat_response` is a thin wrapper around the OpenAI chat completions
    # API that returns the assistant message as a string.
    response = get_chat_response(prompt=prompt,
                                 api_key=os.getenv("OPENAI_API_KEY"),
                                 model="gpt-4o-mini-2024-07-18")
    response = response.lower().strip()
    # Anything other than an explicit "yes" counts as incorrect.
    return "yes" in response
```
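To report a benchmark-level number, the per-example judgments can be aggregated into an accuracy. The sketch below is a hypothetical harness, not part of the official release: `judge` stands in for the `score` function above, and `within_tolerance` is an illustrative deterministic check mirroring step 5 of the rubric (a numerical prediction within 5% of the target).

```python
from typing import Callable, Dict, List

def within_tolerance(pred: float, target: float, tol: float = 0.05) -> bool:
    """Deterministic numeric check: prediction within `tol` (5%) of the target."""
    if target == 0:
        return pred == 0
    return abs(pred - target) / abs(target) <= tol

def accuracy(examples: List[Dict[str, str]],
             judge: Callable[[str, str, str], bool]) -> float:
    """Fraction of examples whose prediction the judge accepts."""
    correct = sum(judge(ex["question"], ex["target"], ex["prediction"])
                  for ex in examples)
    return correct / len(examples)
```

With predictions collected per split, something like `accuracy(descriptive_examples, score)` and `accuracy(reasoning_examples, score)` reports the two question types separately.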
## Citation
Please cite our papers if you use this dataset in your work:
```
@article{yang2025scaling,
title={Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation},
author={Yue Yang and Ajay Patel and Matt Deitke and Tanmay Gupta and Luca Weihs and Andrew Head and Mark Yatskar and Chris Callison-Burch and Ranjay Krishna and Aniruddha Kembhavi and Christopher Clark},
journal={arXiv preprint arXiv:2502.14846},
year={2025}
}
```
```
@article{deitke2024molmo,
title={Molmo and PixMo: Open Weights and Open Data for State-of-the-Art Multimodal Models},
author={Deitke, Matt and Clark, Christopher and Lee, Sangho and Tripathi, Rohun and Yang, Yue and Park, Jae Sung and Salehi, Mohammadreza and Muennighoff, Niklas and Lo, Kyle and Soldaini, Luca and others},
journal={arXiv preprint arXiv:2409.17146},
year={2024}
}
```