---
license: apache-2.0
dataset_info:
  features:
  - name: id
    dtype: string
  - name: image
    dtype: image
  - name: descriptive_q
    dtype: string
  - name: descriptive_a
    dtype: string
  - name: reasoning_q
    dtype: string
  - name: reasoning_a
    dtype: string
  splits:
  - name: test
    num_bytes: 277763523.0
    num_examples: 50
  download_size: 277777671
  dataset_size: 277763523.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

## Introduction
NutritionQA is a novel benchmark for understanding photos of nutrition labels, with practical applications such as aiding users with visual impairments.
NutritionQA contains 50 photos of nutrition labels, each paired with a descriptive question and a reasoning question (the latter requires multi-hop reasoning).
The figure below shows that open VLMs perform poorly on NutritionQA even after training on millions of images. Our [code-guided synthetic data generation system](https://github.com/allenai/pixmo-docs) can create targeted data (synthetic nutrition labels) for fine-tuning VLMs, achieving competitive performance with significantly less training data.

<img src="https://cdn-uploads.huggingface.co/production/uploads/62f6c68904e5e02f82b04690/ggn6LrNcVX8huTvWmA7_w.png" width="600">
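The dataset can be loaded with the 🤗 `datasets` library. Below is a minimal sketch; the repository ID is a placeholder, so substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset

# Load the test split (replace the repo ID with this dataset's actual Hub path).
dataset = load_dataset("<org>/NutritionQA", split="test")

example = dataset[0]
print(example["id"])
print(example["descriptive_q"], "->", example["descriptive_a"])
print(example["reasoning_q"], "->", example["reasoning_a"])
print(example["image"].size)  # PIL image of the nutrition label photo
```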

## Evaluation
To compute the metrics, use the following LLM-as-judge script, which asks `gpt-4o-mini` whether each prediction matches the ground-truth answer:

```python
import os


def score(question: str, target: str, prediction: str) -> bool:
    """LLM-as-judge: ask a GPT-4o-mini grader whether the prediction matches the target answer."""
    prompt = f"""You are tasked with evaluating a model's prediction for a question-answering task. Your goal is to determine if the model's prediction is correct based on its meaning, not necessarily exact wording.

Here is the question that was asked:
<q>
{question}
</q>

The ground-truth answer to this question is:
<gt>
{target}
</gt>

The model's prediction for this question is:
<pred>
{prediction}
</pred>

To evaluate the model's prediction, follow these steps:
1. Carefully read the question, correct answer, and model's prediction.
2. Compare the meaning of the model's prediction to the correct answer, not just the exact wording.
3. Consider if the model's prediction accurately answers the question, even if it uses different phrasing or includes additional information.
4. Determine if any discrepancies between the prediction and correct answer are significant enough to consider the prediction incorrect.
5. For numerical answers, consider if the prediction is within a reasonable range (5%) of the correct answer.

Please answer with yes or no to indicate if the model's prediction is correct based on the question and correct answer provided.
"""

    # `get_chat_response` is a thin wrapper around the OpenAI chat completions API
    # that sends `prompt` as a single user message and returns the reply text.
    response = get_chat_response(prompt=prompt,
                                 api_key=os.getenv("OPENAI_API_KEY"),
                                 model="gpt-4o-mini-2024-07-18")

    response = response.lower().strip()

    # A response containing "yes" counts as correct; anything else counts as incorrect.
    return "yes" in response
```
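The script above assumes a `get_chat_response` helper. A minimal sketch of such a helper using the official `openai` Python client, together with a loop that scores the full test split, might look like the following (the helper body, the placeholder repo ID, and the `predict` function are assumptions, not part of the released evaluation code):

```python
from datasets import load_dataset
from openai import OpenAI


def get_chat_response(prompt: str, api_key: str, model: str) -> str:
    """Send a single user message to the OpenAI chat completions API and return the reply text."""
    client = OpenAI(api_key=api_key)
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


# Score a VLM on both question types and report overall accuracy.
# `predict` is your model's inference function: (image, question) -> answer string.
dataset = load_dataset("<org>/NutritionQA", split="test")  # replace with the actual repo ID
correct, total = 0, 0
for example in dataset:
    for q_key, a_key in [("descriptive_q", "descriptive_a"), ("reasoning_q", "reasoning_a")]:
        prediction = predict(example["image"], example[q_key])
        correct += score(example[q_key], example[a_key], prediction)
        total += 1
print(f"Accuracy: {correct / total:.2%}")
```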

## Citation
Please cite our papers if you use this dataset in your work:

```
@article{yang2025scaling,
      title={Scaling Text-Rich Image Understanding via Code-Guided Synthetic Multimodal Data Generation}, 
      author={Yue Yang and Ajay Patel and Matt Deitke and Tanmay Gupta and Luca Weihs and Andrew Head and Mark Yatskar and Chris Callison-Burch and Ranjay Krishna and Aniruddha Kembhavi and Christopher Clark},
      journal={arXiv preprint arXiv:2502.14846},
      year={2025}
}
```

```
@article{deitke2024molmo,
  title={Molmo and pixmo: Open weights and open data for state-of-the-art multimodal models},
  author={Deitke, Matt and Clark, Christopher and Lee, Sangho and Tripathi, Rohun and Yang, Yue and Park, Jae Sung and Salehi, Mohammadreza and Muennighoff, Niklas and Lo, Kyle and Soldaini, Luca and others},
  journal={arXiv preprint arXiv:2409.17146},
  year={2024}
}
```