---
license: apache-2.0
dataset_info:
- config_name: temporal_order
features:
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: label
dtype: string
splits:
- name: test
num_bytes: 211564460.0
num_examples: 720
download_size: 202986206
dataset_size: 211564460.0
- config_name: timelapse_estimation
features:
- name: image_1
dtype: image
- name: image_2
dtype: image
- name: label
dtype: string
splits:
- name: test
num_bytes: 48450099.0
num_examples: 125
download_size: 48184050
dataset_size: 48450099.0
configs:
- config_name: temporal_order
data_files:
- split: test
path: temporal_order/test-*
- config_name: timelapse_estimation
data_files:
- split: test
path: timelapse_estimation/test-*
---
### **Dataset Description**
The Temporal-VQA dataset is a challenging benchmark designed to evaluate the temporal reasoning capabilities of Multimodal Large Language Models (MLLMs) on tasks requiring visual temporal understanding. It emphasizes real-world temporal dynamics through two core evaluation tasks:
- **Temporal Order Understanding:** MLLMs are presented with temporally consecutive frames from video sequences and must determine the correct sequence of events, assessing their ability to comprehend event progression over time.
- **Time-Lapse Estimation:** MLLMs are shown pairs of images taken at varying time intervals and must estimate the time elapsed between them by selecting from multiple-choice options that span from seconds to years.
### **GPT-4o Usage**
- The __Temporal Order Understanding__ task contains 720 image pairs, of which 360 are unique pairs created by sampling frames from copyright-free videos; the remaining 360 are the same pairs with the image order reversed.
- The __Time-Lapse Estimation__ task contains 125 image pairs compiled from copyright-free sources. In each pair, _image_1_ is the image that was taken first and _image_2_ is the one taken later.
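The reversed-pair construction for the temporal-order task can be illustrated with a small sketch (the frame identifiers below are hypothetical; the actual dataset stores the images themselves):

```python
# Hypothetical frame pairs sampled from videos; each tuple is (earlier, later).
unique_pairs = [("clip1_t0", "clip1_t1"), ("clip2_t0", "clip2_t1")]

# Every unique pair also appears with the images swapped, doubling the count.
reversed_pairs = [(b, a) for (a, b) in unique_pairs]
all_pairs = unique_pairs + reversed_pairs

print(len(all_pairs))  # 2 * len(unique_pairs); in the benchmark: 360 + 360 = 720
```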
```python
from datasets import load_dataset
import base64
import requests
import os
from io import BytesIO

API_KEY = os.environ.get("API_KEY")

def encode_image(image):
    """Serialize a PIL image to a base64-encoded JPEG string."""
    buffer = BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")  # ensure a JPEG-compatible mode
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

def get_gpt_response(image1, image2, query):
    """Send a text query plus two base64-encoded images to GPT-4o."""
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    payload = {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": query},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image1}"}},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image2}"}},
                ],
            }
        ],
        "max_tokens": 512,
    }
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()

### TASK 1: Temporal Order Understanding
dataset = load_dataset("fazliimam/temporal-vqa", "temporal_order", split="test")
image1 = encode_image(dataset[0]["image_1"])
image2 = encode_image(dataset[0]["image_2"])

prompt_1 = "Did the event in the first image happen before the event in the second image? Provide your answer in dictionary format: {'Answer':'True or False', 'Reasoning':'Brief explanation of your choice'}"
# Alternative phrasing of the same task:
prompt_2 = "Between these two images, which one depicts the event that happened first? Provide your answer in dictionary format: {'Answer':'First image or Second image', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt_1)
print(response)

### TASK 2: Time-Lapse Estimation
dataset = load_dataset("fazliimam/temporal-vqa", "timelapse_estimation", split="test")
image1 = encode_image(dataset[0]["image_1"])
image2 = encode_image(dataset[0]["image_2"])

prompt = "In the given image, estimate the time that has passed between the first image (left) and the second image (right). Choose one of the following options: A. Less than 15 seconds B. Between 2 minutes to 15 minutes C. Between 1 hour to 12 hours D. Between 2 days to 30 days E. Between 4 months to 12 months F. More than 3 years. Provide your answer in dictionary format: {'Answer':'Selected option', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt)
print(response)
```
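Since the prompts ask for a dictionary-style answer, the model's reply text can be parsed for scoring. A minimal sketch (assuming the reply follows the requested format; real replies may need more robust handling):

```python
import ast
import re

def parse_answer(reply_text):
    """Extract the {'Answer': ..., 'Reasoning': ...} dict from a model reply, or None."""
    match = re.search(r"\{.*\}", reply_text, flags=re.DOTALL)
    if match is None:
        return None
    try:
        return ast.literal_eval(match.group(0))
    except (ValueError, SyntaxError):
        return None

reply = "{'Answer': 'First image', 'Reasoning': 'The cake is uncut in the first image.'}"
print(parse_answer(reply)["Answer"])  # First image
```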
### **Cite Us**
```
@misc{imam2025multimodalllmsvisualtemporal,
title={Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No!},
author={Mohamed Fazli Imam and Chenyang Lyu and Alham Fikri Aji},
year={2025},
eprint={2501.10674},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.10674},
}
```