---
license: apache-2.0
dataset_info:
- config_name: temporal_order
  features:
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 211564460
    num_examples: 720
  download_size: 202986206
  dataset_size: 211564460
- config_name: timelapse_estimation
  features:
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: label
    dtype: string
  splits:
  - name: test
    num_bytes: 48450099
    num_examples: 125
  download_size: 48184050
  dataset_size: 48450099
configs:
- config_name: temporal_order
  data_files:
  - split: test
    path: temporal_order/test-*
- config_name: timelapse_estimation
  data_files:
  - split: test
    path: timelapse_estimation/test-*
---
## Dataset Description
The Temporal-VQA dataset is a challenging benchmark designed to evaluate the temporal reasoning capabilities of Multimodal Large Language Models (MLLMs) on tasks requiring visual temporal understanding. It emphasizes real-world temporal dynamics through two core evaluation tasks:
- Temporal Order Understanding: This task presents MLLMs with temporally consecutive frames from video sequences. The models must analyze and determine the correct sequence of events, assessing their ability to comprehend event progression over time.
- Time-Lapse Estimation: In this task, MLLMs are shown pairs of images taken at varying time intervals and must estimate the time elapsed between them by selecting from multiple-choice options that span from seconds to years. (A minimal loading sketch follows this list.)
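Both tasks are published as separate configs with a single `test` split, as declared in the metadata above. The sketch below is one way to load each config and inspect an example; the field names (`image_1`, `image_2`, `label`) come from the dataset features, and the printed values are illustrative only.

```python
from datasets import load_dataset

# Load the test split of each config (names taken from the metadata above).
temporal_order = load_dataset("fazliimam/temporal-vqa", "temporal_order", split="test")
timelapse = load_dataset("fazliimam/temporal-vqa", "timelapse_estimation", split="test")

print(len(temporal_order), len(timelapse))  # expected: 720 and 125 examples

# Each example holds two PIL images and a string label.
example = temporal_order[0]
print(example["image_1"].size, example["image_2"].size, example["label"])
```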
## GPT-4o Usage
- The Temporal Order Understanding task contains 720 image pairs, of which 360 are unique pairs created by sampling frames from copyright-free videos; the remaining 360 are the same pairs with the image order reversed.
- The Time-Lapse Estimation task contains 125 image pairs compiled from copyright-free sources. `image_1` is the image taken earlier and `image_2` is the image taken later (visualized in the sketch below).
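As a quick sanity check on the pair ordering described above, the following sketch displays one `timelapse_estimation` pair side by side with its ground-truth label as the title; matplotlib is used here purely for illustration and is not required by the benchmark.

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# Take the first time-lapse pair from the test split.
pair = load_dataset("fazliimam/temporal-vqa", "timelapse_estimation", split="test")[0]

fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(pair["image_1"]); axes[0].set_title("image_1 (earlier)"); axes[0].axis("off")
axes[1].imshow(pair["image_2"]); axes[1].set_title("image_2 (later)"); axes[1].axis("off")
fig.suptitle(f"label: {pair['label']}")
plt.show()
```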
```python
from datasets import load_dataset
import base64
import requests
import os
from io import BytesIO

# OpenAI API key is read from the environment.
API_KEY = os.environ.get("API_KEY")

def encode_image(image):
    # Encode a PIL image from the dataset as a base64 JPEG string.
    buffer = BytesIO()
    image.convert("RGB").save(buffer, format="JPEG")
    return base64.b64encode(buffer.getvalue()).decode("utf-8")

def get_gpt_response(image1, image2, query):
    # Send the text query plus the two base64-encoded images to the Chat Completions API.
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}"
    }
    payload = {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": query},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image1}"}},
                    {"type": "image_url", "image_url": {"url": f"data:image/jpeg;base64,{image2}"}}
                ]
            }
        ],
        "max_tokens": 512
    }
    response = requests.post("https://api.openai.com/v1/chat/completions", headers=headers, json=payload)
    return response.json()

### TASK 1: Temporal Order Understanding
dataset = load_dataset('fazliimam/temporal-vqa', 'temporal_order', split='test')
image1 = encode_image(dataset[0]['image_1'])
image2 = encode_image(dataset[0]['image_2'])

# Either prompt can be used: prompt_1 asks a True/False question, prompt_2 asks which image came first.
prompt_1 = "Did the event in the first image happen before the event in the second image? Provide your answer in dictionary format: {'Answer':'True or False', 'Reasoning':'Brief explanation of your choice'}"
prompt_2 = "Between these two images, which one depicts the event that happened first? Provide your answer in dictionary format: {'Answer':'First image or Second image', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt_1)
print(response)

### TASK 2: Time-Lapse Estimation
dataset = load_dataset('fazliimam/temporal-vqa', 'timelapse_estimation', split='test')
image1 = encode_image(dataset[0]['image_1'])
image2 = encode_image(dataset[0]['image_2'])

prompt = "In the given image, estimate the time that has passed between the first image (left) and the second image (right). Choose one of the following options: A. Less than 15 seconds B. Between 2 minutes to 15 minutes C. Between 1 hour to 12 hours D. Between 2 days to 30 days E. Between 4 months to 12 months F. More than 3 years. Provide your answer in dictionary format: {'Answer':'Selected option', 'Reasoning':'Brief explanation of your choice'}"

response = get_gpt_response(image1, image2, prompt)
print(response)
```
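The prompts ask the model to reply as a Python-dict-like string, so the raw completion still has to be parsed before it can be compared against the `label` field. A minimal sketch continuing from the snippet above could look like the following; the helper name `parse_answer` and the error handling are our own additions, not part of the benchmark.

```python
import ast

def parse_answer(api_response):
    """Extract the 'Answer' field from the model's dict-formatted reply.

    Returns None if the reply does not parse; a real evaluation may need
    more robust handling of extra text around the dictionary.
    """
    text = api_response["choices"][0]["message"]["content"]
    try:
        # The prompt requests a dict literal such as {'Answer': 'True', 'Reasoning': '...'}
        parsed = ast.literal_eval(text.strip())
        return parsed.get("Answer")
    except (ValueError, SyntaxError):
        return None

answer = parse_answer(response)
print("model answer:", answer, "| ground truth:", dataset[0]["label"])
```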
## Cite Us
```bibtex
@misc{imam2025multimodalllmsvisualtemporal,
      title={Can Multimodal LLMs do Visual Temporal Understanding and Reasoning? The answer is No!},
      author={Mohamed Fazli Imam and Chenyang Lyu and Alham Fikri Aji},
      year={2025},
      eprint={2501.10674},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.10674},
}
```