---
dataset_info:
  features:
  - name: image_id
    dtype: string
  - name: image
    dtype: image
  - name: prompt_open
    dtype: string
  - name: prompt_close
    dtype: string
  - name: objects
    dtype: string
  - name: relationships
    dtype: string
  splits:
  - name: train
    num_bytes: 807872243.0
    num_examples: 5000
  download_size: 781301448
  dataset_size: 807872243.0
task_categories:
- image-text-to-text
license: apache-2.0 # Please replace with the actual license.
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

This dataset, derived from VG150, provides image-text pairs for scene graph generation. Each example pairs an image with an open-ended prompt, a more constrained "close" prompt, the objects present in the image, and the relationships between them. It is intended for training and evaluating models that generate scene graphs from images and textual prompts.

This dataset is used in the paper [R1-SGG: Compile Scene Graphs with Reinforcement Learning](https://huggingface.co/papers/2504.13617).

The dataset is structured as follows (an illustrative record is sketched after the list):

* **image_id:** Unique identifier for the image.
* **image:** The image itself.
* **prompt_open:** An open-ended prompt for generating a scene graph from the image.
* **prompt_close:** A more constrained ("close") prompt for the same image.
* **objects:** The objects present in the image, stored as a string.
* **relationships:** The relationships between those objects, stored as a string.
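
For illustration, a single record might look like the sketch below. All field values are hypothetical, and the JSON serialization of `objects` and `relationships` is an assumption (see the loading example further down for how to inspect real records):

```python
# A hypothetical record; values are illustrative, not taken from the dataset.
example = {
    "image_id": "2370201",  # unique image identifier
    "image": None,          # decoded to a PIL image by `datasets` on access
    "prompt_open": "Describe the scene graph of this image.",
    "prompt_close": "Generate a scene graph using only the given object and predicate categories.",
    # Assumed serialization: JSON-encoded strings.
    "objects": '[{"name": "man", "bbox": [10, 20, 110, 220]}]',
    "relationships": '[{"subject": 0, "predicate": "riding", "object": 1}]',
}
```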

**Data Usage:**

The dataset can be loaded using the `datasets` library:

```python
from datasets import load_dataset

# Training set: 5,000 examples (see `dataset_info` above).
db_train = load_dataset("JosephZ/vg150_train_sgg_prompt")["train"]

# The validation set is published as a separate dataset whose examples
# live under its "train" split.
db_val = load_dataset("JosephZ/vg150_val_sgg_prompt")["train"]
```
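
Once loaded, individual records can be inspected directly. The sketch below assumes that `objects` and `relationships` are JSON-encoded strings, which their `string` dtype suggests but does not guarantee; adjust the parsing if the actual serialization differs:

```python
import json

example = db_train[0]

print(example["image_id"])     # unique image identifier
print(example["prompt_open"])  # open-ended prompt text
print(example["image"].size)   # `image` is decoded to a PIL image by `datasets`

# Assumption: `objects` and `relationships` are JSON-encoded strings.
objects = json.loads(example["objects"])
relationships = json.loads(example["relationships"])
print(f"{len(objects)} objects, {len(relationships)} relationships")
```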

For training and inference instructions, please refer to the original README of the R1-SGG project.