---
language: en
license: mit
size_categories:
- 1K<n<10K
task_categories:
- image-to-text
- visual-question-answering
tags:
- spatial
- dialogue
- visual-grounding
dataset_info:
  features:
  - name: instance_id
    dtype: int32
  - name: scene_key
    dtype: string
  - name: listener_view_image
    dtype: image
  - name: speaker_view_image
    dtype: image
  - name: human_speaker_message
    dtype: string
  - name: speaker_elapsed_time
    dtype: float32
  - name: positions
    dtype: string
  - name: listener_target_bbox
    dtype: string
  - name: listener_distractor_0_bbox
    dtype: string
  - name: listener_distractor_1_bbox
    dtype: string
  - name: speaker_target_bbox
    dtype: string
  - name: speaker_distractor_0_bbox
    dtype: string
  - name: speaker_distractor_1_bbox
    dtype: string
  - name: human_listener_message
    dtype: string
  - name: listener_elapsed_time
    dtype: float32
  - name: type
    dtype: string
  splits:
  - name: validation
    num_bytes: 6532068217.4
    num_examples: 2970
  download_size: 6396378608
  dataset_size: 6532068217.4
configs:
- config_name: default
  data_files:
  - split: validation
    path: data/validation-*
---

# Dataset Card for Multi-Agent Referential Communication Dataset

<div align="center">
<img src="assets/main.png" alt="Example scene" width="400"/>

*Example scene showing the speaker (left) and listener (right) views.*
</div>

## Dataset Details

### Dataset Description

This dataset contains spatial dialogue data for multi-agent referential communication tasks in 3D environments. It includes pairs of images showing speaker and listener views within photorealistic indoor scenes, along with natural language descriptions of target object locations.

The key feature of this dataset is that it captures communication between two agents with different physical perspectives in a shared 3D space. Each agent has their own unique viewpoint of the scene, requiring them to consider each other's perspectives when generating and interpreting spatial references.

### Dataset Summary

- **Size**: 2,970 dialogue instances across 1,485 scenes
- **Total Scenes Generated**: 27,504 scenes (24,644 train, 1,485 validation, 1,375 test)
- **Task Type**: Referential communication between embodied agents
- **Language(s)**: English
- **License**: MIT
- **Curated by**: University of California, Berkeley
- **Time per Task**: Median 33.0s for speakers, 10.5s for listeners
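
The validation split can be loaded with the `datasets` library. A minimal sketch, assuming the dataset is accessed by its Hugging Face Hub repository id (shown here as a placeholder):

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual Hub path of this dataset.
ds = load_dataset("ORG/multi-agent-referential-communication", split="validation")

print(ds.num_rows)                       # 2970 dialogue instances
example = ds[0]
print(example["scene_key"])              # base environment / scene identifier
print(example["human_speaker_message"])  # speaker's referring expression
speaker_img = example["speaker_view_image"]    # PIL.Image, 1024x1024
listener_img = example["listener_view_image"]  # PIL.Image, 1024x1024
```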

## Dataset Structure

Each instance contains (see the access example after this list):
- Speaker view image (1024x1024 resolution)
- Listener view image (1024x1024 resolution)
- Natural language referring expression from human speaker
- Target object location
- Listener object selection
- Scene metadata including:
  - Agent positions and orientations
  - Referent placement method (random vs adversarial)
  - Base environment identifier
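
The bounding-box and `positions` fields are stored as strings in the feature schema above. A minimal access sketch, assuming those strings are JSON-encoded (the exact serialization should be checked against the data); the repository id is again a placeholder:

```python
import json
from datasets import load_dataset

ds = load_dataset("ORG/multi-agent-referential-communication", split="validation")  # placeholder id
ex = ds[0]

# Assumption: bbox and position fields are JSON-encoded strings.
target_bbox = json.loads(ex["listener_target_bbox"])
distractor_bboxes = [json.loads(ex[f"listener_distractor_{i}_bbox"]) for i in range(2)]
agent_positions = json.loads(ex["positions"])

print(target_bbox, distractor_bboxes, agent_positions)
```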

## Dataset Creation

1. Base environments are drawn from ScanNet++ (450 high-quality 3D indoor environments).
2. Scene generation process (see the sketch after this list):
   - Place two agents with controlled relative orientations (0° to 180°)
   - Place 3 referent objects using either random or adversarial placement
   - Render an image from each agent's perspective
   - Apply quality filtering using GPT-4V
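
The generation code itself is not part of this card; the sketch below only illustrates the two controlled parameters listed above (relative agent orientation between 0° and 180°, and random vs. adversarial referent placement). The yaw convention and output layout are illustrative assumptions.

```python
import random

def sample_scene_config(max_relative_deg: float = 180.0) -> dict:
    """Illustrative sketch of the controlled scene-generation parameters.

    Only the 0-180 degree relative-orientation range and the
    random/adversarial placement choice come from the card; everything
    else here is an assumption for illustration.
    """
    speaker_yaw = random.uniform(0.0, 360.0)
    relative = random.uniform(0.0, max_relative_deg)
    listener_yaw = (speaker_yaw + relative) % 360.0
    return {
        "speaker_yaw_deg": speaker_yaw,
        "listener_yaw_deg": listener_yaw,
        "relative_orientation_deg": relative,
        "referent_placement": random.choice(["random", "adversarial"]),
        "num_referents": 3,
    }
```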

## Citation

**BibTeX:**
```
@inproceedings{tang2024grounding,
  title={Grounding Language in Multi-Perspective Referential Communication},
  author={Tang, Zineng and Mao, Lingjun and Suhr, Alane},
  booktitle={Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2024}
}
```

## Dataset Card Contact

Contact the authors at [email protected]