---
task_categories:
- text-generation
language:
- en
multilinguality:
- monolingual
tags:
- nlg
- generation
- drone
- data-to-text
- agent
pretty_name: drone
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "all_train.csv"
  - split: val
    path: "all_val.csv"
  - split: test
    path: "all_test.csv"
  default: true
---

# Dataset Card for **Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation (RAMP)**

**[Hugging Face Dataset](https://huggingface.co/datasets/tonyhong/ramp)** | **[GitHub Repository](https://github.com/tony-hong/ramp)** | **[Paper](https://aclanthology.org/2024.lrec-main.1224v2.pdf)** | **[GitLab Repository](https://gitlab.com/forfrt/drone/-/tree/main?ref_type=heads)**

<!-- Provide a quick summary of the dataset. -->

RAMP provides a prepared version of a low-resource **data-to-text** corpus for **drone handover message generation**: structured sensor records (status + time-step object lists) paired with natural-language “handover” messages describing critical situations. The release includes raw/filtered splits and domain-specific subsets (e.g., *urban, rural, ocean, desert, island, factory, disturbance, misc*), suitable for training and evaluating retrieval-augmented and prompt-tuned models.

---

## Dataset Details

### Dataset Links

- **Paper (LREC-COLING 2024):** *Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation*
- **Code & Data:** GitHub repository (experiments)
- **HF Dataset:** `tonyhong/ramp` (CSV files with train/val/test + subsets)

### Dataset Description

The dataset targets **low-resource data-to-text (D2T)** generation, where models verbalize *structured* inputs into faithful messages. Instances pair:

- **Input:** a drone **status** dictionary (e.g., wind speed, battery level, altitude, pilot experience) and a time-ordered list of **time-step objects** near the flight path (type, distance, moving/in-path flags, timestamps).
- **Output:** a **handover message** (English) that surfaces only *critical* information (e.g., “Risk of physical damage! There is a castle in the drone’s flight path at a distance of 2.5 m.”).

The RAMP paper reports a **low-resource** setup with ~**1.6K data points** (input–output pairs). Inputs average **~541 tokens** (range ~274–2481) and outputs average **~149 tokens** (range ~29–1263), reflecting the long, information-dense inputs common in real-time settings. The dataset is organized to support **retrieval-augmented** few-shot prompting and **modular prompt tuning**.

- **Curated by:** Ruitao Feng, Xudong Hong, Mayank Jobanputra, Mattes Warning, Vera Demberg
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

> **Provenance:** The content ultimately derives from a drone sensor/utterance corpus introduced by Chang et al. (LREC 2022). RAMP repackages/extends the resource with splits, filtered variants, and files that support retrieval-augmented and modular-prompt workflows.

---

## Dataset Structure

The dataset is distributed as **CSV** files. You’ll find:

- **Top-level splits**
  - `all_raw_train.csv`, `all_raw_val.csv`, `all_raw_test.csv`
  - Filtered counterparts: `*_filtered_with_oneshot.csv`
- **Domain subsets** (each with `train/val/test`): `urban_*`, `rural_*`, `ocean_*`, `desert_*`, `island_*`, `factory_*`, `disturbance_*`, `misc_*`
- **Auxiliary files:** e.g., `DroneDataset_keywords_paraphrase_latest - Sheet1.csv` (keywords/paraphrases) and compact “drone_v*” CSVs for minimal examples.

### Data Fields (columns)

> Field names below reflect the `all_*` CSVs; JSON is stored as strings.

- **`summary`** *(string)* — The handover message text. Often contains multiple segments with inline timestamps separated by `[SEP]`.
- **`status`** *(JSON as string)* — A single time-invariant status dict for the 10-second snapshot (e.g., wind speed, battery level, altitude, pilot experience, criticality flags).
- **`timestep`** *(JSON as string)* — A list of detected objects per second with attributes: `name`, `Type`, `Moving`, `InPath`, `Distance`, `time_stamp`, `ID_obj`.
- **`related_status`** *(JSON as string)* — A *reduced* set of status attributes most relevant to the handover (critical attributes).
- **`related_timestep`** *(JSON as string)* — A *reduced* set of time-step object info relevant to the handover.
- **`related_sensor_data`** *(JSON as string)* — Bundles `status` + `timestep` for convenience (subsetted to relevant parts).
- **`templates`** *(string)* — Template-like text variants used for retrieval/one-shot prompting (if present).
- **`link`** *(string URL)* — Pointer to a short video snapshot (Google Drive) illustrating the scenario (may be unavailable/archived).
- **`source`** *(string/int)* — Internal identifier/index for traceability.

> Notes: Some CSVs include long JSON strings; use robust CSV readers (with `quotechar` and `escapechar` set appropriately). Filtered files remove noisy rows and provide a consistent one-shot example alongside each item for RAMP-style prompting.
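The JSON-as-string columns can be decoded with the standard library. A minimal sketch on an invented toy row (real rows are much longer, and the attribute values here are made up for illustration):

```python
import csv
import io
import json

# A toy row mimicking the documented schema; the real CSVs use the same
# column names but contain far longer JSON strings.
raw = io.StringIO(
    'summary,status,timestep\n'
    '"Risk of physical damage! [SEP] Battery low.",'
    '"{""wind speed"": 4, ""battery level"": 15}",'
    '"[{""name"": ""castle"", ""Distance"": 2.5, ""InPath"": true}]"\n'
)

for row in csv.DictReader(raw):
    status = json.loads(row["status"])      # status dict stored as a JSON string
    timestep = json.loads(row["timestep"])  # list of per-second objects
    print(status["battery level"], timestep[0]["name"])
```

The `csv` module's default double-quote handling (`""` inside quoted fields) matches how spreadsheet-style CSV exporters typically escape embedded JSON.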

### Splits

- **Train/Validation/Test:** Provided explicitly (`all_raw_*`).
- **Environment-specific splits:** Each environment (e.g., `urban_test.csv`) mirrors the global schema and supports domain-generalization studies.

---

## Uses

### Direct Use

- **Data-to-Text Generation:** Train/evaluate models (T5/Flan-T5/LED/others) on long, structured inputs to generate faithful handover messages.
- **Retrieval-Augmented Prompting:** Use the *filtered_with_oneshot* files or the `templates`/`related_*` columns to build **RAG-style** prompts (attribute-similar examples).
- **Hallucination Analysis:** Evaluate faithfulness via metrics referencing both input and output (e.g., PARENT).
- **Domain Generalization:** Use the environment splits to test seen/unseen domain transfer.
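The "attribute-similar examples" idea behind retrieval-augmented prompting can be sketched as picking the candidate whose status shares the most attribute–value pairs with the query. This is a simplified stand-in, not the paper's actual retriever, and the data below is invented:

```python
def retrieve_oneshot(query_status, candidates):
    """Return the candidate whose `status` dict shares the most
    attribute-value pairs with the query (a toy similarity; RAMP's
    actual retrieval scoring may differ)."""
    def overlap(cand):
        return sum(
            1 for key, value in query_status.items()
            if cand["status"].get(key) == value
        )
    return max(candidates, key=overlap)

# Invented query and candidate pool for illustration.
query = {"battery level": "low", "visibility": "poor"}
pool = [
    {"status": {"battery level": "low", "visibility": "good"},
     "summary": "Battery low!"},
    {"status": {"battery level": "low", "visibility": "poor"},
     "summary": "Battery low, poor visibility!"},
]
best = retrieve_oneshot(query, pool)

# The retrieved pair then serves as the one-shot example in the prompt.
prompt = f"Example: {best['summary']}\nNow describe the new situation."
```

The *filtered_with_oneshot* files already ship a retrieved example per row, so this step is only needed when building prompts from the raw splits.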

### Out-of-Scope Use

- **Operational decision-making for real drones:** This resource is **research-only**; do not deploy generated text for safety-critical control.
- **Privacy-sensitive analytics:** No personal data is included; the dataset is not intended for identifying individuals or locations.

---

## Dataset Creation

### Curation Rationale

RAMP packages a **low-resource** D2T task that stresses **faithfulness** under long, structured inputs. The files facilitate **retrieval-augmented** few-shot prompting and **modular prompt tuning** (attribute-aware routing) to reduce hallucinations.

### Source Data

- **Origin:** Drone sensor/utterance corpus introduced by Chang et al. (LREC 2022), comprising 10-second snapshots across **8 environments** (*disturbance, urban, rural, ocean, desert, island, factory, misc*) with paired handover messages.
- **Attributes:** ~**25** status/scene attributes (e.g., altitude, drone speed, battery level, visibility) plus per-second object lists (type, distance, moving/in-path).

### Data Collection and Processing

- **Status & Time-step Extraction:** Status dicts and object lists were manually annotated per video snapshot (1 Hz).
- **Criticality Mapping:** Description Logic (DL) rules/expressions identify **critical** attribute–value pairs; these appear in `related_status`/`related_timestep`.
- **Preprocessing for RAMP:** CSV packaging, filtered variants, and prompts/templates to support **retrieval** of attribute-similar examples and **modular** prompt routing.
- **Statistics (RAMP setup):** Inputs average ~**540.8** tokens; outputs average ~**148.5** tokens; ~**1.6k** pairs in total.
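Length statistics like those above can be approximated directly from the CSV text. A sketch using whitespace tokenization on invented sample messages (the paper's counts use its own tokenizer, so exact numbers will differ):

```python
def avg_len(texts):
    """Mean whitespace-token count; a rough proxy for the paper's
    token statistics, which depend on the tokenizer used."""
    return sum(len(t.split()) for t in texts) / len(texts)

# Invented sample summaries, standing in for the `summary` column.
summaries = [
    "Risk of physical damage! There is a castle in the flight path.",
    "Battery level is low. Return to base.",
]
print(avg_len(summaries))  # mean token count over the sample
```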

### Who are the source data producers?

- **Videos & Sensor Records:** Collected/curated by the original drone dataset authors (Chang et al., 2022).
- **Handover Messages:** Authored by the original dataset annotators; RAMP includes them verbatim, plus paraphrases/templates where indicated.

---

## Annotations

### Annotation process

The **original** dataset includes human-authored handover messages and DL-based content-selection cues. RAMP adds no new manual labels; it surfaces **relevant subsets** (`related_*`) and templated examples to support retrieval-augmented prompting. See the paper for details.

### Who are the annotators?

Original dataset annotators (per Chang et al., 2022). RAMP curators: the RAMP paper authors.

---

## Personal and Sensitive Information

No personal or sensitive information is included. Links may point to scenario videos of environments/objects without identifiable persons. No worker IDs or personal metadata are included.

---

## Bias, Risks, and Limitations

- **Domain specificity:** Drone scenarios; transfer to unrelated domains may be limited.
- **Language:** English-only messages.
- **Long inputs:** Models with short context windows may truncate inputs; use long-context architectures (e.g., LED) or careful chunking.
- **Hallucinations:** Despite DL cues and retrieval, faithful grounding is non-trivial; evaluate with input-aware metrics and human review.
- **Licensing of linked media:** Some `link` URLs point to externally hosted videos; availability and terms may vary.

---

## How to Load

```python
from datasets import load_dataset

ds = load_dataset("tonyhong/ramp")
train = ds["train"]  # or use config/splits as hosted

# Tip: If the viewer/loader errors on CSV quoting, download locally and load
# with a robust parser (e.g., pandas with engine="python" and proper
# quotechar/escapechar).
```
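Following the tip in the block above, local loading with pandas might look like this. The file is a toy stand-in written on the fly (its contents are invented); point `read_csv` at a downloaded split instead:

```python
import json
import os
import tempfile

import pandas as pd

# Write a tiny CSV with an embedded JSON string, mimicking the schema.
path = os.path.join(tempfile.mkdtemp(), "toy.csv")
with open(path, "w", encoding="utf-8") as f:
    f.write('summary,status\n"Battery low.","{""battery level"": 15}"\n')

# engine="python" with an explicit quotechar is more forgiving of long,
# quote-heavy JSON fields than the default C parser.
df = pd.read_csv(path, engine="python", quotechar='"')
status = json.loads(df.loc[0, "status"])
```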

## Citation

**RAMP paper (LREC-COLING 2024)**
Ruitao Feng, Xudong Hong, Mayank Jobanputra, Mattes Warning, and Vera Demberg. 2024. *Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation.*

**Upstream dataset (LREC 2022)**
Ernie Chang, Alisa Kovtunova, Stefan Borgwardt, Vera Demberg, Kathryn Chapman, and Hui-Syuan Yeh. 2022. *Logic-Guided Message Generation from Raw Real-Time Sensor Data.*

```bibtex
@inproceedings{feng2024ramp,
  title={Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation},
  author={Feng, Ruitao and Hong, Xudong and Jobanputra, Mayank and Warning, Mattes and Demberg, Vera},
  booktitle={Proceedings of LREC-COLING 2024},
  year={2024}
}

@inproceedings{chang2022drone,
  title={Logic-Guided Message Generation from Raw Real-Time Sensor Data},
  author={Chang, Ernie and Kovtunova, Alisa and Borgwardt, Stefan and Demberg, Vera and Chapman, Kathryn and Yeh, Hui-Syuan},
  booktitle={Proceedings of LREC 2022},
  pages={6899--6908},
  year={2022}
}
```

## Dataset Card Authors

Xudong Hong (maintainer); with contributions from Ruitao Feng, Mayank Jobanputra, Mattes Warning, Vera Demberg.

## Dataset Card Contact

---

## Disclaimer

RAMP repackages data originating from a drone sensor/utterance corpus. The CSVs may contain long JSON strings; handle parsing carefully. Linked videos are provided for academic/research use; availability is not guaranteed. **Do not** use this dataset to operate real drones or for any safety-critical decision making.