---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
multilinguality:
  - monolingual
tags:
  - nlg
  - generation
  - drone
  - data-to-text
  - agent
pretty_name: drone
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: all_train.csv
      - split: val
        path: all_val.csv
      - split: test
        path: all_test.csv
    default: true
---

# Dataset Card for Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation (RAMP)

Hugging Face Dataset | GitHub Repository | Paper | GitLab Repository

RAMP provides a prepared version of a low-resource data-to-text corpus for drone handover message generation: structured sensor records (status + time-step object lists) paired with natural-language “handover” messages describing critical situations. The release includes raw/filtered splits and domain-specific subsets (e.g., urban, rural, ocean, desert, island, factory, disturbance, misc), suitable for training and evaluating retrieval-augmented and prompt-tuned models.


## Dataset Details

### Dataset Links

- **Paper (LREC-COLING 2024):** Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation
- **Code & Data:** GitHub repo (experiments)
- **HF Dataset:** tonyhong/ramp (CSV files with train/val/test splits plus subsets)

### Dataset Description

The dataset targets low-resource data-to-text (D2T) generation, where models verbalize structured inputs into faithful messages. Each instance pairs:

- **Input:** a drone status dictionary (e.g., wind speed, battery level, altitude, pilot experience) and a time-ordered list of time-step objects near the flight path (type, distance, moving/in-path flags, timestamps).
- **Output:** an English handover message that surfaces only critical information (e.g., “Risk of physical damage! There is a castle in the drone’s flight path at a distance of 2.5 m.”).

The RAMP paper reports a low-resource setup with 1.6K data points (input–output pairs). Inputs average 541 tokens (range 274–2481), and outputs average 149 tokens (range 29–1263), reflecting long, information-dense inputs common in real-time settings. The dataset is organized to support retrieval-augmented few-shot prompting and modular prompt-tuning.

- **Curated by:** Ruitao Feng, Xudong Hong, Mayank Jobanputra, Mattes Warning, Vera Demberg
- **Language(s) (NLP):** English
- **License:** Apache License 2.0

Provenance: The content ultimately derives from a drone sensor/utterance corpus introduced by Chang et al. (LREC 2022). RAMP repackages/extends the resource with splits, filtered variants, and files that support retrieval-augmented and modular-prompt workflows.


## Dataset Structure

The dataset is distributed as CSV files. You’ll find:

- **Top-level splits**
  - `all_raw_train.csv`, `all_raw_val.csv`, `all_raw_test.csv`
  - Filtered counterparts: `*_filtered_with_oneshot.csv`
- **Domain subsets** (each with train/val/test): `urban_*`, `rural_*`, `ocean_*`, `desert_*`, `island_*`, `factory_*`, `disturbance_*`, `misc_*`
- **Auxiliary files:** e.g., `DroneDataset_keywords_paraphrase_latest - Sheet1.csv` (keywords/paraphrases), and compact `drone_v*` CSVs with minimal examples.

### Data Fields (columns)

Field names below reflect the `all_*` CSVs; JSON values are stored as strings.

- `summary` (string) — the handover message text. Often contains multiple segments with inline timestamps separated by `[SEP]`.
- `status` (JSON as string) — a single time-invariant status dict for the 10-s snapshot (e.g., wind speed, battery level, altitude, pilot experience, criticality flags).
- `timestep` (JSON as string) — a list of detected objects per second with attributes: name, Type, Moving, InPath, Distance, time_stamp, ID_obj.
- `related_status` (JSON as string) — the reduced set of status attributes most relevant to the handover (critical attributes).
- `related_timestep` (JSON as string) — the reduced set of time-step object info relevant to the handover.
- `related_sensor_data` (JSON as string) — bundles status and timestep for convenience (subsetted to the relevant parts).
- `templates` (string) — template-like text variants used for retrieval/one-shot prompting (if present).
- `link` (string URL) — pointer to a short video snapshot (Google Drive) illustrating the scenario (may be unavailable/archived).
- `source` (string/int) — internal identifier/index for traceability.

**Notes:** Some CSVs include long JSON strings; use robust CSV readers (set `quotechar` and `escapechar` appropriately). Filtered files remove noisy rows and provide a consistent one-shot example alongside each item for RAMP-style prompting.
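Because the JSON-bearing columns are stored as strings, a typical loading step is to parse them after reading the CSV. The sketch below uses only the standard library; the sample row is illustrative, not taken from the actual files, and shows the pattern for the `status` and `timestep` columns:

```python
import csv
import io
import json

# Illustrative RAMP-style row: `status` and `timestep` hold JSON as strings,
# with quotes doubled per standard CSV quoting.
sample_csv = io.StringIO(
    '"summary","status","timestep"\n'
    '"Risk of physical damage! [SEP] Obstacle ahead.",'
    '"{""wind speed"": 4, ""battery level"": 62}",'
    '"[{""Type"": ""castle"", ""Distance"": 2.5, ""InPath"": true}]"\n'
)

reader = csv.DictReader(sample_csv)  # the csv module unescapes doubled quotes
rows = []
for row in reader:
    row["status"] = json.loads(row["status"])      # -> dict
    row["timestep"] = json.loads(row["timestep"])  # -> list of object dicts
    rows.append(row)

print(rows[0]["status"]["battery level"])  # 62
print(rows[0]["timestep"][0]["Type"])      # castle
```

The same `json.loads` step applies to the `related_*` columns.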

### Splits

- **Train/Validation/Test:** provided explicitly (`all_raw_*`).
- **Environment-specific splits:** each environment (e.g., `urban_test.csv`) mirrors the global schema and supports domain-generalization studies.

## Uses

### Direct Use

- **Data-to-Text Generation:** train/evaluate models (T5, Flan-T5, LED, others) on long, structured inputs to generate faithful handover messages.
- **Retrieval-Augmented Prompting:** use the `filtered_with_oneshot` files or the `templates`/`related_*` columns to build RAG-style prompts from attribute-similar examples.
- **Hallucination Analysis:** evaluate faithfulness with metrics that reference both input and output (e.g., PARENT).
- **Domain Generalization:** use the environment splits to test seen/unseen domain transfer.
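To make the retrieval-augmented prompting use concrete, here is a toy sketch of attribute-based example selection: retrieve the training example whose status shares the most attribute-value pairs with the query, then prepend it as a one-shot demonstration. The records, the overlap measure, and the prompt format are illustrative assumptions, not the paper's exact method:

```python
def attribute_overlap(a: dict, b: dict) -> int:
    """Number of attribute-value pairs shared by two status dicts."""
    return len(set(a.items()) & set(b.items()))

# Hypothetical training pool (real data has the columns described above).
train_pool = [
    {"status": {"visibility": "low", "battery level": 20},
     "summary": "Battery critically low!"},
    {"status": {"visibility": "high", "wind speed": 9},
     "summary": "Strong wind detected!"},
]

query_status = {"visibility": "low", "battery level": 20, "altitude": 30}

# Retrieve the most attribute-similar example and build a one-shot prompt.
best = max(train_pool, key=lambda ex: attribute_overlap(ex["status"], query_status))
prompt = (
    f"Input: {best['status']}\nOutput: {best['summary']}\n\n"
    f"Input: {query_status}\nOutput:"
)
print(prompt)
```

A real pipeline would compute similarity over the parsed `status`/`related_status` fields of the full training split and feed the prompt to the generation model.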

### Out-of-Scope Use

- **Operational decision-making for real drones:** this resource is research-only; do not deploy generated text for safety-critical control.
- **Privacy-sensitive analytics:** no personal data is included; the dataset is not intended for identifying individuals or locations.

## Dataset Creation

### Curation Rationale

RAMP packages a low-resource D2T task that stresses faithfulness under long, structured inputs. The files facilitate retrieval-augmented few-shot prompting and modular prompt tuning (attribute-aware routing) to reduce hallucinations.

### Source Data

- **Origin:** drone sensor/utterance corpus introduced by Chang et al. (LREC 2022), comprising 10-s snapshots across 8 environments (disturbance, urban, rural, ocean, desert, island, factory, misc) with paired handover messages.
- **Attributes:** ~25 status/scene attributes (e.g., altitude, drone speed, battery level, visibility) plus per-second object lists (type, distance, moving/in-path).

### Data Collection and Processing

- **Status & time-step extraction:** status and object lists were manually annotated per video snapshot at 1 Hz.
- **Criticality mapping:** Description Logic (DL) rules/expressions identify critical attribute-value pairs; these appear in `related_status`/`related_timestep`.
- **Preprocessing for RAMP:** CSV packaging, filtered variants, and prompts/templates to support retrieval of attribute-similar examples and modular prompt routing.
- **Statistics (RAMP setup):** inputs average ~540.8 tokens; outputs average ~148.5 tokens; ~1.6k pairs in total.
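The actual criticality mapping uses DL reasoning (see the papers); as a rough intuition, it can be thought of as rules that flag attribute-value combinations such as "object in the flight path and close by." The predicate and threshold below are illustrative assumptions, not the dataset's rules:

```python
# Toy stand-in for DL-based criticality mapping: flag time-step objects that
# are in the flight path and closer than a threshold. The 5 m threshold and
# the records are made up for illustration.
DISTANCE_THRESHOLD_M = 5.0

def critical_objects(timestep: list) -> list:
    return [
        obj for obj in timestep
        if obj.get("InPath") and obj.get("Distance", float("inf")) < DISTANCE_THRESHOLD_M
    ]

objects = [
    {"Type": "castle", "Distance": 2.5, "InPath": True, "Moving": False},
    {"Type": "bird", "Distance": 40.0, "InPath": False, "Moving": True},
]
print(critical_objects(objects))  # only the castle is flagged
```

In the released CSVs, the output of the real mapping is already materialized in the `related_status`/`related_timestep` columns.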

### Who are the source data producers?

- **Videos & sensor records:** collected/curated by the original drone dataset authors (Chang et al., 2022).
- **Handover messages:** authored by the original dataset annotators; RAMP includes them verbatim, plus paraphrases/templates where indicated.

### Annotations

#### Annotation process

The original dataset includes human-authored handover messages and DL-based content selection cues. RAMP adds no new manual labels; it surfaces relevant subsets (related_*) and templated examples to support retrieval-augmented prompting. See the paper for details.

#### Who are the annotators?

Original dataset annotators (per Chang et al., 2022). RAMP curators: the RAMP paper authors.


### Personal and Sensitive Information

No personal or sensitive information is included. Links may point to scenario videos of environments/objects without identifiable persons. No worker IDs or personal metadata are included.


## Bias, Risks, and Limitations

- **Domain specificity:** drone scenarios; transfer to unrelated domains may be limited.
- **Language:** English-only messages.
- **Long inputs:** models with short context windows may truncate inputs; use long-context architectures (e.g., LED) or careful chunking.
- **Hallucinations:** despite DL cues and retrieval, faithful grounding is non-trivial; evaluate with input-aware metrics and human review.
- **Licensing of linked media:** some `link` URLs point to externally hosted videos; availability and terms may vary.
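As a quick sanity check during development (not a substitute for PARENT or human review), a naive proxy for grounding is the share of output tokens that also occur in the flattened input; a low score hints at possible hallucination, while a high score does not prove faithfulness. A minimal sketch, with made-up input/output strings:

```python
import re

def token_overlap_ratio(input_text: str, output_text: str) -> float:
    """Fraction of output word tokens that also appear in the input."""
    input_tokens = set(re.findall(r"\w+", input_text.lower()))
    output_tokens = re.findall(r"\w+", output_text.lower())
    if not output_tokens:
        return 0.0
    hits = sum(1 for tok in output_tokens if tok in input_tokens)
    return hits / len(output_tokens)

# Illustrative pair: flattened sensor input vs. a candidate handover message.
inp = "castle Distance 2.5 InPath true battery level 62"
out = "There is a castle in the flight path at a distance of 2.5 m."
print(round(token_overlap_ratio(inp, out), 2))  # 0.27
```

For real evaluation, flatten the `related_sensor_data` column into the reference text and prefer input-aware metrics such as PARENT.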

## How to Load

```python
from datasets import load_dataset

ds = load_dataset("tonyhong/ramp")
train = ds["train"]  # or use config/splits as hosted
```

Tip: if the viewer/loader errors on CSV quoting, download the files locally and load them with a robust parser (e.g., pandas with `engine="python"` and appropriate `quotechar`/`escapechar`).

## Citation

**RAMP paper (LREC-COLING 2024):**
Ruitao Feng, Xudong Hong, Mayank Jobanputra, Mattes Warning, and Vera Demberg. 2024. Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation.

**Upstream dataset (LREC 2022):**
Ernie Chang, Alisa Kovtunova, Stefan Borgwardt, Vera Demberg, Kathryn Chapman, and Hui-Syuan Yeh. 2022. Logic-Guided Message Generation from Raw Real-Time Sensor Data.

```bibtex
@inproceedings{feng2024ramp,
  title={Retrieval-Augmented Modular Prompt Tuning for Low-Resource Data-to-Text Generation},
  author={Feng, Ruitao and Hong, Xudong and Jobanputra, Mayank and Warning, Mattes and Demberg, Vera},
  booktitle={Proceedings of LREC-COLING 2024},
  year={2024}
}

@inproceedings{chang2022drone,
  title={Logic-Guided Message Generation from Raw Real-Time Sensor Data},
  author={Chang, Ernie and Kovtunova, Alisa and Borgwardt, Stefan and Demberg, Vera and Chapman, Kathryn and Yeh, Hui-Syuan},
  booktitle={Proceedings of LREC 2022},
  pages={6899--6908},
  year={2022}
}
```

## Dataset Card Authors

Xudong Hong (maintainer); with contributions from Ruitao Feng, Mayank Jobanputra, Mattes Warning, Vera Demberg.

## Dataset Card Contact

[email protected]


## Disclaimer

RAMP repackages data originating from a drone sensor/utterance corpus. The CSVs may contain long JSON strings; handle parsing carefully. Linked videos are provided for academic/research use; availability is not guaranteed. Do not use this dataset to operate real drones or for any safety-critical decision making.