# DATAD: Driver Attention in Takeover of Autonomous Driving

## Dataset Overview
This dataset provides multimodal recordings for analyzing driver attention during takeover scenarios in autonomous driving.
It includes gaze–object annotations, per-frame feature vectors, and instance segmentation outputs, supporting research in driver monitoring, gaze estimation, takeover performance, and semantic scene understanding.
## Data Organization and Participants
Data are organized per participant, with each participant’s data compressed and uploaded individually in 7Z format.
- Tester1–Tester10: university students with driving experience
- Tester11–Tester30: experienced drivers (ride-hailing drivers)
The two participant groups were exposed to different scenario designs.
## Scenario Design

### Tester1–Tester10 (Student Drivers)
Two major categories of explicit high-risk scenarios, each containing:
- One primary risk
- One secondary risk
Scenario categories:
- Road construction ahead
- Sudden intrusion of non-motorized vehicles
Each category includes multiple concrete scenarios generated by varying background vehicle behaviors.
### Tester11–Tester30 (Experienced Drivers)
Progressive risk scenarios with latent and gradually emerging hazards, divided into two major categories:
- Right-side vehicle squeezing lane change + left-side non-motorized sudden appearance
- Left-side non-motorized vehicle intrusion + front traffic accident
Similarly, each category is instantiated into multiple scenarios by adjusting background traffic behaviors.
Overall, the dataset enables comparative analysis of driver attention and takeover behavior across driver experience levels and scenario complexities.
## 📂 Dataset Structure
```
Tester1/
├── Gaze_object_output/
│   ├── Stare_obj_0.csv              # Gaze target data for scene 0
│   ├── Stare_obj_1.csv
│   └── ...
│
├── Tester1_feature_csv/
│   ├── feature_0.csv                # Feature vectors for scene 0
│   ├── feature_1.csv
│   └── ...
│
├── Tester1_IS/
│   ├── Tester1_0_IS/
│   │   ├── frame_output/            # Instance segmentation images (PNG frames)
│   │   │   ├── frame_1.png
│   │   │   └── ...
│   │   ├── obj_pixel_table.csv      # Pixel-level statistics for each segmented vehicle
│   │   └── processed_screenshot.png # Top-down validation overview
│   ├── Tester1_1_IS/
│   └── ...
```
### Gaze_object_output/

This directory contains one gaze–object annotation file per scene.

**File format:** `Stare_obj_<scene_id>.csv`
- `<scene_id>`: scene index (starting from 0)
- Each row corresponds to one time step / frame
**Gaze Target Information**
- `Stare_obj`: ID of the object being gazed at; `0` indicates background or no valid gaze target
- `Stare_area`: coarse gaze region label on the screen

The screen resolution is 5740 × 1010 pixels, and gaze regions are defined as:

| Label | Description | Region (pixels) |
|---|---|---|
| LF | Left front view | [0, 0] – [2870, 1010] |
| RF | Right front view | [2870, 0] – [5740, 1010] |
| LB | Left side mirror | [700, 570] – [1370, 1000] |
| RB | Right side mirror | [4719, 560] – [5389, 990] |
| MB | Rear-view mirror | [2890, 210] – [3540, 400] |
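As a sketch of how these region definitions can be used, the following hypothetical helper (not part of the dataset tooling) maps a screen-space gaze point to one of the labels above. The mirror regions overlap the front views, so they are checked first; points outside every region fall back to `"unknown"`.

```python
# Region bounds (x0, y0, x1, y1) copied from the table above.
# Mirror regions overlap the front views, so they come first.
REGIONS = [
    ("LB", (700, 570, 1370, 1000)),   # left side mirror
    ("RB", (4719, 560, 5389, 990)),   # right side mirror
    ("MB", (2890, 210, 3540, 400)),   # rear-view mirror
    ("LF", (0, 0, 2870, 1010)),       # left front view
    ("RF", (2870, 0, 5740, 1010)),    # right front view
]

def gaze_region(x: float, y: float) -> str:
    """Return the coarse region label for a screen-space gaze point."""
    for label, (x0, y0, x1, y1) in REGIONS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return "unknown"
```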
**Vehicle Screen Positions**
- `Car{i}_screen_X`, `Car{i}_screen_Y`: 2D screen-space coordinates of risk-relevant vehicles
- Coordinates are aligned with the same frame as the gaze annotations
- Missing vehicles are filled with `0`

**Number of risk objects per frame**
- For the first 10 participants, each frame contains up to 9 risk objects
- For the remaining 20 participants, scenes 5–9 contain 8 risk objects
- Columns are kept consistent across files; unused slots are zero-padded
These files jointly describe where the driver is looking and where potential risk objects are located on the screen at each time step, and are time-aligned with other modalities in the dataset.
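A minimal sketch of consuming one annotation row, assuming the `Car{i}_screen_X` / `Car{i}_screen_Y` column names above and the zero-padding convention for absent vehicles (the example row itself is made up):

```python
def present_vehicles(row: dict, max_objects: int = 9) -> dict:
    """Collect screen positions of risk vehicles present in one row.

    Absent vehicles are zero-padded in the CSV, so (0, 0) entries
    are treated as "no vehicle" and skipped.
    """
    positions = {}
    for i in range(1, max_objects + 1):
        x = float(row.get(f"Car{i}_screen_X", 0))
        y = float(row.get(f"Car{i}_screen_Y", 0))
        if (x, y) != (0.0, 0.0):
            positions[i] = (x, y)
    return positions

# Toy row mimicking one line of a Stare_obj_<scene_id>.csv file:
row = {"Stare_obj": "3",
       "Car1_screen_X": "1500", "Car1_screen_Y": "480",
       "Car3_screen_X": "0", "Car3_screen_Y": "0"}
```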
### Tester1_feature_csv/

This directory contains per-frame driving state and scene feature files for each scene.

**File format:** `feature_<scene_id>.csv`
- `<scene_id>`: scene index (starting from 0)
- Each row corresponds to one time step / frame
- Rows are time-aligned with gaze annotations and instance segmentation outputs
**Ego Vehicle and Driver State**
- `time`: timestamp (Unix time)
- `steering`: steering wheel angle
- `accelerator`: accelerator pedal value
- `brake`: brake pedal value
- `TOR_flag`: take-over request indicator
- `Handchange_flag`: handover / control change indicator
- `Collision_flag`: collision indicator (binary)
**Ego Vehicle Position**
- `main_car_id`: ID of the ego vehicle
- `main_car_x`, `main_car_y`: ego vehicle position in world coordinates
**Surrounding Risk Object Features**

For each risk-relevant object in the scene, features are stored using indexed columns:
- Object indices: `Car1` … `Car9`
- Typical attributes include:
  - World-space position
  - Screen-space position (`Car{i}_screen_X`, `Car{i}_screen_Y`)
  - Additional kinematic or geometric features

If a risk object is not present in a frame, its corresponding feature values are filled with `0`.
**Gaze Point Projection**
- `ScreenPoint2D_x`, `ScreenPoint2D_y`: projected 2D gaze point on the screen, aligned with the gaze annotations
These files provide low-level driving signals, ego vehicle states, and scene-level object features, and are intended to be used jointly with:
- `Gaze_object_output/` (gaze–object annotations)
- `Tester*_IS/` (instance segmentation outputs)
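As one illustration of a takeover-performance metric, reaction time can be estimated from the flag columns, under the assumption that `TOR_flag` and `Handchange_flag` are binary columns that switch from 0 to 1 at the take-over request and at the actual control handover. This is a sketch, not part of the dataset tooling:

```python
import csv
import io

def takeover_reaction_time(rows):
    """Time from first TOR_flag == 1 to first Handchange_flag == 1.

    rows: iterable of dicts with 'time', 'TOR_flag', 'Handchange_flag'
    (e.g. csv.DictReader over a feature_<scene_id>.csv file).
    Returns None if either event never occurs.
    """
    tor_t = hand_t = None
    for row in rows:
        if tor_t is None and int(row["TOR_flag"]) == 1:
            tor_t = float(row["time"])
        if hand_t is None and int(row["Handchange_flag"]) == 1:
            hand_t = float(row["time"])
    if tor_t is None or hand_t is None:
        return None
    return hand_t - tor_t

# Synthetic rows standing in for a real feature CSV:
sample = io.StringIO(
    "time,TOR_flag,Handchange_flag\n"
    "100.0,0,0\n"
    "100.1,1,0\n"
    "100.2,1,0\n"
    "100.3,1,1\n"
)
rt = takeover_reaction_time(csv.DictReader(sample))
```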
### Tester*_IS/Tester*_&lt;scene_id&gt;_IS/
This directory contains instance segmentation (IS) outputs for each scene, generated using CARLA 0.9.15.
Each subfolder corresponds to one scene and includes the following files:
**frame_output/**
- A sequence of PNG images representing instance segmentation foregrounds
- Each image corresponds to one frame, and is row-aligned with the CSV files in other modalities
- This design enables precise frame-level alignment and multimodal analysis
**obj_pixel_table.csv**
- A lookup table mapping vehicle IDs to instance segmentation pixel values
- Required because in CARLA 0.9.15, instance segmentation assigns random pixel values to objects in each run
- This file provides the ground-truth correspondence between vehicles and their pixel labels for this specific scene
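Under the assumption that `obj_pixel_table.csv` yields a mapping from instance-segmentation pixel values to vehicle IDs (the exact column names should be checked against the file header), per-vehicle pixel counts for one frame can be sketched as:

```python
from collections import Counter

def pixels_per_vehicle(frame_pixels, pixel_to_vehicle):
    """Count the pixels each vehicle occupies in one segmentation frame.

    frame_pixels: iterable of per-pixel instance values (e.g. a
    flattened frame_*.png decoded to instance labels).
    pixel_to_vehicle: dict mapping pixel value -> vehicle ID, built
    from this scene's obj_pixel_table.csv (values are random per run,
    so the mapping is only valid for this specific scene).
    """
    counts = Counter(frame_pixels)
    return {vid: counts.get(pv, 0) for pv, vid in pixel_to_vehicle.items()}

# Toy frame: pixel value 7 -> vehicle 42, pixel value 9 -> vehicle 43
frame = [0, 0, 7, 7, 7, 9, 0, 9]
mapping = {7: 42, 9: 43}
```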
**processed_screenshot.png**
- A top-down overview image captured at the start of the takeover recording
- Visualizes the vehicle–pixel correspondence, where connecting lines indicate matched vehicles and pixel labels
- This file is intended for validation and debugging only
**Important note:**
If abnormal vertical lines or incorrect segmentation of non-motorized objects appear in `processed_screenshot.png`, the data in that folder should not be used, as this indicates unreliable instance segmentation results for the scene.
Together, these files support pixel-level, object-aware analysis of driver attention and scene context, and are designed to be used jointly with gaze annotations and feature CSV files.