
Data Directory Structure

The Ar-MUSA directory contains the annotated dataset, organized into batches and annotation teams. Each batch is labeled with a number, and the annotation team is indicated by a letter. The structure is as follows:

Ar-MUSA
├── Annotation 1a
│   ├── frames        # Contains the extracted frames for each record
│   ├── audios        # Contains the corresponding audio files
│   ├── transcripts   # Contains the transcripts of the audio files
│   └── annotations.csv  # CSV file with annotations for each record
│
├── Annotation 1b
│   ├── frames        # Contains the extracted frames for each record
│   ├── audios        # Contains the corresponding audio files
│   ├── transcripts   # Contains the transcripts of the audio files
│   └── annotations.csv  # CSV file with annotations for each record
│
└── Annotation 2a
    ├── frames        # Contains the extracted frames for each record
    ├── audios        # Contains the corresponding audio files
    ├── transcripts   # Contains the transcripts of the audio files
    └── annotations.csv  # CSV file with annotations for each record

Explanation:

  • Annotation Batches (1a, 1b, 2a, etc.):
    • The number represents the batch number (e.g., Batch 1, Batch 2).
    • The letter indicates the team responsible for the annotation (e.g., Team A, Team B).

Contents of Each Batch:

  1. frames/: A folder containing extracted video frames for each record.
  2. audios/: A folder with the corresponding audio files for the annotated records.
  3. transcripts/: A folder containing the text transcripts of the audio files.
  4. annotations.csv: A CSV file that includes the annotations for each record, detailing sentiment labels, sarcasm markers, and other relevant metadata.
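Below is a minimal sketch of how a single batch could be loaded and inspected with pandas. The batch path, and the assumption that each row of annotations.csv corresponds to one record with matching files under frames/, audios/, and transcripts/, are illustrative only; check the actual column names and file-naming convention in the downloaded data.

import os
import pandas as pd

# Hypothetical local path to one annotation batch (adjust to your download location).
BATCH_DIR = "Ar-MUSA/Annotation 1a"

# Load the per-record annotations (sentiment labels, sarcasm markers, and other metadata).
annotations = pd.read_csv(os.path.join(BATCH_DIR, "annotations.csv"))
print(annotations.columns.tolist())  # inspect the actual column names first
print(annotations.head())

# List the media and transcript files that accompany the annotations.
for subdir in ("frames", "audios", "transcripts"):
    files = sorted(os.listdir(os.path.join(BATCH_DIR, subdir)))
    print(f"{subdir}: {len(files)} files, e.g. {files[:3]}")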

License

The AR-MUSA dataset is licensed under the Academic Free License 3.0 (afl-3.0) and is provided for research purposes only. Any use of this dataset must comply with the terms of this license.

Research Paper

For detailed information on the dataset construction, validation procedures, and experimental results, please refer to our published paper:

Khaled, S., Ragab, M. E., Helmy, A. K., Medhat, W., & Mohamed, E. H. (2025). AR-MUSA: A multimodal benchmark dataset and evaluation framework for Arabic sentiment analysis. International Journal of Intelligent Engineering and Systems, 18(4), 30–44.
🔗 Read the paper


Citation

If you use the AR-MUSA dataset in your research, please cite the following paper:

@article{khaled2025ar,
  title={AR-MUSA: a multimodal benchmark dataset and evaluation framework for Arabic sentiment analysis},
  author={Khaled, S. and Ragab, M. E. and Helmy, A. K. and Medhat, W. and Mohamed, E. H.},
  journal={International Journal of Intelligent Engineering and Systems},
  volume={18},
  number={4},
  pages={30-44},
  year={2025},
  doi={10.22266/ijies2025.0531.03}
}