---
license: apache-2.0
language:
- ar
tags:
- multimodal
- arabic
- common-crawl
pretty_name: Misraj Structured Data Dump (MSDD)
---
# Misraj Structured Data Dump (MSDD)
Misraj Structured Data Dump (MSDD) is a large-scale Arabic multimodal dataset created using our WASM pipeline. It is extracted and filtered from the Common Crawl dumps and uniquely preserves the structural integrity of web content by providing markdown output. This dataset aims to address the lack of high-quality, structured multimodal data for Arabic and accelerate research in large language and multimodal models.
## Dataset Summary
- Source: Subset from multiple Common Crawl dumps, processed with the WASM pipeline.
- Documents: 23 million documents.
- Timeframe: two Common Crawl dumps, dump 10 of 2024 and dump 13 of 2025.
- Languages: Primarily Arabic (MSA and dialects).
- Format: Multimodal format with interleaved text and images in Markdown.
- Domain Variety: General web content.
## Usage

### Loading the Dataset

```python
from datasets import load_dataset

dataset = load_dataset("Misraj/msdd")
```
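With 23 million documents, you may not want to materialize the full dump locally. `load_dataset` also supports streaming, which iterates over examples without downloading everything first:

```python
from datasets import load_dataset

# Stream examples instead of downloading the whole dataset to disk.
streamed = load_dataset("Misraj/msdd", split="train", streaming=True)
first = next(iter(streamed))
```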
### Example Usage

```python
# Access the first example
example = dataset['train'][0]

print(f"Text: {example['text']}")
print(f"Images: {example['images']}")
print(f"Captions: {example['image_caption']}")
```
## The WASM Processing Pipeline
The performance of large language models (LLMs) and large multimodal models (LMMs) depends heavily on the quality and scale of their pre-training datasets. For Arabic, the lack of high-quality multimodal datasets that preserve document structure has limited progress. Our WASM pipeline was developed to address this gap by processing Common Crawl and generating a structured, markdown-based multimodal dataset. The pipeline is designed to preserve the structural integrity of web content while maintaining flexibility for both text-only and multimodal pre-training scenarios.
The core of the WASM pipeline involves careful filtering at both the paragraph and document level.
### Paragraph-Level Filtering

Each paragraph in the corpus undergoes the following checks (a minimal sketch of two of them follows the list):
- Character Deduplication: Removal of repeated characters beyond a threshold.
- Word Repetition Ratio: Filtering paragraphs with excessive word repetitions.
- Special Character Ratio: Filtering based on the proportion of non-standard characters.
- Language Identification: Only Arabic paragraphs are retained.
- Perplexity Scoring: Content scored using an in-house KenLM-based model trained on Wikipedia-like pages, Arabic Twitter data, and dialectal text (e.g., Lahjawi), to remove low-quality text.
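To make the repetition and special-character checks concrete, here is a minimal sketch of how they could be implemented. The threshold values and function names are illustrative assumptions, not the WASM pipeline's published parameters.

```python
# Hypothetical thresholds; the WASM pipeline's actual values are not published here.
MAX_WORD_REPETITION_RATIO = 0.3
MAX_SPECIAL_CHAR_RATIO = 0.4

def word_repetition_ratio(paragraph: str) -> float:
    """Fraction of words that repeat an earlier word in the paragraph."""
    words = paragraph.split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

def special_char_ratio(paragraph: str) -> float:
    """Fraction of non-whitespace characters that are not letters or digits."""
    chars = [c for c in paragraph if not c.isspace()]
    if not chars:
        return 1.0
    return sum(1 for c in chars if not c.isalnum()) / len(chars)

def keep_paragraph(paragraph: str) -> bool:
    """Apply both ratio checks with the hypothetical thresholds above."""
    return (word_repetition_ratio(paragraph) <= MAX_WORD_REPETITION_RATIO
            and special_char_ratio(paragraph) <= MAX_SPECIAL_CHAR_RATIO)
```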
### Document-Level Filtering

Each full document must pass the following checks (a perplexity-scoring sketch follows the list):
- Word Repetition Ratio: Similar to paragraph level, but with different thresholds for full documents.
- Special Character Ratio: Ensures no document is dominated by symbols, code snippets, or garbage text.
- Language Identification: Verifies the document is primarily Arabic.
- Perplexity Score: Documents are filtered based on perplexity thresholds to maintain fluent, natural text.
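The in-house KenLM models and thresholds are not released with this card, but with the open-source `kenlm` Python bindings a document-level perplexity filter could look roughly like this. The model path and cutoff below are hypothetical placeholders.

```python
import kenlm  # open-source KenLM Python bindings

# Hypothetical placeholders; the in-house model and threshold are not public.
MODEL_PATH = "arabic_klm.binary"
MAX_PERPLEXITY = 1000.0

model = kenlm.Model(MODEL_PATH)

def keep_document(text: str) -> bool:
    """Keep a document only if its KenLM perplexity is below the cutoff."""
    # KenLM expects whitespace-tokenized input on a single line.
    normalized = " ".join(text.split())
    return model.perplexity(normalized) <= MAX_PERPLEXITY
```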
## Dataset Structure
The dataset is structured with three main columns to support multimodal tasks. The text is interleaved with image placeholders, allowing for rich text-and-image documents.
- `text`: A string containing the textual content. The special token `<image>` is used to denote the position where an image should be inserted.
- `images`: A list of image URLs (strings). These images correspond sequentially to the `<image>` tokens in the `text` field.
- `image_caption`: A list of strings, where each string is a caption for the corresponding image in the `images` list. If an image does not have a caption, the list contains an empty string `''` at that position.
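Since the image URLs map one-to-one onto the `<image>` tokens, an example can be unrolled into an interleaved text/image sequence. A minimal sketch:

```python
def interleave(example: dict) -> list[dict]:
    """Split the text on <image> tokens and pair each slot with its URL and caption."""
    segments = example["text"].split("<image>")
    items = []
    for i, segment in enumerate(segments):
        if segment:
            items.append({"type": "text", "value": segment})
        if i < len(example["images"]):
            items.append({
                "type": "image",
                "url": example["images"][i],
                "caption": example["image_caption"][i],  # '' if no caption
            })
    return items
```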
The dataset has the following features:

```json
{
  "text": {
    "dtype": "string",
    "_type": "Value"
  },
  "images": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  },
  "image_caption": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  }
}
```
## Quality Checks
The dataset quality was validated using:
- In-house KenLM-based Arabic models for perplexity checks.
- Manual inspection of samples.
- A pipeline inspired by OBELICS, with custom enhancements.
- Comparative analysis against major existing dataset processing pipelines to validate design choices.
## Intended Use
This dataset is intended for:
- Training large-scale multimodal Arabic language models.
- Research on Arabic NLP, including dialect modeling and low-resource language studies.
## Availability & Reproducibility
To support future research and ensure reproducibility, we are publicly releasing this representative dataset dump. The WASM processing pipeline for Arabic will also be made available to the community.
## Citation
If you use this dataset, please cite:
```bibtex
@misc{misraj2025msdd,
  title        = {Misraj Structured Data Dump (MSDD)},
  author       = {Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan and Ahmed Bustati},
  year         = {2025},
  publisher    = {MisrajAI},
  howpublished = {\url{https://huggingface.co/datasets/Misraj/msdd}}
}
```