---
license: apache-2.0
---

# **📚 Misraj Structured Data Dump (MSDD)**

Misraj Structured Data Dump (MSDD) is a large-scale Arabic multimodal dataset created with our **WASM pipeline**. It is extracted and filtered from [Common Crawl](https://commoncrawl.org/) dumps and uniquely preserves the structural integrity of web content by providing Markdown output. The dataset aims to address the lack of high-quality, structured multimodal data for Arabic and to accelerate research on large language and multimodal models.

## **📌 Dataset Summary**

- **Source:** Subset of multiple Common Crawl dumps, processed with the WASM pipeline.
- **Documents:** 23 million documents.
- **Timeframe:**
  - 2024: Dump 10
  - 2025: Dump 13
- **Languages:** Primarily Arabic (MSA and dialects).
- **Format:** Multimodal, with interleaved text and images in Markdown.
- **Domain Variety:** General web content.

## **💡 Usage**

### **📥 Loading the Dataset**

```python
from datasets import load_dataset

dataset = load_dataset("Misraj/msdd")
```

### **📋 Example Usage**

```python
# Access the first example
example = dataset["train"][0]
print(f"Text: {example['text']}")
print(f"Images: {example['images']}")
print(f"Captions: {example['image_caption']}")
```

## **⚙️ The WASM Processing Pipeline**

The performance of large language models (LLMs) and large multimodal models (LMMs) depends heavily on the quality and scale of their pre-training datasets. For Arabic, the lack of high-quality multimodal datasets that preserve document structure has limited progress. Our **WASM pipeline** was developed to address this gap by processing Common Crawl and generating a structured, Markdown-based multimodal dataset. The pipeline is designed to preserve the structural integrity of web content while maintaining flexibility for both text-only and multimodal pre-training scenarios.

The core of the WASM pipeline is careful filtering at both the paragraph and the document level.

### **✅ _Paragraph-Level Filtering_**

Each paragraph in the corpus undergoes the following checks:

- **Character Deduplication:** Removal of repeated characters beyond a threshold.
- **Word Repetition Ratio:** Filtering of paragraphs with excessive word repetition.
- **Special Character Ratio:** Filtering based on the proportion of non-standard characters.
- **Language Identification:** Only Arabic paragraphs are retained.
- **Perplexity Scoring:** Content is scored with an in-house KenLM-based model trained on Wikipedia-like pages, Arabic Twitter data, and dialectal text (e.g., Lahjawi) to remove low-quality text.
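
To make these checks concrete, here is a minimal sketch of how the heuristic filters could look. All thresholds, the Arabic character range, and the function names are illustrative assumptions, not the pipeline's actual values; language identification and perplexity scoring are sketched after the document-level list below.

```python
import re

# Illustrative thresholds -- the actual WASM cut-offs are not published here.
MAX_CHAR_RUN = 10            # max allowed run of a single repeated character
MAX_WORD_REP_RATIO = 0.30    # max share of word occurrences that are duplicates
MAX_SPECIAL_CHAR_RATIO = 0.30

ARABIC_CHAR = re.compile(r"[\u0600-\u06FF]")  # basic Arabic block, an assumption

def char_run_ok(paragraph: str) -> bool:
    """Reject paragraphs containing a character run longer than MAX_CHAR_RUN."""
    return re.search(r"(.)\1{" + str(MAX_CHAR_RUN) + r",}", paragraph) is None

def word_repetition_ratio(paragraph: str) -> float:
    """Fraction of word occurrences that repeat an earlier word."""
    words = paragraph.split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

def special_char_ratio(paragraph: str) -> float:
    """Share of characters that are neither Arabic letters, digits, nor whitespace."""
    if not paragraph:
        return 0.0
    special = sum(
        1 for c in paragraph
        if not (ARABIC_CHAR.match(c) or c.isdigit() or c.isspace())
    )
    return special / len(paragraph)

def keep_paragraph(paragraph: str) -> bool:
    """Apply the heuristic paragraph-level checks in sequence."""
    return (
        char_run_ok(paragraph)
        and word_repetition_ratio(paragraph) <= MAX_WORD_REP_RATIO
        and special_char_ratio(paragraph) <= MAX_SPECIAL_CHAR_RATIO
    )
```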

### **✅ _Document-Level Filtering_**

Each full document must pass:

- **Word Repetition Ratio:** As at the paragraph level, but with thresholds tuned for full documents.
- **Special Character Ratio:** Ensures no document is dominated by symbols, code snippets, or garbage text.
- **Language Identification:** Verifies that the document is primarily Arabic.
- **Perplexity Score:** Documents are filtered against perplexity thresholds to retain fluent, natural text.
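
The perplexity checks at both levels rely on a KenLM n-gram model. The in-house Arabic model described above is not bundled with this dataset, so the sketch below assumes a stand-in model file and an illustrative threshold; only `kenlm.Model` and its `perplexity` method come from the actual KenLM Python bindings.

```python
import kenlm  # pip install kenlm

# Hypothetical model file: any KenLM .arpa or .bin model can stand in for the
# in-house Arabic model, which is not released with the dataset.
model = kenlm.Model("arabic_web.bin")

MAX_PERPLEXITY = 1000.0  # illustrative threshold, not the pipeline's value

def keep_document(paragraphs: list[str]) -> bool:
    """Keep a document whose overall perplexity stays under the threshold."""
    return model.perplexity(" ".join(paragraphs)) <= MAX_PERPLEXITY
```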

## **📂 Dataset Structure**

The dataset has three main columns to support multimodal tasks. The text is interleaved with image placeholders, allowing for rich text-and-image documents.

- **text**: A string containing the textual content. The special token `<image>` marks the position where an image should be inserted.
- **images**: A list of image URLs (strings). The images correspond sequentially to the `<image>` tokens in the text field.
- **image_caption**: A list of strings, where each string is the caption of the corresponding image in the images list. If an image does not have a caption, the list contains an empty string `''` at that position.

The dataset has the following features:
```json
{
  "text": {
    "dtype": "string",
    "_type": "Value"
  },
  "images": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  },
  "image_caption": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  }
}
```
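
Because the `<image>` tokens in `text` correspond positionally to the entries in `images` and `image_caption`, a document can be walked as an ordered stream of text and image segments. A minimal sketch (the helper function is ours, not part of the dataset):

```python
def iter_segments(example):
    """Yield ('text', chunk) and ('image', url, caption) records in document order."""
    chunks = example["text"].split("<image>")
    images = example["images"]
    captions = example["image_caption"]
    for i, chunk in enumerate(chunks):
        if chunk:
            yield ("text", chunk)
        # The i-th <image> token sits between chunks[i] and chunks[i + 1].
        if i < len(images):
            caption = captions[i] if i < len(captions) else ""
            yield ("image", images[i], caption)

for segment in iter_segments(dataset["train"][0]):
    print(segment[0], segment[1][:80])
```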

## **🚦 Quality Checks**

The dataset quality was validated using:

- In-house KenLM-based Arabic models for perplexity checks.
- Manual inspection of samples.
- A pipeline inspired by [OBELICS](https://github.com/huggingface/OBELICS), with custom enhancements.
- Comparative analysis against major existing dataset-processing pipelines to validate design choices.

## **🔍 Intended Use**

This dataset is intended for:

- Training large-scale multimodal Arabic language models.
- Research on Arabic NLP, including dialect modeling and low-resource language studies.

## **🌐 Availability & Reproducibility**

To support future research and ensure reproducibility, we are publicly releasing this representative dataset dump. The WASM processing pipeline for Arabic will also be made available to the community.

## **📝 Citation**

If you use this dataset, please cite:

```bibtex
@misc{misraj2025msdd,
  title        = {Misraj Structured Data Dump (MSDD)},
  author       = {Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan and Ahmed Bustati},
  year         = {2025},
  publisher    = {MisrajAI},
  howpublished = {\url{https://huggingface.co/datasets/Misraj/msdd}}
}
```