Hennara committed · verified · Commit 74e3934 · Parent: 45368e2

Update README.md with metadata

Files changed (1): README.md (+141, -150)

README.md (updated):

---
license: apache-2.0
language:
- ar
tags:
- multimodal
- arabic
- common-crawl
pretty_name: Misraj Structured Data Dump (MSDD)
---

# **📚 Misraj Structured Data Dump (MSDD)**

Misraj Structured Data Dump (MSDD) is a large-scale Arabic multimodal dataset created using our **WASM pipeline**. It is extracted and filtered from [Common Crawl](https://commoncrawl.org/) dumps and uniquely preserves the structural integrity of web content by providing Markdown output. This dataset aims to address the lack of high-quality, structured multimodal data for Arabic and to accelerate research in large language and multimodal models.

## **📌 Dataset Summary**

- **Source:** Subset of multiple Common Crawl dumps, processed with the WASM pipeline.
- **Documents:** 23 million documents.
- **Timeframe:** Common Crawl dump 10 of 2024 and dump 13 of 2025.
- **Languages:** Primarily Arabic (MSA and dialects).
- **Format:** Multimodal, with interleaved text and images in Markdown.
- **Domain Variety:** General web content.

## **💡 Usage**

### **📥 Loading the Dataset**

```python
from datasets import load_dataset

dataset = load_dataset("Misraj/msdd")
```
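
With 23 million documents, a full download can be heavy; the `datasets` library also supports streaming, which iterates over examples without materializing the whole dump locally. A minimal sketch, using the same `train` split as the example that follows:

```python
from datasets import load_dataset

# Stream examples on demand instead of downloading the full dataset.
streamed = load_dataset("Misraj/msdd", streaming=True)
for example in streamed["train"]:
    print(example["text"][:200])  # first 200 characters of the first document
    break
```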

### **📋 Example Usage**

```python
# Access the first example
example = dataset['train'][0]
print(f"Text: {example['text']}")
print(f"Images: {example['images']}")
print(f"Captions: {example['image_caption']}")
```

## **⚙️ The WASM Processing Pipeline**

The performance of large language models (LLMs) and large multimodal models (LMMs) depends heavily on the quality and scale of their pre-training datasets. For Arabic, the lack of high-quality multimodal datasets that preserve document structure has limited progress. Our **WASM pipeline** was developed to address this gap by processing Common Crawl and generating a structured, Markdown-based multimodal dataset. The pipeline is designed to preserve the structural integrity of web content while maintaining flexibility for both text-only and multimodal pre-training scenarios.

The core of the WASM pipeline is careful filtering at both the paragraph and the document level.

### **✅ _Paragraph-Level Filtering_**

Each paragraph in the corpus undergoes the following checks (sketched in code after this list):

- **Character Deduplication:** Removal of repeated characters beyond a threshold.
- **Word Repetition Ratio:** Filtering paragraphs with excessive word repetition.
- **Special Character Ratio:** Filtering based on the proportion of non-standard characters.
- **Language Identification:** Only Arabic paragraphs are retained.
- **Perplexity Scoring:** Content is scored with an in-house KenLM-based model trained on Wikipedia-like pages, Arabic Twitter data, and dialectal text (e.g., Lahjawi) to remove low-quality text.
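
WASM's exact rules and thresholds are not published in this card, so the following is only an illustrative sketch of the repetition, special-character, and language heuristics. All helper names and threshold values here are assumptions; character deduplication is omitted, and perplexity scoring is sketched after the document-level list.

```python
import re

ARABIC_CHARS = re.compile(r"[\u0600-\u06FF]")

def word_repetition_ratio(paragraph: str) -> float:
    """Fraction of words that repeat an earlier word in the paragraph."""
    words = paragraph.split()
    if not words:
        return 1.0
    return 1.0 - len(set(words)) / len(words)

def special_char_ratio(paragraph: str) -> float:
    """Fraction of characters that are neither alphanumeric nor whitespace."""
    if not paragraph:
        return 1.0
    special = sum(1 for c in paragraph if not (c.isalnum() or c.isspace()))
    return special / len(paragraph)

def arabic_char_ratio(paragraph: str) -> float:
    """Crude language-identification signal: share of Arabic-block characters."""
    if not paragraph:
        return 0.0
    return len(ARABIC_CHARS.findall(paragraph)) / len(paragraph)

def keep_paragraph(paragraph: str) -> bool:
    # Illustrative thresholds only; the actual WASM values are not published.
    return (
        word_repetition_ratio(paragraph) < 0.3
        and special_char_ratio(paragraph) < 0.25
        and arabic_char_ratio(paragraph) > 0.5
    )
```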

### **✅ _Document-Level Filtering_**

Each full document must pass:

- **Word Repetition Ratio:** As at the paragraph level, but with thresholds tuned for full documents.
- **Special Character Ratio:** Ensures no document is dominated by symbols, code snippets, or garbage text.
- **Language Identification:** Verifies that the document is primarily Arabic.
- **Perplexity Score:** Documents are filtered by perplexity thresholds to retain fluent, natural text (see the sketch below).
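
The in-house KenLM models themselves are not released with this card, so the sketch below approximates the perplexity filter with the open-source `kenlm` Python bindings; the model path and threshold are placeholders.

```python
import kenlm  # Python bindings for KenLM: https://github.com/kpu/kenlm

# Placeholder path: the in-house Arabic models used by WASM are not released.
model = kenlm.Model("arabic_web.arpa")

def keep_document(text: str, threshold: float = 1000.0) -> bool:
    """Keep documents whose perplexity under the language model is low enough."""
    # kenlm derives perplexity from the total log10 probability of the text;
    # lower perplexity indicates more fluent, natural-looking text.
    return model.perplexity(text) < threshold
```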

## **📂 Dataset Structure**

The dataset has three main columns to support multimodal tasks. The text is interleaved with image placeholders, allowing for rich text-and-image documents.

- **text**: A string containing the textual content. The special token `<image>` denotes the position where an image should be inserted.
- **images**: A list of image URLs (strings). These images correspond sequentially to the `<image>` tokens in the `text` field.
- **image_caption**: A list of strings, where each string is a caption for the corresponding image in the `images` list. If an image has no caption, the list contains an empty string `''` at that position.

The dataset has the following features:
```json
{
  "text": {
    "dtype": "string",
    "_type": "Value"
  },
  "images": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  },
  "image_caption": {
    "feature": {
      "dtype": "string",
      "_type": "Value"
    },
    "_type": "Sequence"
  }
}
```
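
Because the `<image>` tokens align positionally with the `images` and `image_caption` lists, a record can be unrolled into an interleaved sequence of text and image segments. A minimal, hypothetical helper (not part of any released tooling):

```python
def interleave(example: dict) -> list[dict]:
    """Split `text` on <image> tokens and pair each with its URL and caption."""
    segments = []
    chunks = example["text"].split("<image>")
    for i, chunk in enumerate(chunks):
        if chunk:
            segments.append({"type": "text", "content": chunk})
        if i < len(example["images"]):  # one image follows each non-final chunk
            segments.append({
                "type": "image",
                "url": example["images"][i],
                "caption": example["image_caption"][i],  # '' when uncaptioned
            })
    return segments
```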

## **🚦 Quality Checks**

Dataset quality was validated using:

- In-house KenLM-based Arabic models for perplexity checks.
- Manual inspection of samples.
- A pipeline inspired by [OBELICS](https://github.com/huggingface/OBELICS), with custom enhancements.
- Comparative analysis against major existing dataset-processing pipelines to validate design choices.

## **🔍 Intended Use**

This dataset is intended for:

- Training large-scale multimodal Arabic language models.
- Research on Arabic NLP, including dialect modeling and low-resource language studies.

## **🌐 Availability & Reproducibility**

To support future research and ensure reproducibility, we are publicly releasing this representative dataset dump. The WASM processing pipeline for Arabic will also be made available to the community.

## **📝 Citation**

If you use this dataset, please cite:
```bibtex
@misc{misraj2025msdd,
  title        = {Misraj Structured Data Dump (MSDD)},
  author       = {Khalil Hennara and Muhammad Hreden and Mohamed Motasim Hamed and Zeina Aldallal and Sara Chrouf and Safwan AlModhayan and Ahmed Bustati},
  year         = {2025},
  publisher    = {MisrajAI},
  howpublished = {\url{https://huggingface.co/datasets/Misraj/msdd}}
}
```