# Tunisian Derja Unified Raw Corpus

## Dataset Description
- **Repository:** hamzabouajila/tunisian-derja-unified-raw-corpus
- **Paper:** Not yet published; this dataset card serves as the primary documentation
- **Point of Contact:** Hamza Bouajila
- **License:** CC-BY-SA-4.0
### Dataset Summary

The Tunisian Derja Unified Raw Corpus is a collection of 802,659 text examples in Tunisian Arabic (Derja), a low-resource Arabic dialect widely spoken in Tunisia. The corpus aggregates data from multiple sources, including social media, conversational transcripts, chatbot dialogues, and other publicly available Derja datasets, and has been deduplicated across sources to reduce redundancy, offering a diverse, large-scale resource for natural language processing (NLP) and linguistic research.

The dataset is provided in raw form, preserving the natural variation of Derja, including code-switching with English and French. A curated (cleaned and normalized) version is in progress to improve usability for task-specific applications.

### Supported Tasks and Leaderboards
- **Text Generation:** pretraining or fine-tuning language models (e.g., adapting BERT- or GPT-style models for Derja).
- **Text Classification:** sentiment analysis, topic modeling, or intent detection on Derja texts (some tasks require additional annotations).
- **Translation:** machine translation between Derja and Modern Standard Arabic, English, or French.
- **Linguistic Research:** analyzing dialectal variation, orthography, or code-switching patterns (Arabic/English/French).
- **Speech Recognition:** adapting automatic speech recognition (ASR) systems for Derja (if paired with audio data).
No leaderboards are currently associated with this dataset; contributions to benchmarking are welcome.

### Languages
- **Primary language:** Tunisian Arabic (Derja), written in Arabic script with some Latin-script transliterations.
- **Code-switching:** includes texts mixing Arabic with English and French, reflecting natural usage in Tunisian contexts.
- **Language detection:** preliminary analysis suggests ~80-90% of texts are predominantly Arabic (Derja); non-Arabic content may be present due to code-switching or source diversity.
## Dataset Structure

### Data Instances

Each instance contains a single field:
- `text`: a string containing a Derja text sample (e.g., a tweet, dialogue turn, or narrative).
Example:

```json
{ "text": "شد إطفل من يدو وشق بيه إلكياس" }
```
### Data Fields
- `text`: string, the raw text in Tunisian Derja, potentially including code-switched English/French or Latin-script transliterations.
### Data Splits
- **Train:** 802,659 examples (the entire dataset).
Future curated versions may include train/validation/test splits.

## Dataset Creation

### Curation Rationale

Tunisian Derja is underrepresented in NLP resources compared to Modern Standard Arabic, limiting the development of dialect-specific models. This corpus unifies multiple Derja datasets into a large-scale, deduplicated resource for researchers and practitioners. The raw format preserves natural variation, while a planned curated version will address noise and inconsistencies.

### Source Data

The corpus aggregates the following datasets from Hugging Face:
- linagora/Tunisian_Derja_Dataset (all 12 configurations, e.g., Derja_tunsi, TunSwitchCodeSwitching)
- AzizBelaweid/Tunisian_Language_Dataset (mixed-domain texts)
- arbml/Tunisian_Dialect_Corpus (social media, primarily tweets)
- hamzabouajila/Sample_Tunisiya_Dataset (search results)
- abdouuu/tunisian_chatbot_data (conversational dialogues)
- khaled123/tuniset (transcripts)
Additional sources include news and personal communications. All sources were merged into a single `text` column and deduplicated.

### Initial Data Deduplication
- **Process:** exact duplicates were removed across all sources using pandas' `drop_duplicates` on the `text` column.
- **Result:** reduced redundancy, yielding 802,659 unique examples.
- **Note:** near-duplicates and noisy texts (e.g., URLs, emojis) may remain; these will be addressed in the curated version.
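The merge-and-deduplicate step can be sketched with pandas; the frames below are hypothetical stand-ins for the real source datasets, each already reduced to a single `text` column:

```python
import pandas as pd

# Hypothetical stand-ins for the individual source datasets.
sources = [
    pd.DataFrame({"text": ["شنوة الأحوال؟", "برشة مزيان"]}),
    pd.DataFrame({"text": ["برشة مزيان", "يعطيك الصحة"]}),
]

# Merge all sources into one frame, then drop exact duplicates on "text".
merged = pd.concat(sources, ignore_index=True)
unified = merged.drop_duplicates(subset="text").reset_index(drop=True)

print(len(merged), len(unified))  # 4 rows before, 3 after
```

Note that `reset_index(drop=True)` keeps pandas from carrying a stale index along, which would otherwise surface as a stray `__index_level_0__` column when the frame is exported to Parquet.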
## Considerations for Using the Data

### Social Impact of Dataset

This dataset enables NLP advancements for Tunisian Derja, supporting applications such as culturally relevant chatbots, sentiment analysis of Tunisian social media, and dialect-specific ASR. It promotes inclusivity by providing resources for a low-resource dialect, benefiting Tunisian communities and researchers.

### Discussion of Biases
- **Source bias:** social media sources (e.g., tweets) may overrepresent informal or negative sentiment (common in arbml's corpus).
- **Code-switching:** the presence of English/French may skew models if not filtered for Derja-only tasks.
- **Orthography:** mixed Arabic/Latin scripts may require normalization for consistent processing.
Users should analyze the dataset for task-specific biases (e.g., using toxicity detection tools such as detoxify).

### Other Known Limitations
- **Raw nature:** contains potential noise (e.g., URLs, emojis, texts shorter than 10 characters).
- **Lack of annotations:** no labels for tasks like sentiment or intent; users must add annotations for supervised learning.
- **Language purity:** some non-Derja texts (e.g., pure English or French) due to code-switching sources.
## Additional Information

### Dataset Curators

- Hamza Bouajila (primary curator, contact via Hugging Face)
### Licensing Information

Licensed under CC-BY-SA-4.0, which permits use with attribution and share-alike requirements.

### Citation Information

If you use this dataset, please cite:

```bibtex
@dataset{bouajila2025tunisian,
  author = {Hamza Bouajila},
  title  = {Tunisian Derja Unified Raw Corpus},
  year   = {2025},
  url    = {https://huggingface.co/datasets/hamzabouajila/tunisian-derja-unified-raw-corpus}
}
```
### Contributions

Contributions are welcome! Please submit pull requests or issues on the Hugging Face repository. Feedback on the planned curated version (e.g., desired preprocessing or annotations) is encouraged.

### Future Work

A curated version is in development, expected by Q1 2026. Planned improvements include:
- Noise removal (e.g., URLs, emojis, short texts).
- Language filtering to prioritize Derja (using tools such as CAMeL Tools).
- Text normalization (e.g., unifying Arabic script, removing diacritics).
- Train/validation/test splits for benchmarking.
- Optional annotations for tasks such as sentiment analysis.
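As an illustration of the planned cleaning, a minimal filter for URLs, emojis, short texts, and mostly non-Arabic-script content might look like the sketch below. The regexes, the 10-character minimum, and the 50% Arabic-script threshold are assumptions for demonstration, not the final pipeline:

```python
import re
from typing import Optional

URL_RE = re.compile(r"https?://\S+")
EMOJI_RE = re.compile("[\U0001F300-\U0001FAFF\u2600-\u27BF]")
ARABIC_CHAR_RE = re.compile("[\u0600-\u06FF]")

def clean(text: str, min_len: int = 10) -> Optional[str]:
    """Strip URLs and emojis, then drop short or mostly non-Arabic texts."""
    text = URL_RE.sub("", text)
    text = EMOJI_RE.sub("", text).strip()
    if len(text) < min_len:
        return None
    # Crude Derja heuristic: require that at least half of the alphabetic
    # characters are Arabic script (keeps code-switched, Derja-dominant text).
    letters = [c for c in text if c.isalpha()]
    if letters and sum(bool(ARABIC_CHAR_RE.match(c)) for c in letters) / len(letters) < 0.5:
        return None
    return text
```

A dedicated dialect-identification tool (e.g., CAMeL Tools, as mentioned above) would replace the script-ratio heuristic in a real pipeline.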
Stay tuned for updates on the curated dataset: hamzabouajila/tunisian-derja-unified-cleaned.

## Dataset Statistics
- **Size:** 802,659 examples
- **Download size:** ~344 MB
- **Storage size:** ~682 MB

Preliminary quality metrics (based on initial analysis):

- **Noise fraction:** ~5-10% (short texts, URLs, emojis; to be refined in the curated version)
- **Language distribution:** ~80-90% Arabic (Derja), with some English/French code-switching
- **Token diversity:** to be computed; expected to be high due to diverse sources
For detailed quality analysis, see the evaluation script (to be shared with the curated version).

## Acknowledgements

Thanks to the creators of the source datasets: Linagora, Aziz Belaweid, arbml, abdouuu, khaled123, and others. This work builds on their contributions to Tunisian Derja NLP.