# Georgian URL-Filtered Web Text Corpus
This dataset contains high-quality Georgian-language text extracted from Common Crawl. It was created to support NLP research and language model training for Georgian.
## Dataset Summary
The corpus is built from Georgian web pages identified using Common Crawl metadata. HTML content was extracted and cleaned using trafilatura, followed by quality filtering (e.g., language ID checks, text heuristics) and document-level deduplication with MinHash.
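
As an illustration of the extraction and language-filtering stage, here is a minimal sketch using trafilatura. The URL is a placeholder, and the exact filter settings of the original pipeline are not reproduced:

```python
import trafilatura

# Placeholder URL; the real pipeline reads HTML out of Common Crawl WARC records.
html = trafilatura.fetch_url("https://example.ge/some-article")
if html is not None:
    # target_language="ka" asks trafilatura to keep only Georgian documents
    # when a language detector is available.
    text = trafilatura.extract(html, target_language="ka")
    if text:
        print(text[:200])
```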
## Structure
Each entry contains:
- `url`: source URL of the page
- `text`: cleaned plain-text content
- `text_length`: length of the cleaned text
- `word_count`: word count of the cleaned text
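
A minimal loading example with the `datasets` library is shown below; the repository ID is a placeholder for this dataset's actual ID:

```python
from datasets import load_dataset

# "user/georgian-web-corpus" is a placeholder repository ID.
ds = load_dataset("user/georgian-web-corpus", split="train", streaming=True)

for example in ds:
    print(example["url"], example["text"][:100])
    break
```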
## Processing Steps
- Filtered Common Crawl WARC indexes for Georgian-language pages
- Downloaded and parsed HTML from the selected WARC records
- Extracted visible text using trafilatura
- Applied quality filters to remove low-quality content
- Removed near-duplicates using MinHash (a sketch of this step follows the list)
- Exported the result as a cleaned JSON/JSONL corpus
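
The near-duplicate step can be sketched with the datasketch library; the whitespace shingling and the similarity threshold below are illustrative assumptions, not the exact values used:

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text, num_perm=128):
    """Build a MinHash signature from whitespace tokens."""
    m = MinHash(num_perm=num_perm)
    for token in text.split():
        m.update(token.encode("utf-8"))
    return m

# Toy documents standing in for the cleaned corpus.
docs = [
    {"url": "https://a.ge", "text": "ქართული ტექსტი ერთი"},
    {"url": "https://b.ge", "text": "ქართული ტექსტი ერთი"},  # near-duplicate
    {"url": "https://c.ge", "text": "სულ სხვა შინაარსი"},
]

lsh = MinHashLSH(threshold=0.8, num_perm=128)  # 0.8 is an example threshold
unique_docs = []
for i, doc in enumerate(docs):
    sig = minhash_of(doc["text"])
    if not lsh.query(sig):  # keep the document only if nothing similar was seen
        lsh.insert(f"doc-{i}", sig)
        unique_docs.append(doc)

print(len(unique_docs), "unique documents")  # -> 2
```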
## Intended Use
This dataset is suitable for:
- Pretraining or fine-tuning Georgian language models (see the example after this list)
- Text classification or generation tasks
- Language modeling and web-based NLP benchmarks
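
For example, a pretraining data loader might stream and tokenize the `text` field as follows; the tokenizer choice and repository ID are placeholders:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# xlm-roberta-base is just one multilingual tokenizer that covers Georgian;
# the repository ID is a placeholder.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
ds = load_dataset("user/georgian-web-corpus", split="train", streaming=True)

for example in ds.take(2):
    ids = tokenizer(example["text"], truncation=True, max_length=512)["input_ids"]
    print(f"{len(ids)} tokens from {example['url']}")
```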
## Limitations
Despite filtering, some noise may persist given the nature of web data. Use the corpus responsibly, especially when training generative models: the text may reflect social biases present in online content.