size_categories:
- 10M<n<100M
dataset_info:
  splits:
  - name: train
    num_examples: 96918916
license: cc0-1.0
---

<h1 align="center">Kobza</h1>

<h2 align="center">On the Path to Make Ukrainian a High-Resource Language</h2>

**Kobza** is the largest publicly available Ukrainian corpus to date, comprising nearly **60 billion tokens** across **97 million documents**. It is designed to support pretraining and fine-tuning of large language models (LLMs) in Ukrainian, as well as multilingual settings where Ukrainian is underrepresented.

## 🧾 Dataset Summary

Kobza aggregates high-quality Ukrainian text from a wide range of web sources and applies rigorous deduplication to ensure high utility for language modeling tasks. A substantial share of tokens resides in long documents, making the corpus well suited for transformer models with extended sequence lengths. Each document includes metadata such as the source, subsource, timestamp, and URL for flexible filtering.

## 📊 Dataset Statistics

| Subcorpus      | Documents      | Tokens             |
|----------------|----------------|--------------------|
| CulturaX       | 24,942,577     | 15,002,455,535     |
| FineWeb 2      | 32,124,035     | 19,114,177,138     |
| HPLT 2.0       | 26,244,485     | 20,709,322,905     |
| UberText 2.0   | 6,431,848      | 2,904,208,874      |
| Ukrainian News | 7,175,971      | 1,852,049,111      |
| **Total**      | **96,918,916** | **59,582,213,563** |

- 50% of tokens reside in documents longer than 1,156 tokens
- 10% of tokens reside in documents longer than 8,765 tokens
- Stored in zstd-compressed Parquet format (~150 GB)
- If converted to Snappy compression, it takes around 474 GB of space
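
The percentile bullets above are token-weighted: the threshold is the document length at which the longest documents together hold a given share of all tokens. A small illustrative helper (the function name and toy data are ours, not part of the dataset tooling) shows how such a threshold can be derived from per-document token counts:

```python
def token_weighted_threshold(doc_lengths, fraction):
    """Smallest length L such that documents of at least L tokens
    together hold at least `fraction` of all tokens."""
    lengths = sorted(doc_lengths, reverse=True)
    total = sum(lengths)
    running = 0
    for length in lengths:
        running += length
        if running >= fraction * total:
            return length
    return lengths[-1]

# Toy corpus: ten documents of increasing length (5,500 tokens in total)
docs = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]
print(token_weighted_threshold(docs, 0.5))  # → 700
```

In this toy corpus, half of all tokens sit in documents of 700 tokens or more, mirroring how the 1,156- and 8,765-token figures are read.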

## 🌍 Call for Inclusion

We encourage developers of multilingual large language models (LLMs) to include **Ukrainian** in their training pipelines. Ukrainian remains underrepresented in most multilingual corpora, despite a large speaker base and growing demand for high-quality NLP tools.

**Kobza** provides a strong foundation for improving Ukrainian language support in multilingual models, both in quality and scale. We invite researchers and organizations to:

- Incorporate Kobza in multilingual data mixtures
- Evaluate Ukrainian capabilities in their models
- Contribute back improvements and tools for better Ukrainian language modeling

We are also working on a thoroughly cleaned version of **Kobza** with document-level quality scoring and plan to release it publicly!

## 🔍 Deduplication

Kobza employs a **two-stage deduplication pipeline**:

- **Metadata-based**: URL/timestamp-based filtering to remove exact and near-duplicates.
- **MinHashLSH**: 5-gram similarity with a 0.7 threshold to eliminate fuzzy duplicates.

In total, the pipeline removes roughly 40% of the collected data as duplicates.
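
The fuzzy stage can be sketched in plain Python. This is an illustrative MinHash estimate of n-gram Jaccard similarity, not Kobza's actual implementation; the word-level shingling and the seeded-hash scheme are our assumptions:

```python
import hashlib

def shingles(text: str, n: int = 5) -> set:
    """Word-level n-grams; the exact shingling unit used by Kobza is assumed."""
    words = text.split()
    if len(words) <= n:
        return {" ".join(words)}
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash_signature(shingle_set: set, num_perm: int = 128) -> list:
    """Simulate `num_perm` random permutations with independently seeded
    hashes, keeping the minimum hash value per seed."""
    return [
        min(
            int.from_bytes(
                hashlib.blake2b(s.encode(), digest_size=8,
                                salt=seed.to_bytes(8, "big")).digest(),
                "big",
            )
            for s in shingle_set
        )
        for seed in range(num_perm)
    ]

def estimated_jaccard(sig_a: list, sig_b: list) -> float:
    """Fraction of matching signature slots approximates Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

a = "кобзар іде степом і співає давню пісню про волю"
b = "кобзар іде степом і співає давню пісню про долю"
sig_a = minhash_signature(shingles(a))
sig_b = minhash_signature(shingles(b))
duplicate = estimated_jaccard(sig_a, sig_b) >= 0.7  # Kobza's threshold
```

In production, pairwise comparison is avoided by banding the signatures (the "LSH" part), so only documents sharing a band are ever compared.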

## 🔬 Quality Control

All sources underwent heuristic filtering or curated collection. However, there is no unified quality score for Ukrainian yet, so some noise may remain, especially from the multilingual web corpora. We plan to incorporate a Ukrainian document quality scorer in future versions.

## 🗂 Data Structure

Each entry includes:

- `text`: raw document content
- `source`: name of the corpus (e.g., FineWeb 2)
- `subsource`: identifier of the crawl dump, subcorpus, or other subdivision
- `timestamp`: publication time (`1970-01-01T00:00:00Z` is used when missing)
- `url`: original document URL (if available)
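
The epoch sentinel makes it easy to drop documents without a real publication time. A hypothetical sketch with made-up records mirroring the schema above:

```python
from datetime import datetime, timezone

EPOCH = datetime(1970, 1, 1, tzinfo=timezone.utc)  # sentinel for a missing timestamp

# Made-up records following the field layout above (values are illustrative)
records = [
    {"text": "...", "source": "FineWeb 2", "subsource": "CC-MAIN-2024-10",
     "timestamp": "2024-03-01T12:00:00Z", "url": "https://example.com/a"},
    {"text": "...", "source": "UberText 2.0", "subsource": "fiction",
     "timestamp": "1970-01-01T00:00:00Z", "url": None},
]

def has_real_timestamp(record: dict) -> bool:
    ts = datetime.fromisoformat(record["timestamp"].replace("Z", "+00:00"))
    return ts != EPOCH

dated = [r for r in records if has_real_timestamp(r)]  # keeps only the first record
```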

## 🧪 Example Usage

```python
from datasets import load_dataset

dataset = load_dataset("Goader/kobza", split="train")
print(dataset[0])
```

## 📚 Citation

BibTeX citation will be added soon!

## ⚠️ Limitations

- May contain biased or low-quality content from the web (e.g., spam, machine-translated text).
- Some domains (e.g., legal, fiction) are underrepresented compared to news/web content.
- Lacks a robust document-level quality metric specific to Ukrainian.

## 📜 License

These data are released under the following licensing scheme:

* We do not own any of the text from which these data have been extracted.
* We license the actual packaging, the metadata and the annotations of these data under the [Creative Commons CC0 license](http://creativecommons.org/publicdomain/zero/1.0/) ("no rights reserved").

### Notice and take down policy

**Notice:** Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:

* Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
* Clearly identify the copyrighted work claimed to be infringed.
* Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
* You can reach us at [email protected]

**Take down:** We will comply with legitimate requests by removing the affected sources from the next release of the corpora.

For additional information, please refer to the licenses of Kobza's main components:

* [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX#license-information)
* [FineWeb 2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-2#licensing-information)
* [HPLT 2.0](https://hplt-project.org/datasets/v2.0)
* [UberText 2.0](https://lang.org.ua/en/ubertext/)
* [Ukrainian News](https://huggingface.co/datasets/zeusfsx/ukrainian-news#license)

## 🙏 Acknowledgements

Dataset processing was performed using PLGrid's HPC infrastructure (Helios Cluster at ACK Cyfronet, Krakow).

---

🧱 Dataset: [Goader/kobza](https://huggingface.co/datasets/Goader/kobza)
<br>🧠 Model: [Goader/modern-liberta-large](https://huggingface.co/Goader/modern-liberta-large)
<br>🛠 Source code: [github.com/Goader/ukr-lm](https://github.com/Goader/ukr-lm)