annotations_creators:
- no-annotation
language:
- uz
license: apache-2.0
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: UzCrawl
configs:
- config_name: default
data_files:
- split: news
path: data/news-*
- split: telegram_blogs
path: data/telegram_blogs-*
dataset_info:
features:
- name: text
dtype: string
- name: timestamp
dtype: string
- name: source
dtype: string
splits:
- name: news
num_bytes: 3051717115
num_examples: 1250018
- name: telegram_blogs
num_bytes: 367462330
num_examples: 368017
download_size: 1680419450
dataset_size: 3419179445
tags:
- uz
- crawl
- telegram_blogs
# Dataset Card for UzCrawl
## Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
## Dataset Description
- Homepage: https://tahrirchi.uz/grammatika-tekshiruvi
- Repository: More Information Needed
- Paper: More Information Needed
- Point of Contact: More Information Needed
- Size of downloaded dataset files: 1.68 GB
- Size of the generated dataset: 3.42 GB
- Total amount of disk used: 5.10 GB
### Dataset Summary
In an effort to democratize research on low-resource languages, we release the UzCrawl dataset, a web and Telegram crawl corpus consisting of materials from nearly 1.2 million unique sources in the Uzbek language.

Please refer to our blogpost for further details.

P.S. The dataset has been updated to a second version that extends the scope to new topics and is up to date as of March 2024.
To load and use the dataset, run this script:

```python
from datasets import load_dataset

uz_crawl = load_dataset("tahrirchi/uz-crawl")
```
To use v1 of the dataset, run this script instead:

```python
from datasets import load_dataset

uz_crawl = load_dataset("tahrirchi/uz-crawl", revision="b8de928a18559b2a1659124c412ae31c9c238eb8")
```
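Since every record carries a `text`, `timestamp`, and `source` field, one common task is restricting a split to a date range (e.g. to the materials added in v2, which runs up to March 2024). A minimal sketch on mock rows, assuming the `timestamp` strings follow the `YYYY-MM-DD` shape shown in the example below; the `filter_by_date` helper and the cutoff dates are illustrative, not part of the dataset tooling:

```python
from datetime import date

def filter_by_date(records, start, end):
    """Keep records whose 'timestamp' (YYYY-MM-DD) falls in [start, end]."""
    kept = []
    for rec in records:
        ts = date.fromisoformat(rec["timestamp"])
        if start <= ts <= end:
            kept.append(rec)
    return kept

# Mock rows shaped like UzCrawl records (text / timestamp / source).
records = [
    {"text": "eski matn", "timestamp": "2019-06-09", "source": "https://kun.uz/uz/news/1"},
    {"text": "yangi matn", "timestamp": "2024-02-15", "source": "https://kun.uz/uz/news/2"},
]
recent = filter_by_date(records, date(2024, 1, 1), date(2024, 3, 31))
# Only the 2024-02-15 row survives the filter.
```

The same predicate can be passed to `Dataset.filter` once the real dataset is loaded.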
## Dataset Structure
### Data Instances

#### plain_text

- Size of downloaded dataset files: 1.68 GB
- Size of the generated dataset: 3.42 GB
- Total amount of disk used: 5.10 GB
An example of 'news' looks as follows.

```
{
    'text': "O‘zbekiston Respublikasi Vazirlar Mahkamasining 2019 yil 24 iyuldagi 620-son qarori bilan tasdiqlangan «Xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish tartibi to‘g‘risida»gi Nizom ijrosini ta'minlash maqsadida Ta'lim sifatini nazorat qilish davlat inspeksiyasida (Toshkent shahar, Chilonzor tumani, Nurxon ko‘chasi, 21-uy) 2019 yil 9 –14 sentabr kunlari sohalar bo‘yicha sinov testlari bo‘lib o‘tishi rejalashtirilgan.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi matbuot xizmati xabariga\xa0ko‘ra, «Huquqshunoslik», «Sog‘liqni saqlash va ijtimoiy ta'minot», «Iqtisodiyot», «Qishloq xo‘jaligi, muhandislik, ishlov berish va qurilish» hamda «O‘qituvchilar tayyorlash va pedagogik fanlar» sohalari bo‘yicha sinov testlari o‘tkaziladigan sanasi va sinov testida ishtirok etuvchilar ro‘yxati jadvalga muvofiq belgilanadi.\nTa'lim sifatini nazorat qilish davlat inspeksiyasi ogohlantirishicha, xorijiy davlatlarda ta'lim olganlik to‘g‘risidagi hujjatlarni tan olish uchun belgilangan sinov testlariga o‘z vaqtida kelmagan, sinov testida ishtirok etuvchilar ro‘yxatida mavjud bo‘lmagan talabgorlarga sinovlarga kirishga ruxsat etilmaydi.",
    'timestamp': '2019-06-09',
    'source': 'https://kun.uz/uz/news/2019/09/06/xorijda-talim-olganlik-togrisidagi-hujjatlarni-tan-olish-uchun-testlar-otkaziladigan-kunlar-malum-boldi'
}
```
### Data Fields

The data fields are the same among all splits.

- `text` (`string`): text of the material.
- `timestamp` (`string`): timestamp of the material.
- `source` (`string`): URL of the material.
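As a quick sanity check, a record can be validated against this three-field schema; a minimal sketch on a mock row (the `validate_record` helper is illustrative, not part of the dataset tooling):

```python
# Expected UzCrawl schema: every field is a string.
EXPECTED_FIELDS = {"text": str, "timestamp": str, "source": str}

def validate_record(rec):
    """Return True if rec has exactly the UzCrawl fields, each a string."""
    if set(rec) != set(EXPECTED_FIELDS):
        return False
    return all(isinstance(rec[name], t) for name, t in EXPECTED_FIELDS.items())

row = {
    "text": "Namunaviy matn.",
    "timestamp": "2019-06-09",
    "source": "https://kun.uz/uz/news/2019/09/06/xorijda-talim-olganlik",
}
assert validate_record(row)
```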
### Data Splits

| split_name | num_examples |
| --- | --- |
| news | 1250018 |
| telegram_blogs | 368017 |
## Dataset Creation

The news portion has been crawled from 57 different websites using the Scrapy framework, and the telegram_blogs portion consists of manually curated texts from 128 high-quality Telegram channels.
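A corpus built from unique sources implies some deduplication of crawled URLs; a minimal sketch of how duplicate source URLs could be dropped while aggregating crawled items, using only the standard library (the normalization rules here are assumptions for illustration, not the authors' actual pipeline):

```python
from urllib.parse import urlsplit

def normalize_url(url):
    """Crude normalization: lowercase the host, drop query/fragment, strip a trailing slash."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/")
    return f"{parts.scheme}://{parts.netloc.lower()}{path}"

def dedupe_by_source(items):
    """Keep the first item seen for each normalized source URL."""
    seen, unique = set(), []
    for item in items:
        key = normalize_url(item["source"])
        if key not in seen:
            seen.add(key)
            unique.append(item)
    return unique

items = [
    {"text": "a", "timestamp": "2023-01-01", "source": "https://kun.uz/uz/news/1"},
    {"text": "a", "timestamp": "2023-01-01", "source": "https://KUN.uz/uz/news/1/"},
    {"text": "b", "timestamp": "2023-01-02", "source": "https://kun.uz/uz/news/2"},
]
unique = dedupe_by_source(items)
# The second item normalizes to the same URL as the first and is dropped.
```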
## Citation

Please cite this dataset using the following format:
```bibtex
@online{Mamasaidov2023UzCrawl,
    author  = {Mukhammadsaid Mamasaidov and Abror Shopulatov},
    title   = {UzCrawl dataset},
    year    = {2023},
    url     = {https://huggingface.co/datasets/tahrirchi/uz-crawl},
    note    = {Accessed: 2023-10-28}, % change this date
    urldate = {2023-10-28} % change this date
}
```
## Gratitude

We are thankful to these awesome organizations and people for helping to make it happen:

- Asadbek Kiyomov: for his work at the beginning of the project.
- Ilya Gusev: for his advice throughout the process.
- David Dale: for his advice throughout the process.
## Contacts

We believe that this work will inspire all enthusiasts around the world to open the hidden beauty of low-resource languages, in particular of Uzbek.

For further development of, and issues with, the dataset, please contact us at [email protected] or [email protected].