---
dataset_info:
  features:
    - name: audio
      dtype: audio
    - name: transcript
      dtype: string
    - name: source
      dtype: string
  splits:
    - name: test
      num_bytes: 980296593
      num_examples: 9352
    - name: validation
      num_bytes: 899503003
      num_examples: 8686
    - name: train
      num_bytes: 78822585753
      num_examples: 627822
  download_size: 80661360008
  dataset_size: 80702385349
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
      - split: validation
        path: data/validation-*
      - split: train
        path: data/train-*
task_categories:
  - automatic-speech-recognition
language:
  - kk
pretty_name: 🎤🇰🇿 kazakh_speech_corpus_2 📚
size_categories:
  - 100K<n<1M
---

# kazakh_speech_corpus_2

This dataset is the Kazakh Speech Corpus 2 (KSC2) from ISSAI, repackaged in Parquet format.

## Dataset info

1. 645,860 utterances
2. 1,194 hours of audio in total
3. Sources in each split (see the sketch below this list to reproduce these sets):
   - test: `tv_news`, `crowdsourced`, `radio`, `talkshow`, `parliament`, `tts`, `podcasts`
   - train: `tv_news`, `crowdsourced`, `radio`, `talkshow`, `parliament`, `tts`, `podcasts`
   - validation: `tv_news`, `crowdsourced`, `radio`, `talkshow`, `parliament`, `tts`, `podcasts`
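
These source sets come from the `source` column and can be reproduced with a minimal sketch like the one below; it streams the data so the full ~80 GB of audio is not downloaded up front, though iterating the train split still takes a while:

```python
from datasets import load_dataset

# Stream each split and collect the distinct values of the `source` column.
# remove_columns drops the audio so no waveforms are decoded while iterating.
ds = load_dataset("SRP-base-model-training/kazakh_speech_corpus_2", streaming=True)
for split_name, split in ds.items():
    sources = {ex["source"] for ex in split.remove_columns(["audio"])}
    print(split_name, ":", sources)
```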

## Guides

### Option 1: load the full dataset

If you want a custom cache location, set `HF_HOME` to your own path before loading (the commented `export` below is just an example).

```python
from datasets import load_dataset

# Optional: point the Hugging Face cache at a custom location first, e.g.
# export HF_HOME="/data/vladimir_albrekht/hf_cache"

# Pass split="test", split="validation", or split="train" to load one split only
ds = load_dataset("SRP-base-model-training/kazakh_speech_corpus_2")
```
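
Each example then carries the decoded audio alongside its transcript and source label (field names per the `dataset_info` above), for instance:

```python
# Peek at one test example; the audio feature decodes to a dict with
# "array", "sampling_rate", and "path".
sample = ds["test"][0]
print(sample["transcript"], "| source:", sample["source"])
print("sampling rate:", sample["audio"]["sampling_rate"],
      "| samples:", len(sample["audio"]["array"]))
```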

### Option 2: partially download test/train/validation

To download only a specific split, tighten the `startswith` filter in the snippet below to `data/test`, `data/train`, or `data/validation`; the empty prefix `''` matches every file.
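
Alternatively, `huggingface_hub.snapshot_download` can do the same filtering with glob patterns; a minimal sketch (the `local_dir` path is just an example):

```python
from huggingface_hub import snapshot_download

# Fetch only the test shards; swap the pattern for data/train-* or
# data/validation-* as needed (local_dir is an example path)
snapshot_download(
    repo_id="SRP-base-model-training/kazakh_speech_corpus_2",
    repo_type="dataset",
    local_dir="/data/vladimir_albrekht/asr/kazakh_speech_corpus_2",
    allow_patterns=["data/test-*"],
)
```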

1. Step 1: download a subset (test/train/validation):
```python
import os

import dotenv
from huggingface_hub import HfApi, hf_hub_download, login

dotenv.load_dotenv()
login(token=os.getenv("HF_TOKEN"))

repo_id = "SRP-base-model-training/kazakh_speech_corpus_2"
local_dir = "/data/vladimir_albrekht/asr/kazakh_speech_corpus_2"

def download_all_files(repo_id, local_dir):
    print(f"Downloading files from {repo_id}...")
    os.makedirs(local_dir, exist_ok=True)

    api = HfApi()
    files = api.list_repo_files(repo_id, repo_type="dataset")

    # '' keeps every file; use 'data/test', 'data/train', or
    # 'data/validation' to download a single split instead
    data_files = [f for f in files if f.startswith('')]

    for filename in data_files:
        try:
            hf_hub_download(
                repo_id=repo_id,
                filename=filename,
                repo_type="dataset",
                local_dir=local_dir,
            )
            print(f"Downloaded {filename}")
        except Exception as e:
            print(f"Error downloading {filename}: {e}")

download_all_files(repo_id, local_dir)
```
2. Step 2: load the downloaded data with `load_dataset` as usual:
```python
from datasets import load_dataset

ds = load_dataset("/data/vladimir_albrekht/asr/kazakh_speech_corpus_2")
```
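
From here the dataset is ready for ASR work. Most speech models expect a fixed sampling rate, so a common next step is casting the audio column; 16 kHz below is an assumption, match it to your model:

```python
from datasets import Audio

# Resample on the fly to 16 kHz (assumed target rate; e.g. Whisper and
# wav2vec 2.0 checkpoints expect 16 kHz input)
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```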