Dataset: LLaDA-Sample-ES

Base: crscardellino/spanish_billion_words

Purpose: Training LLaDA (Large Language Diffusion Models)

Preprocessing

  • Tokenizer: GSAI-ML/LLaDA-8B-Instruct
  • Chunking: Up to 4,096 tokens per chunk (1% of chunks randomly sized between 1–4,096 tokens)
  • Noisy masking: Applied with noise factor ε = 1×10⁻³ (see the sketch after this list)
  • Fields per chunk (PyTorch tensors):
    • input_ids
    • noisy_input_ids
    • mask
    • t (time scalar)
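
To make the relationship between the four fields concrete, here is a minimal sketch of an LLaDA-style forward masking step for one chunk. It assumes each token is masked independently with probability t, with t drawn from Uniform(ε, 1) so there is always a small minimum amount of noise; the function name make_noisy_chunk and the [MASK] token id are assumptions for illustration, not taken from the repository's actual pipeline.

```python
import torch

MASK_TOKEN_ID = 126336  # assumed [MASK] id for GSAI-ML/LLaDA-8B-Instruct
EPS = 1e-3              # noise factor stated on this card

def make_noisy_chunk(input_ids: torch.Tensor, eps: float = EPS) -> dict:
    # Sample a masking level t ~ Uniform(eps, 1); the eps floor is one
    # plausible reading of the card's "noise factor".
    t = torch.rand(()) * (1.0 - eps) + eps
    # Mask each token independently with probability t.
    mask = torch.rand(input_ids.shape) < t
    noisy_input_ids = torch.where(
        mask, torch.full_like(input_ids, MASK_TOKEN_ID), input_ids
    )
    # The four fields stored per chunk.
    return {
        "input_ids": input_ids,
        "noisy_input_ids": noisy_input_ids,
        "mask": mask,
        "t": t,
    }
```

For example, make_noisy_chunk(torch.randint(0, 126000, (4096,))) produces one chunk-sized record with roughly a fraction t of its tokens replaced by the mask id.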

Statistics

  • Total chunks: 652,089
  • Shards: 65 .pt files
  • Chunks per file: ~10,000
  • Average file size: ~702–708 MB
  • Total size: ~46 GB

Usage

This dataset is used for training in the LLaDA-from-scratch GitHub repository, where you’ll find the full data pipeline and training scripts.
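
As a rough sketch of how a shard might be consumed, the snippet below wraps one .pt file in a PyTorch Dataset. It assumes each shard deserializes to a sequence of per-chunk dicts with the four fields listed above; the shard filename and the class name LLaDAShardDataset are hypothetical, so check the repository's pipeline for the actual loading code.

```python
import torch
from torch.utils.data import Dataset

class LLaDAShardDataset(Dataset):
    """Reads one .pt shard, assuming it holds a sequence of per-chunk
    dicts with the fields input_ids, noisy_input_ids, mask, and t."""

    def __init__(self, path: str):
        self.chunks = torch.load(path, map_location="cpu")

    def __len__(self) -> int:
        return len(self.chunks)

    def __getitem__(self, idx: int):
        c = self.chunks[idx]
        return c["input_ids"], c["noisy_input_ids"], c["mask"], c["t"]

# Hypothetical shard filename; the card only specifies 65 .pt files.
ds = LLaDAShardDataset("shard_00.pt")
```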
