
New checkpoint, trained on an NVIDIA H100 for 8,000 steps on 65,536,000 tokens.

It is not yet a competent model: at 65,536,000 tokens for 310M parameters (roughly 0.2 tokens per parameter), it falls far short of the commonly cited minimum of 20-30 tokens per parameter. However, it can give us a better idea of how a fully trained model would perform.
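To make the gap concrete, here is the arithmetic behind that claim, using only the numbers stated above (the 20-30 tokens-per-parameter figure is the usual rule of thumb, not something specific to this repo):

```python
# Rough "tokens per parameter" check for this checkpoint,
# using the numbers from the model card.
tokens = 65_536_000
params = 310_000_000

ratio = tokens / params
print(f"tokens per parameter: {ratio:.2f}")  # ~0.21

# Tokens needed to reach even the low end of the 20-30 range:
needed = 20 * params
print(f"tokens for 20 tok/param: {needed:,}")  # 6,200,000,000
```

So the checkpoint has seen about 1/100th of the low-end data budget, which is why it should be read as a training sanity check rather than a finished model.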

If you want to try it, test_gen.py in the project repo shows how to use it, or you can use this Google Colab notebook.
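For intuition on what generation looks like for a masked-diffusion model like LLaDA, here is a toy sketch of the iterative-unmasking idea: start from an all-mask sequence and, over a fixed number of steps, commit the model's most confident predictions. This is an illustration only, not the code in test_gen.py; the random `toy_model`, the mask id, and all constants are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, LENGTH, STEPS = 50, 16, 4
MASK = -1  # hypothetical mask id, for the sketch only

def toy_model(tokens):
    # Stand-in for the real transformer: random logits per position.
    return rng.normal(size=(len(tokens), VOCAB))

tokens = np.full(LENGTH, MASK)  # start fully masked
for step in range(STEPS):
    masked = np.flatnonzero(tokens == MASK)
    if masked.size == 0:
        break
    logits = toy_model(tokens)
    probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)
    pred = probs.argmax(-1)   # greedy candidate per position
    conf = probs.max(-1)      # confidence of that candidate
    # Unmask an even share of the remaining positions each step,
    # choosing the most confident ones first.
    k = int(np.ceil(masked.size / (STEPS - step)))
    commit = masked[np.argsort(conf[masked])[-k:]]
    tokens[commit] = pred[commit]

print(tokens)  # every position is filled after the loop
```

The real sampler differs in the details (remasking schedule, temperature, conditioning on a prompt), but the commit-the-confident-tokens loop is the core of this family of models.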

Example of the results it gives:

[screenshot of example outputs]

For those who want to train the model and save it in the correct format for loading with transformers, everything needed is in pre_trainv2.py in the project repo.
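For reference, a checkpoint that transformers can load from disk follows the standard Hugging Face repository layout (the exact file set varies with the tokenizer and model class; this is the usual shape, not a listing of this repo):

```
model_dir/
├── config.json            # architecture hyperparameters
├── model.safetensors      # weights (F32 tensors here)
├── generation_config.json # default generation settings (optional)
├── tokenizer_config.json
└── tokenizer.json         # or vocab/merges files, tokenizer-dependent
```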

Model size: 310M params · Tensor type: F32 (Safetensors)
Datasets used to train Fredtt3/LLaDA-100M-Test