
Model Card

This repository contains checkpoints (split across 512 GPUs) in DeepSpeed format for the Lucie-7B model, which was trained using this code repository, based on a fork of Megatron-Deepspeed.

Each checkpoint is in a subbranch (revision), whose name specifies the number of training steps. For instance, step0400000 corresponds to the checkpoint after 400,000 training steps.

These checkpoints are provided so that training can be resumed from a given point.
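The revision naming convention above can be sketched as a pair of helper functions. This is an illustrative sketch, not part of the repository's code: `revision_for_step` and `step_for_revision` are hypothetical names, and the zero-padded 7-digit format is inferred from the `step0400000` example.

```python
# Sketch of the checkpoint revision naming convention described above,
# where e.g. "step0400000" corresponds to 400,000 training steps.
# The 7-digit zero padding is inferred from that example.

def revision_for_step(step: int) -> str:
    """Build the revision (branch) name for a given training-step count."""
    return f"step{step:07d}"

def step_for_revision(revision: str) -> int:
    """Parse the training-step count back out of a revision name."""
    if not revision.startswith("step"):
        raise ValueError(f"unexpected revision name: {revision!r}")
    return int(revision[len("step"):])

print(revision_for_step(400_000))        # step0400000
print(step_for_revision("step0400000"))  # 400000
```

With the `huggingface_hub` library installed, a specific checkpoint can then be fetched with `snapshot_download(repo_id="OpenLLM-France/Lucie-7B-optimizer-states-512GPU", revision=revision_for_step(400_000))` before resuming training.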

Contact

[email protected]
