# Uploaded model

- Compute sponsored by: NVIDIA and Arrow ECS Denmark through the Danish Data Science Community
- Developed by: ThatsGroes
- License: apache-2.0
- Finetuned from model: meta-llama/Llama-3.1-8B-Instruct

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.

The LoRA adapter was fine-tuned in fp16 for 1 epoch on kobprof/skolegpt-instruct and Mabeck/Danish-SlimOrca, with rank = alpha = 64.
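For reference, the stated hyperparameters map onto a PEFT `LoraConfig` roughly like the sketch below. Only the rank and alpha come from this card; the target modules and dropout are assumptions (values commonly used for Llama-style models), not something the card specifies.

```python
from peft import LoraConfig

# rank = alpha = 64, as stated above; everything else is an assumed default
lora_config = LoraConfig(
    r=64,                # LoRA rank (from the card)
    lora_alpha=64,       # LoRA alpha (from the card)
    target_modules=[     # ASSUMPTION: typical Llama attention/MLP projections
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.0,    # ASSUMPTION: not stated on the card
    bias="none",
    task_type="CAUSAL_LM",
)
```

A config like this would typically be handed to TRL's `SFTTrainer` via its `peft_config` argument.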

```
[codecarbon INFO @ 10:41:13] Energy consumed for RAM : 2.822621 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 10:41:13] Energy consumed for all GPUs : 4.379013 kWh. Total GPU Power : 260.7733742516678 W
[codecarbon INFO @ 10:41:13] Energy consumed for all CPUs : 0.635721 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 10:41:13] 7.837356 kWh of electricity used since the beginning.
```
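These lines come from codecarbon's emissions tracker. A minimal sketch of how such tracking is typically wired in is shown below; `train()` is a hypothetical stand-in for the actual fine-tuning run, which this card does not show.

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # measures CPU, GPU, and RAM energy use
tracker.start()
try:
    train()  # HYPOTHETICAL: stand-in for the actual fine-tuning loop
finally:
    emissions = tracker.stop()  # returns estimated emissions in kg CO2eq
    print(f"Estimated emissions: {emissions} kg CO2eq")
```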

- Model size: 8.03B params (Safetensors)
- Tensor type: BF16
The model is not currently deployed on the HF Inference API or available through third-party inference providers, so it has to be run locally.
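A minimal local-inference sketch with transformers follows. The repo id and BF16 dtype come from this page; the Danish prompt is only an illustrative example, chosen because the training data is Danish.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ThatsGroes/Llama-3.1-8B-Instruct-SkoleGPT-DaSlimOrca"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # tensors are stored in BF16
    device_map="auto",
)

# Example prompt in Danish (illustrative only)
messages = [{"role": "user", "content": "Forklar fotosyntese i to sætninger."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```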
