Uploaded model

  • Compute sponsored by: Nvidia and Arrow ECS Denmark through the Danish Data Science Community
  • Developed by: ThatsGroes
  • License: apache-2.0
  • Finetuned from model: meta-llama/Llama-3.1-70B-Instruct

A LoRA adapter for Llama-3.1-70B, with the base model loaded in 4-bit. Trained for 1 epoch with rank = lora_alpha = 8.
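A minimal sketch of how the adapter might be loaded for inference with transformers, bitsandbytes, and peft. The repository IDs come from this card; the quantization settings and device map are assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization config for the 70B base model (settings are illustrative)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-70B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-70B-Instruct")

# Attach the LoRA adapter on top of the quantized base model
model = PeftModel.from_pretrained(base, "ThatsGroes/Llama-3.1-70B-Instruct-SkoleGPT")
```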

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
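The Unsloth + TRL combination typically follows the pattern below. This sketch uses only the hyperparameters stated on this card (4-bit loading, 1 epoch, rank = lora_alpha = 8); the target modules, the `dataset` variable, and the remaining arguments are assumptions:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.1-70B-Instruct",
    load_in_4bit=True,  # 4-bit base model, as stated above
)

# rank = lora_alpha = 8, as stated above; target modules are an assumption
model = FastLanguageModel.get_peft_model(
    model,
    r=8,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,  # hypothetical: a pre-formatted training set
    args=TrainingArguments(num_train_epochs=1, output_dir="outputs"),
)
trainer.train()
```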

We ended up using 62.52 GB of GPU memory (79.00% of capacity), of which 23.83 GB (30.12%) was used for LoRA.
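Figures like these are typically read from PyTorch's CUDA memory statistics; a minimal sketch of the percentage calculation (the exact measurement code used for this card is an assumption):

```python
import torch

# Peak reserved GPU memory after training, relative to total device memory
gpu = torch.cuda.get_device_properties(0)
used_gb = torch.cuda.max_memory_reserved() / 1024**3
total_gb = gpu.total_memory / 1024**3
print(f"{used_gb:.2f} GB GPU memory used ({used_gb / total_gb * 100:.2f}%).")
```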

[codecarbon INFO @ 11:07:59] Energy consumed for RAM : 2.574882 kWh. RAM Power : 188.78840446472168 W
[codecarbon INFO @ 11:07:59] Energy consumed for all GPUs : 4.045188 kWh. Total GPU Power : 270.22211938762564 W
[codecarbon INFO @ 11:07:59] Energy consumed for all CPUs : 0.579916 kWh. Total CPU Power : 42.5 W
[codecarbon INFO @ 11:07:59] 7.199986 kWh of electricity used since the beginning.
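The log lines above come from codecarbon. A minimal sketch of how such tracking is typically wrapped around a training run (the `trainer` variable is the hypothetical SFTTrainer from the sketch above):

```python
from codecarbon import EmissionsTracker

tracker = EmissionsTracker()  # periodically logs RAM/GPU/CPU energy, as above
tracker.start()
trainer.train()               # the workload being measured
emissions = tracker.stop()    # returns estimated emissions in kg CO2-eq
print(f"Estimated emissions: {emissions:.3f} kg CO2-eq")
```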

Safetensors · Model size: 70.6B params · Tensor type: BF16

Model tree for ThatsGroes/Llama-3.1-70B-Instruct-SkoleGPT

  • Finetuned from meta-llama/Llama-3.1-70B-Instruct (58 finetunes, including this model)
  • Quantizations: 1 model

Dataset used to train ThatsGroes/Llama-3.1-70B-Instruct-SkoleGPT