KRONOS 8B V1 P1

This is a merge of Meta Llama 3.1 8B Instruct and the "Not so Bright" LoRA (yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2), created using llm-tools.

The primary purpose of this model is to be merged into other models in the same family using the TIES merge method.
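
As a sketch of that downstream use, the following assumes mergekit as the merge tool (the card does not name one) and uses OTHER_MODEL as a hypothetical placeholder for a second Llama 3.1 8B finetune; the density and weight values are illustrative only.

import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Illustrative TIES config: KRONOS-8B-V1-P1 plus one other finetune of the
# same base model. OTHER_MODEL is a hypothetical placeholder.
CONFIG = """
merge_method: ties
base_model: unsloth/Meta-Llama-3.1-8B-Instruct
dtype: float16
models:
  - model: T145/KRONOS-8B-V1-P1
    parameters:
      density: 0.5
      weight: 0.5
  - model: OTHER_MODEL
    parameters:
      density: 0.5
      weight: 0.5
parameters:
  normalize: true
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG))
run_merge(
    merge_config,
    out_path="./ties-merge",
    options=MergeOptions(cuda=False, copy_tokenizer=True),
)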

Creating quants of this model is unnecessary; it is intended as a merge ingredient, not for direct use.

Merge Details

Configuration

The following Bash command was used to produce this model:

python /llm-tools/merge-lora.py -m unsloth/Meta-Llama-3.1-8B-Instruct -l yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2
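
Below is a minimal sketch of what that merge step presumably does, using the standard transformers and peft APIs; merge-lora.py is part of llm-tools and its exact implementation is not shown here, so treat this as an approximation rather than the actual script.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "unsloth/Meta-Llama-3.1-8B-Instruct"
LORA = "yuriachermann/Not-so-bright-AGI-Llama3.1-8B-UC200k-v2"
OUT = "./KRONOS-8B-V1-P1"

# Load the base model in FP16 and attach the LoRA adapter.
base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base_model, LORA)

# Fold the adapter weights into the base weights, then save the merged full-weight model.
model = model.merge_and_unload()
model.save_pretrained(OUT)
AutoTokenizer.from_pretrained(BASE).save_pretrained(OUT)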
The merged model has 8.03B parameters, stored as FP16 safetensors.