LoRA adapter based on GradientAI's 1M-context Llama-3 8B Instruct finetune. I found that rank 1024 is not sufficient to capture the delta weights in the q_proj and o_proj modules, so I've created separate adapters for those modules and for the k/v projection modules.
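One way to sanity-check whether a given LoRA rank can capture a delta-weight matrix is to look at its singular-value spectrum: the best rank-r approximation keeps the top r singular values, so the fraction of squared Frobenius norm they carry tells you how much of the delta survives. A minimal sketch of that check, using a small random matrix as a stand-in for a real delta weight (the function name and shapes here are illustrative, not from this repo):

```python
import numpy as np

def energy_captured(delta_w: np.ndarray, rank: int) -> float:
    # Fraction of squared Frobenius norm retained by the best
    # rank-`rank` approximation (top `rank` singular values).
    s = np.linalg.svd(delta_w, compute_uv=False)
    return float(np.sum(s[:rank] ** 2) / np.sum(s ** 2))

# Stand-in for a real delta weight; e.g. q_proj in Llama-3 8B is 4096x4096,
# where even rank 1024 may leave a meaningful residual.
rng = np.random.default_rng(0)
delta = rng.standard_normal((512, 512))

print(f"rank 64 captures {energy_captured(delta, 64):.1%} of the energy")
```

In practice you would run this on the actual difference between the finetuned and base weights for each target module; a low captured fraction at your chosen rank is a signal to raise the rank or, as done here, split modules across separate adapters.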