# Llama3-ChatQA-1.5-8B-lora
This is a LoRA adapter extracted from a language model using mergekit.
## LoRA Details

This LoRA adapter was extracted from nvidia/Llama3-ChatQA-1.5-8B and uses meta-llama/Meta-Llama-3-8B as a base.
### Parameters

The following command was used to extract this LoRA adapter:

```sh
mergekit-extract-lora meta-llama/Meta-Llama-3-8B nvidia/Llama3-ChatQA-1.5-8B OUTPUT_PATH --no-lazy-unpickle --rank=64
```
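
To use the extracted adapter, it can be loaded on top of the base model with the `peft` library. The sketch below is illustrative rather than part of the original card: the adapter repository id `beratcmn/Llama3-ChatQA-1.5-8B-lora` is taken from this card, and the prompt and generation settings are placeholder assumptions (ChatQA models expect their own prompt template).

```python
# Minimal sketch: apply the extracted LoRA adapter to the base model
# with transformers + peft. Repo ids and prompt are assumptions for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Meta-Llama-3-8B"
adapter_id = "beratcmn/Llama3-ChatQA-1.5-8B-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Attach the LoRA weights; merge_and_unload() folds them into the base weights,
# giving a plain model that approximates nvidia/Llama3-ChatQA-1.5-8B.
model = PeftModel.from_pretrained(base_model, adapter_id)
model = model.merge_and_unload()

# Placeholder prompt, not the official ChatQA template.
prompt = "User: What is a LoRA adapter?\n\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```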