Llama 3.2 DataFusion Instruct (GGUF)

This repository contains the GGUF version of the yarenty/llama32-datafusion-instruct model, quantized for efficient inference on CPUs and other compatible hardware.

For full details on the model, including its training procedure, data, intended use, and limitations, please see the full model card.

Model Details

Prompt Template

This model follows the same instruction prompt template as the base model:

### Instruction:
{Your question or instruction here}

### Response:
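The template above can be assembled programmatically. The sketch below (Python, with a hypothetical `build_prompt` helper) shows one way to wrap a question in the expected format and collect stop sequences to cut generation off cleanly; the helper name and stop list are illustrative, not part of the model's API:

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the model's expected instruction template."""
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

# Stop sequences: end generation if the model starts emitting a new section.
STOP_SEQUENCES = ["### Instruction:", "### End"]

prompt = build_prompt("How do I use the Ballista scheduler?")
```

Pass `prompt` and the stop sequences to whichever runtime you use (llama.cpp, Ollama, etc.) in that tool's own option format.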

Usage

These files are compatible with tools like llama.cpp and Ollama.

With Ollama

ollama pull jaro/llama32-datafusion-instruct
ollama run jaro/llama32-datafusion-instruct "How do I use the Ballista scheduler?"
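Beyond the CLI, a locally running Ollama server can be called over HTTP. A minimal sketch using only the standard library, assuming Ollama's default endpoint at `localhost:11434` and the model tag shown above:

```python
import json
from urllib import request

# Default endpoint of a locally running Ollama server (assumption).
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, instruction: str) -> dict:
    """Assemble a non-streaming generate request for the Ollama HTTP API."""
    return {
        "model": model,
        "prompt": f"### Instruction:\n{instruction}\n\n### Response:",
        "stream": False,
    }

def generate(instruction: str) -> str:
    """Send one instruction to the local Ollama server and return the reply text."""
    payload = build_payload("jaro/llama32-datafusion-instruct", instruction)
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Requires a running Ollama server with the model pulled:
# answer = generate("How do I use the Ballista scheduler?")
```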

With llama.cpp

./llama-cli -m llama32_datafusion.gguf --color -e -p "### Instruction:\nHow do I use the Ballista scheduler?\n\n### Response:" -n 256 -r "### Instruction:" -r "### End"

Note: llama.cpp's CLI has no `--stop` flag; reverse prompts (`-r`) serve that purpose, and `-e` turns the `\n` escapes in the prompt into real newlines. In older llama.cpp builds the binary is named `./main`.

Citation

If you use this model, please cite the original base model:

@misc{yarenty_2025_llama32_datafusion_instruct,
  author = {yarenty},
  title = {Llama 3.2 DataFusion Instruct},
  year = {2025},
  publisher = {Hugging Face},
  journal = {Hugging Face repository},
  howpublished = {\url{https://huggingface.co/yarenty/llama32-datafusion-instruct}}
}

Contact

For questions or feedback, please open an issue on the Hugging Face repository or the source GitHub repository.

Model size: 3.21B params
Architecture: llama
Format: GGUF
