Jupyter Notebook: Unsloth fine-tuning for Ollama (Qwen2.5 Instruct 32B, fine-tuned with an Alpaca-formatted JSON data file)

Jupyter notebook for fine-tuning the Qwen2.5 Instruct 32B model for Ollama with Unsloth, running locally under Windows WSL. Runs successfully with an Alpaca-formatted JSON data file (my blog posts with generated prompts) on an NVIDIA GeForce RTX 4090.

Ollama_+_Unsloth_+_Llama_3_+_Alpaca.ipynb

Jupyter notebook for fine-tuning a model locally on an Alpaca-formatted JSON data file. Edited from the notebook provided by Unsloth so that it runs locally.
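
The training setup in the notebook broadly follows Unsloth's SFT recipe; the sketch below is a minimal, illustrative version of it, assuming a 4-bit Unsloth checkpoint of Qwen2.5 32B Instruct and placeholder hyperparameters (the checkpoint name, LoRA settings, and training arguments are assumptions, not the exact values used here):

    from unsloth import FastLanguageModel
    from trl import SFTTrainer
    from transformers import TrainingArguments
    from datasets import load_dataset

    max_seq_length = 2048

    # Load the base model in 4-bit so a 32B model fits on a single RTX 4090.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="unsloth/Qwen2.5-32B-Instruct-bnb-4bit",  # assumed checkpoint name
        max_seq_length=max_seq_length,
        load_in_4bit=True,
    )

    # Attach LoRA adapters so only a small fraction of the weights is trained.
    model = FastLanguageModel.get_peft_model(
        model,
        r=16,
        lora_alpha=16,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                        "gate_proj", "up_proj", "down_proj"],
    )

    # Flatten each Alpaca record into one prompt string
    # (abbreviated here; the full template is shown under data.json below).
    def to_text(rec):
        return {"text": (
            f"### Instruction:\n{rec['instruction']}\n\n"
            f"### Input:\n{rec['input']}\n\n"
            f"### Response:\n{rec['output']}" + tokenizer.eos_token
        )}

    dataset = load_dataset("json", data_files="data.json", split="train").map(to_text)

    trainer = SFTTrainer(  # older trl-style signature, as used in the Unsloth notebooks
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=max_seq_length,
        args=TrainingArguments(
            per_device_train_batch_size=2,
            gradient_accumulation_steps=4,
            max_steps=60,
            learning_rate=2e-4,
            output_dir="outputs",
        ),
    )
    trainer.train()

    # Save the trained adapters to the lora_model folder in this repo.
    model.save_pretrained("lora_model")
    tokenizer.save_pretrained("lora_model")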

data.json

Alpaca-formatted JSON file used for fine-tuning.
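
Each entry in the file is an instruction/input/output record. The snippet below shows a made-up record and the standard Alpaca prompt template such records are rendered into before training (the field values are illustrative, not taken from the actual data):

    # One illustrative record from an Alpaca-formatted data file (values are made up).
    example_record = {
        "instruction": "Summarize the blog post below in two sentences.",
        "input": "Full text of one blog post goes here...",
        "output": "A two-sentence summary written in the blog's voice.",
    }

    # The standard Alpaca prompt template for records that include an input field.
    ALPACA_TEMPLATE = (
        "Below is an instruction that describes a task, paired with an input that "
        "provides further context. Write a response that appropriately completes "
        "the request.\n\n"
        "### Instruction:\n{instruction}\n\n"
        "### Input:\n{input}\n\n"
        "### Response:\n{output}"
    )

    print(ALPACA_TEMPLATE.format(**example_record))

The Unsloth notebooks also append the tokenizer's end-of-sequence token to each formatted record so generation terminates cleanly.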

lora_model folder

Contains the generated LoRA adapter files.
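
A minimal sketch of reloading these adapters for a quick local test, assuming the folder layout of this repo (the prompt and generation settings are illustrative):

    from unsloth import FastLanguageModel

    # Load the base model together with the saved LoRA adapters.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="lora_model",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference path

    prompt = "### Instruction:\nWrite a short intro for a blog post about WSL.\n\n### Response:\n"
    inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))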

model folder

Contains the generated fine-tuned model (a 4-bit quantized GGUF) and the Modelfile (needed to create the model in Ollama).
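
For completeness, a sketch of the export step that likely produced this folder, assuming Unsloth's GGUF export with a common 4-bit quantization method (whether the Modelfile here was generated by that call or written by hand is not stated in this repo):

    from unsloth import FastLanguageModel

    # Reload the fine-tuned adapters, then export a quantized GGUF into the model folder.
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name="lora_model",
        max_seq_length=2048,
        load_in_4bit=True,
    )
    model.save_pretrained_gguf(
        "model",                       # output folder holding the .gguf file
        tokenizer,
        quantization_method="q4_k_m",  # assumed 4-bit quantization variant
    )

Once exported, the model can be registered with Ollama using the Modelfile, e.g. ollama create <model-name> -f model/Modelfile, and then run with ollama run <model-name> (the model name is up to you).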

Model details

GGUF format, 32.8B parameters, qwen2 architecture, 4-bit quantization.
