gpt-oss-20b-local-test-small-shard - GGUF
This model was finetuned and converted to GGUF format using Unsloth.
Example usage:
- For text-only LLMs: `llama-cli --hf <repo_id>/<model_name> -p "why is the sky blue?"`
- For multimodal models: `llama-mtmd-cli -m <model_name>.gguf --mmproj <mmproj_file>.gguf`
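As a concrete sketch for this repository, the quantized file listed below can also be run directly from a local path once downloaded (the path is an assumption; point `-m` at wherever you saved the GGUF):

```sh
# Minimal local run with llama.cpp's CLI: -m takes the GGUF path, -p the prompt.
llama-cli -m ./gpt-oss-20b.MXFP4.gguf -p "why is the sky blue?"
```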
Available model files:
- gpt-oss-20b.MXFP4.gguf
Ollama
An Ollama Modelfile is included for easy deployment.
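The exact Modelfile contents depend on how the export was generated, but a minimal sketch of registering and running the model with Ollama looks like this (the model tag and file paths are assumptions; adjust them to your checkout):

```sh
# Register the GGUF with Ollama using the bundled Modelfile, then chat with it.
ollama create gpt-oss-20b-local-test -f ./Modelfile
ollama run gpt-oss-20b-local-test "why is the sky blue?"
```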