# PA-stage2-Qwen7B-147-GGUF

GGUF conversion of PA-stage2-Qwen7B-147 for use with llama.cpp and compatible runtimes.
## Model Details
- Format: GGUF
- Original Model: PA-stage2-Qwen7B-147
- Conversion: produced with llama.cpp's `convert_hf_to_gguf.py` (example command below)
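
For reference, a minimal sketch of the conversion step. The local source path and flag values are assumptions, not taken from this repo; adjust them to wherever the original PA-stage2-Qwen7B-147 checkpoint lives.

```bash
# Convert the original Hugging Face checkpoint to a GGUF file in F16 precision.
# The source directory path is a placeholder; --outfile matches the file shipped here.
python convert_hf_to_gguf.py ./PA-stage2-Qwen7B-147 \
  --outfile PA-stage2-Qwen7B-7.6B-147-F16.gguf \
  --outtype f16
```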
## Usage
```bash
# Download the GGUF file with huggingface-hub
huggingface-cli download Argonaut790/PA-stage2-Qwen7B-147-GGUF --local-dir ./model

# Run it with llama.cpp (newer llama.cpp builds name this binary llama-cli instead of main)
./main -m PA-stage2-Qwen7B-7.6B-147-F16.gguf -p "Your prompt here"
```
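You can also serve the model over HTTP with llama.cpp's built-in server. This is a sketch: the context size and port below are illustrative defaults, not values specified by this repository.

```bash
# Optional: expose the model through llama.cpp's llama-server.
# -c sets the context window; --port picks the local port to listen on.
./llama-server -m PA-stage2-Qwen7B-7.6B-147-F16.gguf -c 4096 --port 8080
```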
## Files
- `PA-stage2-Qwen7B-7.6B-147-F16.gguf`: main GGUF model file (F16, 16-bit precision)
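Only the F16 file is published. If you need a smaller file or lower RAM use, you can quantize it locally with llama.cpp's `llama-quantize` tool; the Q4_K_M type below is just one common choice, not something provided by this repo.

```bash
# Quantize the F16 GGUF down to Q4_K_M (smaller file, lower memory footprint).
# Any quantization type supported by llama-quantize can be used in place of Q4_K_M.
./llama-quantize PA-stage2-Qwen7B-7.6B-147-F16.gguf PA-stage2-Qwen7B-7.6B-147-Q4_K_M.gguf Q4_K_M
```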