---
library_name: gguf
tags:
- gguf
- quantized
- model conversion
---

# PA-stage2-Qwen7B-147-GGUF

GGUF conversion of PA-stage2-Qwen7B-147

## Model Details

- **Format**: GGUF
- **Original Model**: [Specify original model here]
- **Conversion**: Converted using llama.cpp's `convert_hf_to_gguf.py` (see the sketch below)

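The exact conversion command is not recorded in this card. A minimal sketch, assuming the original Hugging Face checkpoint is available locally in `./PA-stage2-Qwen7B-147` (the input path and output filename here are assumptions), might look like:

```bash
# Get llama.cpp for its conversion script and install the script's dependencies
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the Hugging Face checkpoint to a single F16 GGUF file
python llama.cpp/convert_hf_to_gguf.py ./PA-stage2-Qwen7B-147 \
  --outfile PA-stage2-Qwen7B-7.6B-147-F16.gguf \
  --outtype f16
```
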
## Usage

```bash
# Download with huggingface-hub
huggingface-cli download Argonaut790/PA-stage2-Qwen7B-147-GGUF --local-dir ./model

# Use with llama.cpp
./main -m ./model/PA-stage2-Qwen7B-7.6B-147-F16.gguf -p "Your prompt here"
```
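
Note that recent llama.cpp builds name the CLI binary `llama-cli` rather than `main`. They also ship `llama-server`, which exposes an OpenAI-compatible HTTP endpoint; a sketch, where the context size and port are example values:

```bash
# Serve the GGUF file over HTTP (context size and port are example values)
./llama-server -m ./model/PA-stage2-Qwen7B-7.6B-147-F16.gguf -c 4096 --port 8080
```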

## Files

- `PA-stage2-Qwen7B-7.6B-147-F16.gguf`: Main GGUF model file (F16 precision; see the quantization sketch below)
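
Only the F16 file is listed above. If a smaller file is needed, llama.cpp's `llama-quantize` tool can produce lower-bit variants from it; a minimal sketch, where the output filename and the Q4_K_M type are example choices:

```bash
# Quantize the F16 GGUF down to Q4_K_M (output name is an example)
./llama-quantize ./model/PA-stage2-Qwen7B-7.6B-147-F16.gguf \
  ./model/PA-stage2-Qwen7B-7.6B-147-Q4_K_M.gguf Q4_K_M
```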