---
library_name: gguf
tags:
- gguf
- quantized
- model conversion
---

# PA-stage2-Qwen7B-147-GGUF

This repository contains a GGUF conversion of PA-stage2-Qwen7B-147.

## Model Details
- **Format**: GGUF
- **Original Model**: [Specify original model here]
- **Conversion**: Converted using llama.cpp's `convert_hf_to_gguf.py` (see the sketch below)
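
A minimal sketch of the conversion step, assuming a local checkout of the original Hugging Face model in `./PA-stage2-Qwen7B-147` (the input directory and output filename here are illustrative, not taken from this repo):

```bash
# Clone llama.cpp, which ships the conversion script and its Python requirements
git clone https://github.com/ggerganov/llama.cpp
pip install -r llama.cpp/requirements.txt

# Convert the HF checkpoint to an F16 GGUF file
python llama.cpp/convert_hf_to_gguf.py ./PA-stage2-Qwen7B-147 \
  --outfile PA-stage2-Qwen7B-7.6B-147-F16.gguf \
  --outtype f16
```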

## Usage

```bash
# Download the model files with huggingface-cli (part of the huggingface_hub package)
huggingface-cli download Argonaut790/PA-stage2-Qwen7B-147-GGUF --local-dir ./model

# Run with llama.cpp (recent builds name the CLI binary llama-cli; older builds call it main)
./llama-cli -m ./model/PA-stage2-Qwen7B-7.6B-147-F16.gguf -p "Your prompt here"
```
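
The model can also be served over an HTTP API with llama.cpp's built-in server. A short sketch follows; the context size and port are illustrative defaults, not settings taken from this repo:

```bash
# Start an OpenAI-compatible HTTP server on port 8080
./llama-server -m ./model/PA-stage2-Qwen7B-7.6B-147-F16.gguf -c 4096 --port 8080

# Query it with curl
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```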

## Files
- `PA-stage2-Qwen7B-7.6B-147-F16.gguf`: F16 (half-precision) GGUF model file
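
If a smaller footprint is needed, the F16 file can be quantized further with llama.cpp's `llama-quantize` tool. A minimal sketch; the Q4_K_M target below is one common choice, not a file published in this repo:

```bash
# Produce a ~4-bit quantized variant from the F16 file
./llama-quantize PA-stage2-Qwen7B-7.6B-147-F16.gguf PA-stage2-Qwen7B-7.6B-147-Q4_K_M.gguf Q4_K_M
```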