---
library_name: gguf
tags:
- gguf
- quantized
- model conversion
---

# PA-stage2-Qwen7B-147-GGUF

GGUF conversion of PA-stage2-Qwen7B-147.

## Model Details

- **Format**: GGUF
- **Original Model**: [Specify original model here]
- **Conversion**: Converted using llama.cpp's `convert_hf_to_gguf.py`

## Usage

```bash
# Download with huggingface-hub
huggingface-cli download Argonaut790/PA-stage2-Qwen7B-147-GGUF --local-dir ./model

# Use with llama.cpp
./main -m PA-stage2-Qwen7B-7.6B-147-F16.gguf -p "Your prompt here"
```

## Files

- `PA-stage2-Qwen7B-7.6B-147-F16.gguf`: Main GGUF model file